GAMERANT.COM
Genshin Impact Officially Reveals Yumemizuki Mizuki for Version 5.4
Genshin Impact has officially unveiled Yumemizuki Mizuki as a 5-Star Anemo character from Inazuma, set to make her playable debut in Version 5.4. With Version 5.3 expected to wrap up the storyline in Natlan, Genshin Impact is preparing its players for a small adventure in Inazuma in the upcoming update. Its Flagship Event will take place in Inazuma and will largely revolve around the yokai, with Yae Miko taking center stage.
-
GAMERANT.COM
Best Character Archetypes In Fighting Games
Fighting games possess some of the most iconic and recognizable rosters of characters across the entire medium of video games. Fighting games allow players to express and represent themselves and their playstyle via diverse rosters of characters, each of which comes with its own strengths and weaknesses.
-
SMASHINGMAGAZINE.COM
What Does AI Really Mean?

In 2024, Artificial Intelligence (AI) hit the limelight with major advancements. The problem with reaching common knowledge and so much public attention so quickly is that the term becomes ambiguous. While we all have an approximation of what it means to use AI in something, it's not widely understood what infrastructure having AI in your project, product, or feature entails.

So, let's break down the concepts that make AI tick. How is data stored and correlated, and how are the relationships built so that an algorithm can learn how to interpret that data? As with most data-oriented architectures, it all starts with a database.

Data As Coordinates

Creating intelligence, whether artificial or natural, works in a very similar way. We store chunks of information, and we then connect them. Multiple visualization tools and metaphors show this in a 3-dimensional space with dots connected by lines on a graph. Those connections and their intersections are what make up intelligence. For example, we put together "chocolate is sweet and nice" and "drinking hot milk makes you warm", and we make hot chocolate.

We, as human beings, don't worry too much about making sure the connections land at the right point. Our brain just works that way, declaratively. However, for building AI, we need to be more explicit. So think of it as a map. In order for a plane to leave CountryA and arrive at CountryB, it requires a precise system: we have coordinates, we have 2 axes on our maps, and they can be represented as a vector: [28.3772, 81.5707].

For our intelligence, we need a more complex system; 2 dimensions will not suffice; we need thousands. That's what vector databases are. Our intelligence can now correlate terms based on the distance and/or angle between them, create cross-references, and establish patterns in which every term occurs.

A vector database is a specialized database that stores and manages data as high-dimensional vectors.
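To picture the jump from 2-dimensional map coordinates to the vectors these databases store, here is a minimal sketch; all numbers below are invented for illustration:

```python
import math

# A 2-dimensional vector is enough to locate a point on a map.
map_coordinates = [28.3772, 81.5707]  # latitude, longitude

# A vector database stores each entry as a much longer vector: hundreds or
# thousands of dimensions. Here is a toy 6-dimensional stand-in:
entry_vector = [0.12, -0.48, 0.33, 0.91, -0.05, 0.27]
print(len(map_coordinates), len(entry_vector))  # 2 vs. 6 dimensions

# Two toy "term" vectors (values invented for illustration). Their
# straight-line distance is one way to measure how related the terms are:
hot_milk = [0.9, 0.1, 0.8]
chocolate = [0.8, 0.2, 0.7]
distance = math.sqrt(sum((a - b) ** 2 for a, b in zip(hot_milk, chocolate)))
print(round(distance, 3))  # 0.173: the two points sit close together
```

Real entries have hundreds to thousands of dimensions, but the distance computation works the same way.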
Such a database enables efficient similarity searches and semantic matching.

Querying Per Approximation

As stated in the last section, matching the search terms (your prompt) to the data is the exercise of semantic matching (it establishes the pattern in which keywords in your prompt are used within its own data), while the similarity search measures the distance (angular or linear) between entries. That's actually a roughly accurate representation. What a similarity search does is treat each vector (one that's thousands of coordinates long) as a point in this weird multi-dimensional space. Finally, to establish similarity between these points, the distances and/or angles between them are measured.

This is one of the reasons why AI isn't deterministic (we also aren't): for the same prompt, the search may produce different outputs based on how the scores are defined at that moment. If you're building an AI system, there are algorithms you can use to establish how your data will be evaluated. This can produce more precise and accurate results depending on the type of data. There are 3 main algorithms, and each one of them performs better for a certain kind of data, so understanding the shape of the data and how each of these concepts correlates is important to choosing the correct one. In a very hand-wavy way, here's a rule of thumb for each:

Cosine Similarity: Measures the angle between vectors, so the magnitude (the actual numbers) matters less. It's great for text/semantic similarity.
Dot Product: Captures linear correlation and alignment. It's great for establishing relationships between multiple points/features.
Euclidean Distance: Calculates straight-line distance. It's good for dense numerical spaces since it highlights the spatial distance.

INFO: When working with non-structured data (like text entries: your tweets, a book, multiple recipes, your product's documentation), cosine similarity is the way to go.

Now that we understand how the data bulk is stored and the relationships are built, we can start talking about how the intelligence works. Let the training begin!

Language Models

A language model is a system trained to understand, predict, and finally generate human-like text by learning statistical patterns and relationships between words and phrases in large text datasets. For such a system, language is represented as probabilistic sequences.

In that way, a language model is immediately capable of efficient completion (hence the quote stating that 90% of the code at Google is written by AI auto-completion), translation, and conversation. Those tasks are the low-hanging fruit of AI because they depend on estimating the likelihood of word combinations and improve by reaffirming and adjusting the patterns based on usage feedback (rebalancing the similarity scores).

Now that we understand what a language model is, we can start classifying them as large and small.

Large Language Models (LLMs)

As the name says, LLMs use large-scale datasets, with billions of parameters, like up to 70 billion. This allows them to be diverse and capable of creating human-like text across different knowledge domains.

Think of them as big generalists. This makes them not only versatile but extremely powerful. And as a consequence, training them demands a lot of computational work.

Small Language Models (SLMs)

SLMs are trained on a smaller dataset, with numbers ranging from 100 million to 3 billion parameters. They take significantly less computational effort, which makes them less versatile and better suited for specific tasks with more defined constraints.
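To make the three similarity measures from the querying section concrete, here is a minimal pure-Python sketch of cosine similarity, dot product, and Euclidean distance; the 3-dimensional vectors are toy values, since real embeddings are far longer:

```python
import math

def dot(a, b):
    """Dot product: captures linear correlation and alignment."""
    return sum(x * y for x, y in zip(a, b))

def cosine_similarity(a, b):
    """Angle between vectors: magnitude matters less, good for text."""
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

def euclidean_distance(a, b):
    """Straight-line distance: good for dense numerical spaces."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Toy vectors: `b` points in the same direction as `a` but is twice as long.
a = [1.0, 2.0, 3.0]
b = [2.0, 4.0, 6.0]

print(round(cosine_similarity(a, b), 6))   # 1.0: identical direction
print(dot(a, b))                           # 28.0
print(round(euclidean_distance(a, b), 2))  # 3.74: yet far apart in space
```

Note how cosine similarity calls the two vectors a perfect match even though they sit far apart in space; that is the magnitude-versus-angle trade-off in the rules of thumb above.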
SLMs can also be deployed more efficiently and have faster inference when processing user input.

Fine-Tuning

Fine-tuning an LLM consists of adjusting the model's weights through additional specialized training on a specific (high-quality) dataset: basically, adapting a pre-trained model to perform better in a particular domain or task.

As training iterates through the heuristics within the model, it enables a more nuanced understanding. This leads to more accurate and context-specific outputs without creating a custom language model for each task. On each training iteration, developers tune the learning rate, weights, and batch size while providing a dataset tailored for that particular knowledge area. Of course, each iteration also depends on appropriately benchmarking the output performance of the model.

As mentioned above, fine-tuning is particularly useful for a determined task with a niche knowledge area, for example, creating summaries of nutritional scientific articles, correlating symptoms with a subset of possible conditions, etc.

Fine-tuning is not something that can be done frequently or fast, requiring numerous iterations, and it isn't intended for factual information, especially if that information depends on current events or streamed data.

Enhancing Context With Information

Most conversations we have are directly dependent on context; with AI, it isn't so much different. While there are definitely use cases that don't entirely depend on current events (translations, summarization, data analysis, etc.), many others do. However, it isn't quite feasible yet to have LLMs (or even SLMs) being trained on a daily basis.

For this, a new technique can help: Retrieval-Augmented Generation (RAG). It consists of injecting a smaller dataset into the LLM in order to provide it with more specific (and/or current) information.
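Stepping back to fine-tuning for a moment: the knobs mentioned there (learning rate, batch size, training iterations) can be illustrated with a toy gradient-descent loop. This is a drastic simplification, one weight instead of billions, but the tuning loop has the same shape:

```python
import random

random.seed(0)

# Toy training data for the relationship y = 3 * x.
data = [(x, 3.0 * x) for x in range(1, 6)]

# Toy "model": a single weight (real fine-tuning adjusts billions of them).
weight = 0.0
learning_rate = 0.01
batch_size = 4

for iteration in range(200):
    batch = random.sample(data, batch_size)  # mini-batch of examples
    # Gradient of the mean squared error with respect to the weight.
    grad = sum(2 * (weight * x - y) * x for x, y in batch) / batch_size
    weight -= learning_rate * grad  # adjust the weight by a small step

print(round(weight, 2))  # converges close to 3.0
```

A learning rate that is too large makes the loop overshoot and diverge; one that is too small makes it need far more iterations. That trade-off is exactly what developers benchmark between iterations.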
With RAG, the LLM isn't better trained; it still has all the generalist training it had before, but now, before it generates the output, it receives an ingest of new information to be used.

INFO: RAG enhances the LLM's context, providing it with a more comprehensive understanding of the topic.

For RAG to work well, data must be prepared/formatted in a way that the LLM can properly digest. Setting it up is a multi-step process:

Retrieval: Query external data (such as web pages, knowledge bases, and databases).
Pre-Processing: The information undergoes pre-processing, including tokenization, stemming, and removal of stop words.
Grounded Generation: The pre-processed retrieved information is then seamlessly incorporated into the pre-trained LLM.

RAG first retrieves relevant information from a database using a query generated by the LLM. Integrating RAG into an LLM enhances its context, providing it with a more comprehensive understanding of the topic. This augmented context enables the LLM to generate more precise, informative, and engaging responses.

Since it provides access to fresh information via easy-to-update database records, this approach is mostly for data-driven responses. Because this data is context-focused, it also provides more factual accuracy. Think of RAG as a tool to turn your LLM from a generalist into a specialist.

Enhancing an LLM's context through RAG is particularly useful for chatbots, assistants, agents, or other usages where the output quality is directly connected to domain knowledge. But while RAG is the strategy to collect and inject data into the language model's context, this data requires input, and that is why it also requires its meaning to be embedded.

Embedding

To make data digestible by the LLM, we need to capture each entry's semantic meaning so the language model can form the patterns and establish the relationships. This process is called embedding, and it works by creating a static vector representation of the data.
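Putting embedding and retrieval together, here is a toy sketch of the retrieval step of a RAG pipeline. The "embeddings" are tiny hand-made vectors rather than real model output, and the prompt template is invented for illustration:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

# Toy knowledge base: each entry pairs text with a hand-made 3-dimensional
# "embedding" (real embeddings come from a model and are far longer).
knowledge_base = [
    ("Hot chocolate is made by mixing cocoa into hot milk.", [0.9, 0.1, 0.2]),
    ("The plane departs from coordinates [28.3772, 81.5707].", [0.1, 0.9, 0.3]),
]

query_embedding = [0.8, 0.2, 0.1]  # stands in for the embedded user question

# Retrieval: rank entries by similarity to the query and keep the best one.
best_text, _ = max(knowledge_base,
                   key=lambda entry: cosine_similarity(query_embedding, entry[1]))

# Grounded generation: inject the retrieved text into the prompt for the LLM.
prompt = (f"Answer using this context:\n{best_text}\n\n"
          f"Question: How do I make hot chocolate?")
print(best_text)
```

The retrieved text is then placed in front of the model inside the prompt, which is the "grounded generation" step: the LLM answers with the injected context available.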
Different language models have different levels of embedding precision. For example, you can have embeddings from 384 dimensions all the way to 3072.

In other words, in comparison to our cartesian coordinates on a map (e.g., [28.3772, 81.5707]) with only two dimensions, an embedded entry for an LLM has from 384 to 3072 dimensions.

Let's Build

I hope this helped you better understand what those terms mean and the processes that the term AI encompasses. This merely scratches the surface of the complexity, though. We still need to talk about AI Agents and how all these approaches intertwine to create richer experiences. Perhaps we can do that in a later article; let me know in the comments if you'd like that!

Meanwhile, let me know your thoughts and what you build with this!

Further Reading on SmashingMag

Using AI For Neurodiversity And Building Inclusive Tools, by Pratik Joglekar
How To Design Effective Conversational AI Experiences: A Comprehensive Guide, by Yinjian Huang
When Words Cannot Describe: Designing For AI Beyond Conversational Interfaces, by Maximillian Piras
AI's Transformative Impact On Web Design: Supercharging Productivity Across The Industry, by Paul Boag
-
UXDESIGN.CC
Reclaiming your humanity, cognitive offloading, UX storytelling

Weekly curated resources for designers, thinkers, and makers.

We could argue all day about which things are fundamentally different in an AI-first world, but one undeniable difference is that the more intelligent technology gets, the fewer visible interfaces a human sees. (...) With each automated decision, we remove a choice, a slice of agency, from the user. As we do this more often, we begin to leave some of our personality behind. If the algorithms are an average of our calculated habits, then living on autopilot will leave us regressing toward the mean.

Reclaiming your humanity in an algorithmic world, by Joe Bernstein

Is your UX Research reaching its full potential in business decisions? [Sponsored] Join leading UX research experts as we explore the reasons why UX research is undervalued despite the high demand for it. We'll also look at practical ways to reframe research to tackle visibility issues and enhance its impact on the business.

Editor picks
What does it mean to have an experience? Different definitions shape product perspectives. By Roger Laureano
AI and cognitive offloading: Sharing the thinking process with machines. By Tetiana Sydorenko
Making product quality a team sport: Two intentional rituals you should try. By Aletheia Delivre

The UX Collective is an independent design publication that elevates unheard design voices and helps designers think more critically about their work. The challenge with design sprints, by Elan Miller

Make me think
Ethical web principles: These principles are not merely theoretical; they constitute a call to action. They encourage everyone involved in the web's evolution to assess their contributions' societal and environmental impacts. We can create a web that truly benefits everyone by adhering to these principles.
Resist summary: The output of Large Language Models (AI) is an aid to learning extremely simple things, and it is an impediment to learning anything complex or creative. I worry that without a complex model in one's own mind, one may never notice complex relationships that are otherwise missed. A loss of attention to detail.
The web is too big, or scaling down: Ultimately, though, the problem with this situation isn't that Mozilla or Firefox aren't good enough. It's that the web is too complicated. A browser is an extraordinarily complex piece of software.

Little gems this week
What we can learn from the 7 worst designs. By Ben
Redefining chatbot design in the age of AI. By Wojciech Wasilewski
How tabs changed the way we browse. By Elvis Hsiao

Tools and resources
Rewards strategies: How Duolingo, Nike, and Amazon use rewards to keep you hooked. By Angele Lenglemetz
Should AI write your alt text? It's time you write alt text on every meaningful image. By Allie Paschal
UX storytelling: Practical advice, not vague analogies, on how to use UX storytelling. By Kai Wong

Support the newsletter
If you find our content helpful, here's how you can support us:
Check out this week's sponsor to support their work too
Forward this email to a friend and invite them to subscribe
Sponsor an edition

"Reclaiming your humanity, cognitive offloading, UX storytelling" was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
-
UXDESIGN.CC
The obscure side of Honey
Deceptive design tricks turned a savings tool into a trust trap. Continue reading on UX Collective.
-
UXDESIGN.CC
2024 showed we've learned nothing from our past accessibility mistakes
The 2024 WebAIM Million results are in, and it's not good. Continue reading on UX Collective.
-
WWW.ENGADGET.COM
Elvie's newest product is a smart baby bouncer that transforms into a bassinet

Elvie, the company known for its popular wearable breast pump, is showing off a new piece of baby gear at CES. Called Rise, it's an app-controlled baby bouncer that can transform into a bassinet with a baby inside. The $799 device is made for infants in the first few months of their lives. Elvie says the bouncer works for babies up to 20 lbs or 6 months old, while the bassinet is meant for babies up to 5 months or 22 lbs.

While in bouncer mode, parents can customize the specific bounce pattern from the accompanying Elvie Rise Sleep & Soothe app. The company says its SootheLoop technology produces a gentle motion that's more like the movement of a caregiver than a repetitive robotic movement. There's also a manual mode for babies to bounce themselves as they grow a bit bigger and stronger.

In its press release, Elvie says its own study found that two-thirds of babies between 0 and 3 months often sleep in non-safe products like bouncers or swings. The Rise is meant to address this, as parents can switch from bouncer mode to bassinet mode without, hopefully, waking their child.

The Rise is equipped with a "transition handle" that allows parents to transition the device between modes. While in bouncer mode, this involves pushing on the bottom end, near the feet, and squeezing the handle to pull up the sides to form the walls of the bassinet. The straps from bouncer mode automatically retract to make it a surface suitable for sleeping. The company says the bassinet complies with the American Academy of Pediatrics (AAP)'s safe sleep guidelines, though babies should not be left in the bouncer unattended.

The device is also meant to be more portable than the typical bassinet. It collapses for easier transport and has a magnetic charger so it can be used even when it's not plugged in. The Elvie Rise is available now for pre-order. The company expects to begin shipping orders March 14, 2025.

This article originally appeared on Engadget at https://www.engadget.com/home/smart-home/elvies-newest-product-is-a-smart-baby-bouncer-that-transforms-into-a-bassinet-111550670.html?src=rss
-
WWW.ENGADGET.COM
The Espresso 15 Pro is a compact version of our favorite portable monitor

The Espresso 17 Pro is our favorite portable monitor. It delivers great image quality, has a rugged build, boasts built-in speakers and includes a touchscreen function. The only real trouble is that, with a 17-inch screen, it's perhaps not as truly portable as it could be. Enter the Espresso 15 Pro.

As you might have guessed, the latest model has a 15-inch display. This is the second Pro-level portable monitor from Espresso Displays. The company already has a 15-inch non-touch version, but as the name implies, this one's geared toward professionals and business travelers who could do with more on-the-go screen real estate.

The Espresso 15 Pro, which was unveiled at CES 2025, has a 4K resolution and 1,500:1 contrast. It's said to display 1.07 billion colors with full coverage of the AdobeRGB color spectrum. The LCD panel is actually brighter than the 17-inch model at 550 nits versus the larger monitor's 450 nits of peak brightness. It also has two USB-C inputs. On the downside, the refresh rate is limited to 60Hz.

Along with MacOS and Windows devices, the Espresso 15 Pro works with iPhones, iPads and DeX-enabled Samsung Galaxy devices. It's possible to use the Espresso Pen for notetaking on the touchscreen as well.

Elsewhere, the Espresso 15 Pro will come with the brand's new Stand+. The monitor magnetically attaches to the Stand+, which supports landscape and portrait orientations.

Pricing and availability for the Espresso 15 Pro have yet to be revealed, though it's slated to arrive in the coming months. Logic dictates that the price will fall somewhere between the $299 Display 15 and the $799 Espresso 17 Pro.

This article originally appeared on Engadget at https://www.engadget.com/computing/the-espresso-15-pro-is-a-compact-version-of-our-favorite-portable-monitor-105237176.html?src=rss
-
WWW.TECHRADAR.COM
Garmin Instinct 3 and 3S leaked, promising 'infinite' battery life, Garmin Pay, and more
The Garmin Instinct 3 has been leaked again by an online retailer.