• Discover the Origins of a Psychedelic Drug Synthesized by a Swiss Chemist Who Claimed It 'Found and Called Me'
    www.smithsonianmag.com
Albert Hofmann, the chemist who first synthesized LSD, as photographed in 1976. Noldi Kng / RDB / ullstein bild via Getty Images

When chemist Albert Hofmann moved to small, straitlaced Basel, Switzerland, in 1929, he had no intention of becoming the first person to synthesize, ingest and record his experiments with lysergic acid diethylamide, the psychedelic drug commonly known as LSD that became a countercultural touchstone in the following decades. "I did not choose LSD," he later recalled. "LSD found and called me."

Really, LSD was just an unremarkable side product of his main research: developing a respiratory and circulatory drug derived from ergot, a fungus found in rotting rye, for the pharmaceutical wing of the Swiss chemical company Sandoz.

In the Middle Ages, ergot was considered a poisonous scourge, causing spasms, gangrene and death if ingested in significant quantities through rye. St. Anthony's fire, as ergotism was then known, left millions of patients writhing with hallucinations and burning sensations; these dramatic and frightening symptoms even inspired accusations of witchcraft. But in small doses, ergot had useful medicinal qualities, including as an aid in childbirth and abortion. The trick for chemists was to chemically isolate and purify ergot's beneficial components while avoiding its deadly side effects.

Earlier research at the Rockefeller Institute in New York had isolated lysergic acid, a compound common to all ergot alkaloids. Hofmann's task was to combine lysergic acid with other compounds to stabilize it and create an analeptic drug that would improve respiration and blood circulation. He would then test each compound on animals and record the effects.

This circa-1501 painting by Hieronymus Bosch is titled Triptych of the Temptation of St. Anthony. The center panel contains several references to ergotism, also known as St. Anthony's fire. Public domain via Wikimedia Commons

On November 16, 1938, the 32-year-old chemist tested the 25th combination, an amalgam of lysergic acid and diethylamine, an ammonia derivative. He called it LSD-25 for short. The compound made the test animals restless and twitchy, but Hofmann and other researchers noted nothing else out of the ordinary. "The new substance aroused no special interest in our pharmacologists and physicians," Hofmann later wrote. "Testing was therefore discontinued."

That was that, and for five years, LSD-25 was left "on the ash heap of pharmaceutical history," as journalist Tom Shroder wrote in Acid Test. In the meantime, Hofmann synthesized and explored other successful lysergic acid compounds, eventually creating Hydergine, a circulatory drug that increases blood flow to the brain.

But something about those experiments and the animals' odd reactions to LSD-25 stuck with Hofmann. In 1943, he decided to synthesize it again. As he worked, he was interrupted by strange sensations. "I was forced to interrupt my work in the laboratory in the middle of the afternoon and proceed home, being affected by a remarkable restlessness, combined with a slight dizziness," Hofmann later wrote in a note to his supervisor. "At home I lay down and sank into a not unpleasant intoxicated-like condition, characterized by an extremely stimulated imagination." As he entered a dreamlike state, he continued, "I perceived an uninterrupted stream of fantastic pictures, extraordinary shapes with intense, kaleidoscopic play of colors. After some two hours this condition faded away."

Hofmann had unknowingly absorbed a small amount of LSD-25 through the skin of his fingertips, experiencing the first acid trip in human history. Three days later, he began intentional self-experimentation with LSD, telling only his lab assistant about his unofficial research into the compound's psychedelic effects. Although he continued to work with LSD in the lab, testing it on chimpanzees and small fish, he also experimented privately and in the company of friends.

Never, though, did Hofmann predict that LSD would reach the countercultural status and widespread usage it has today. It was no accident that Hofmann synthesized LSD on this day in 1938, but the global effects it would have on art, psychiatry and culture were thoroughly beyond his fingertips.
  • Our brains are vector databases: here's why that's helpful when using AI
    venturebeat.com
In 2017, a breakthrough at Google transformed how machines understand language: the self-attention mechanism. This innovation allowed AI to grasp context and meaning in human communication by treating words as mathematical vectors, precise numerical representations that capture relationships between ideas. Today, this vector-based approach has evolved into sophisticated vector databases, systems that mirror how our own brains process and retrieve information. This convergence of human cognition and AI technology isn't just changing how machines work; it's redefining how we need to communicate with them.

How our brains already think in vectors

Think of vectors as GPS coordinates for ideas. Just as GPS uses numbers to locate places, vector databases use mathematical coordinates to map concepts, meanings and relationships. When you search a vector database, you're not just looking for exact matches; you're finding patterns and relationships, just as your brain does when recalling a memory. Remember searching for your lost car keys? Your brain didn't methodically scan every room; it quickly accessed relevant memories based on context and similarity. This is exactly how vector databases work.

The three core skills, evolved

To thrive in this AI-augmented future, we need to evolve what I call the three core skills: reading, writing and querying. While these may sound familiar, their application in AI communication requires a fundamental shift in how we use them. Reading becomes about understanding both human and machine context. Writing transforms into precise, structured communication that machines can process.
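The similarity-based lookup described above can be sketched in a few lines. The tiny hand-made "embeddings" below are invented purely for illustration; real vector databases use learned embeddings with hundreds or thousands of dimensions, but the ranking principle is the same:

```python
# Toy sketch of vector-database-style retrieval: represent items as vectors,
# then rank by cosine similarity instead of exact keyword match.
import math

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, ~0 for unrelated ones."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hand-made 3-dimensional "embeddings" (invented for this example).
docs = {
    "car keys on the kitchen counter": [0.9, 0.1, 0.0],
    "quarterly revenue report":        [0.0, 0.2, 0.9],
    "house keys by the front door":    [0.8, 0.3, 0.1],
}

def search(query_vec, k=2):
    """Return the k documents whose vectors point most nearly the same way."""
    return sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)[:k]

# A query like "where are my keys?" would embed near the key-related notes,
# so both key documents outrank the unrelated financial one.
results = search([0.85, 0.2, 0.05])
```

The point of the sketch is the car-keys behavior from the article: nothing in the query has to match a stored string exactly; proximity in the vector space does the recall.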
And querying, perhaps the most crucial new skill, involves learning to navigate vast networks of vector-based information in ways that combine human intuition with machine efficiency.

Mastering vector communication

Consider an accountant facing a complex financial discrepancy. Traditionally, they'd rely on their experience and manual searches through documentation. In our AI-augmented future, they'll use vector-based systems that work like an extension of their professional intuition. As they describe the issue, the AI doesn't just search for keywords; it understands the problem's context, pulling from a vast network of interconnected financial concepts, regulations and past cases. The key is learning to communicate with these systems in a way that leverages both human expertise and AI's pattern-recognition capabilities.

But mastering these evolved skills isn't about learning new software or memorizing prompt templates. It's about understanding how information connects and relates, thinking in vectors, just as our brains naturally do. When you describe a concept to AI, you're not just sharing words; you're helping it navigate a vast map of meaning. The better you understand how these connections work, the more effectively you can guide AI systems to the insights you need.

Taking action: Developing your core skills for AI

Ready to prepare yourself for the AI-augmented future? Here are concrete steps you can take to develop each of the three core skills.

Strengthen your reading

Reading in the AI age requires more than comprehension; it demands the ability to quickly process and synthesize complex information. To improve:

- Study two new words daily from technical documentation or AI research papers. Write them down and practice using them in different contexts. This builds the vocabulary needed to communicate effectively with AI systems.
- Read at least two to three pages of AI-related content daily. Focus on technical blogs, research summaries or industry publications.
The goal isn't just consumption but developing the ability to extract patterns and relationships from technical content.
- Practice reading documentation from major AI platforms. Understanding how different AI systems are described and explained will help you grasp their capabilities and limitations.

Evolve your writing

Writing for AI requires precision and structure. Your goal is to communicate in a way that machines can accurately interpret.

- Study grammar and syntax intentionally. AI language models are built on patterns, so understanding how to structure your writing will help you craft more effective prompts.
- Practice writing prompts daily. Create three new ones each day, then analyze and refine them. Pay attention to how slight changes in structure and word choice affect AI responses.
- Learn to write with query elements in mind. Incorporate database-like thinking into your writing by being specific about what information you're requesting and how you want it organized.

Master querying

Querying is perhaps the most crucial new skill for AI interaction. It's about learning to ask questions in ways that leverage AI's capabilities:

- Practice writing search queries for traditional search engines. Start with simple searches, then gradually make them more complex and specific. This builds the foundation for AI prompting.
- Study basic SQL concepts and database query structures. Understanding how databases organize and retrieve information will help you think more systematically about information retrieval.
- Experiment with different query formats in AI tools. Test how various phrasings and structures affect your results. Document what works best for different types of requests.

The future of human-AI collaboration

The parallels between human memory and vector databases go deeper than simple retrieval. Both excel at compression, reducing complex information into manageable patterns. Both organize information hierarchically, from specific instances to general concepts.
And both excel at finding similarities and patterns that might not be obvious at first glance.

This isn't just about professional efficiency; it's about preparing for a fundamental shift in how we interact with information and technology. Just as literacy transformed human society, these evolved communication skills will be essential for full participation in the AI-augmented economy. But unlike previous technological revolutions that sometimes replaced human capabilities, this one is about enhancement. Vector databases and AI systems, no matter how advanced, lack the uniquely human qualities of creativity, intuition and emotional intelligence.

The future belongs to those who understand how to think and communicate in vectors, not to replace human thinking but to enhance it. Just as vector databases combine precise mathematical representation with intuitive pattern matching, successful professionals will blend human creativity with AI's analytical power. This isn't about competing with AI or simply learning new tools; it's about evolving our fundamental communication skills to work in harmony with these new cognitive technologies.

As we enter this new era of human-AI collaboration, our goal isn't to out-compute AI but to complement it. The transformation begins not with mastering new software, but with understanding how to translate human insight into the language of vectors and patterns that AI systems understand. By embracing this evolution in how we communicate and process information, we can create a future where technology enhances rather than replaces human capabilities, leading to unprecedented levels of creativity, problem-solving and innovation.

Khufere Qhamata is a research analyst, author of Humanless Work: How AI Will Transform, Destroy and Change Life Forever and the founder of Qatafa AI.
  • EA's four evergreen principles to keep your service games fresh
    www.gamesindustry.biz
EA's four evergreen principles to keep your service games fresh

Head of operations Arjun Balaram encouraged India GDC attendees to be flexible with how they update their titles. Image credit: Electronic Arts/PopCap Games. By James Batchelor, editor-in-chief. Published on Nov. 15, 2024.

There are many considerations to factor in when planning how to update and expand a live service title, but chief among them is what players respond to best. That's according to Electronic Arts' head of operations Arjun Balaram, who delivered a presentation at India Game Developers Conference in Hyderabad this week on the evergreen tenets of developing live service games.

Drawing on his extensive experience with such titles, particularly on mobile, Balaram said there are three lenses through which every update must be viewed:

- Players: How the new content will encourage them to keep engaging with the game
- People: How your team is kept informed of the goal and business context of each update
- Process: How you create each addition in as efficient a manner as possible

Balaram kicked off with the player-centric considerations and the four key principles behind them, which we present below.

1. Be ready to change, no matter how much work you have put in

Balaram emphasised the need for developers and publishers to stay flexible, using the development of Bejeweled Blitz as an example. The PopCap Games team behind the match-three game spent months creating a new feature called 'Encore,' which allowed players to spend coins to boost their score. It was presented alongside other options via a pop-up that appeared at the end of each match. PopCap spent weeks fine-tuning the UI, but focus groups revealed it hadn't worked out as they hoped.

"What we found was that, after all the effort we put in for months and weeks, the thing that players did most was they just clicked the 'X' [to close the pop-up]," Balaram explained. "They didn't go through the other stuff; the only thing they cared about was the score boost. So we redesigned it to really emphasise that all the other stuff was not relevant. That score boost was the motivation for playing, and redesigning this really helped."

2. Give the players what they want

When Plants vs Zombies 2 was first developed, Electronic Arts planned to expand it every two to three months with a new 'world' that would be sold for $5. In addition to new levels, each world came with a set of 20 to 30 new plants players could deploy, as well as additional zombies to face and different challenges related to these additions. Balaram reported that each world would take three months to build, and the team soon settled into a cyclical process of updating the game. But as they paid closer attention to how players were interacting with the new content, the developers realised one particular feature was getting the most traction.

"Players really enjoyed the new plants and the quirkiness in them," Balaram said. "So we shifted to making premium plants, and instead of all the effort going into this world and so many different characters, every month we would have a plant we would sell for the same $5, and that worked well. We'd then have a system where there were quests and events associated with the plant, and that worked well too. So we went from this heavy content treadmill to [focusing on] what the players really cared about: having these new plants that they could play with."

3. Ritualise your updates

A regular cadence of updates, and even notifications that build anticipation for those updates, can really help keep your players engaged. Live service games thrive when they become part of players' regular routine, when players know what to expect and can plan for new additions. This can work on a mid-term basis, such as the two-to-three-month expansions mentioned above, or even on a daily basis. Balaram gave another example from Plants vs Zombies 2, whereby a timer would pop up at the top of the screen to let players know when the next daily quest would kick off.

"The simple thing of adding this timer basically stabilised our DAU," he said. "And we would see our DAU spike when the new quest was there, so that was pretty cool."

4. Players care more about what they get than how you make it

Balaram said that live service titles should be designed with expandability in mind from the outset, but that your strategy should adapt around how players engage with those expansions. When designing new additions, it can be easy to get caught up in what the team thinks will improve the game. Equally, it can be tempting to share more and more detail about what you're working on and how you're making it as your community grows.

However, the EA exec said that in his experience players are primarily concerned with what he described as "the three hows." In order of importance to your audience, these are 'how often', 'how much', and 'how'.

"Players care most about how often you're engaging them," he said. "What comes next, which they care slightly less about, is how much new stuff you're providing them. What they least care about is how you're actually doing it, what's under the hood, and so on. Is it a reskin? Is it something built from scratch? Is it a client update or a server-side push? They don't care; they just want something new and interesting to engage with."

Above all, Balaram urged developers and publishers to pay attention to player feedback, and to adapt their strategy and development processes accordingly: "Remember, the game you make is only as good as players say it is. That's all that matters."
  • Netflix served the Tyson vs. Paul fight to 60 million households
    www.theverge.com
Netflix peaked at 65 million concurrent streams during the boxing match between Mike Tyson and Jake Paul last night, according to Most Valuable Promotions, the promoter for the fight. Those streams went out to 60 million households globally, the group said in a press release shared with The Verge via email. That's more than twice the audience Netflix would see for its Christmas Day NFL stream this year if everyone who watched last year's games streamed it.

The crush of people trying to watch Tyson vs. Paul seemed to be more than Netflix's servers could easily handle, as the social web was awash with complaints about the quality of the stream, which many found to be muddy or plagued with buffering and dropped connections. Downdetector recorded more than 100,000 complaints of Netflix streaming issues during the event, according to Bloomberg.

That's also just a massive number of people streaming a single live event at the same time. For comparison, Disney served 59 million concurrent streams of a World Cup cricket match through its Disney Plus Hotstar service last year; it hit similar numbers a few days earlier, and again in June this year. Netflix CTO Elizabeth Stone told employees that the company dealt with this unprecedented scale by prioritizing keeping the stream stable for the majority of viewers, according to Bloomberg's Mark Gurman. "We don't want to dismiss the poor experience of some members, and know we have room for improvement, but still consider this event a huge success," Stone reportedly wrote.

Update, November 16th: Added Disney Plus Hotstar streaming numbers for additional context.
  • The FTC says spam call complaints are way down since 2021
    www.theverge.com
Complaints about unwanted telemarketing calls have dropped for the third straight year, the Federal Trade Commission announced yesterday. Reports of such calls have fallen by over 50 percent since 2021, according to the FTC, a decline that could be thanks in no small part to stepped-up government efforts to fight irritating telemarketing and phone scams.

There were about 33,000 fewer unwanted-call complaints during the 2024 fiscal year versus the year prior, writes the FTC. The drop affected all sorts of unwanted calls, although the agency notes that reports about debt reduction calls jumped more than 85 percent from last year.

FTC Bureau of Consumer Protection director Sam Levine said that while illegal calls are still a scourge, he credited the reduction in complaints to the FTC's strategy of pursuing upstream players and equipping the agency to confront emerging threats.

As for what the FTC has actually been doing, it points to its crackdowns on illegal telemarketing last year and its rules banning the impersonation of governments or businesses. The agency also cites its Telemarketing Sales Rule (TSR), which places a number of restrictions on telemarketers, such as when they can make their calls, and its clarification that the TSR applies to scam calls that use AI, too. Last year, the FTC banned those extended-vehicle-warranty scams that many of us are familiar with; that ban followed the Federal Communications Commission's proposal of a $300 million fine over one such campaign.

The FCC has played a part in other ways, too. The biggest US mobile carriers started complying with a new anti-spoofing protocol aimed at verifying that the phone number shown on consumers' caller ID was actually the one that called them. The FCC also banned AI-generated robocalls, has cracked down on robocallers and robotexters that don't let people opt out, and requires cell carriers to block likely-illegal robotexts.
  • Xogot: Godot on iPad
    gamefromscratch.com
Xogot is a new project from Miguel de Icaza (the creator of Mono, among other projects) building upon his work bringing Swift to the Godot game engine. Xogot is now available for testing via TestFlight, and it's a very good implementation of the Godot game engine on iPad devices, with a user interface optimized for mobile usage.

Details from the official blog:

"It has been almost a year since we started to explore what it would take to bring Godot to the iPad as a bonafide iPadOS application. We are now ready for folks interested in testing it and finding places where we fall short of the expectations of iPadOS users and aspiring game developers to take it for a spin and help us identify the bumps in the road.

We have prepared some documents for you:

- Goals for the TestFlight preview and signup form
- Known issues in this release
- Getting started guide: covers how to start new projects or bring existing projects onto the iPad

Once you have signed up, you can chat with us in the Xogot Discord Server."

You will need an iPad running the most current version of iPadOS (currently iPadOS 18) to install the preview release. You can check out Xogot, Godot on iPad, in action in the video below.
  • List of Large Mixture of Experts (MoE) Models: Architecture, Performance, and Innovations in Scalable AI Solutions
    www.marktechpost.com
Mixture of Experts (MoE) models represent a significant development in machine learning, offering an efficient approach to handling large-scale models. Unlike dense models, where all parameters are active during inference, MoE models activate only a fraction of their parameters. This approach balances computational efficiency with scalability, making MoE models highly attractive for various use cases. MoE models achieve efficiency by activating fewer parameters while maintaining a larger total parameter count. This design introduces unique trade-offs, including increased architectural complexity, but it provides greater flexibility for developers and researchers.

Let's explore the largest MoE models released to date, focusing on their architecture, capabilities and relative performance. These models are all publicly available and exceed 100 billion parameters. The analysis is ordered chronologically by release date, with rankings provided where available from the LMSYS leaderboard as of November 4, 2024.

Google's Switch-C Transformer is one of the earliest models in the MoE space. Released on Hugging Face in November 2022, it boasts a staggering 1.6 trillion total parameters, supported by 2,048 experts. Despite being an early innovator in this domain, Switch-C is now considered outdated, as it is not ranked on modern benchmarks like LMSYS. However, it remains noteworthy as a foundational MoE model and continues to influence subsequent innovations. Smaller variants of the Switch-C Transformer are also available, offering more accessible entry points for experimentation.

In March 2024, xAI released Grok-1, a model with 314 billion total parameters and 86 billion active during inference. Unlike its predecessor, Grok-1 utilizes a smaller pool of experts, eight in total, with only two active per inference task. Its 8k context length is suitable for moderately long input sequences, though it is not competitive with newer models. While Grok-1 has limited adoption and is not ranked on LMSYS, its successor, Grok-2, has shown promise in preliminary benchmarks. Grok-2, yet to be publicly released, has ranked fifth overall in specific LMSYS tasks, suggesting that future iterations of this model could redefine performance benchmarks in the MoE landscape.

Shortly after Grok-1, Databricks released DBRX in late March 2024. This model features 132 billion total parameters, with 36 billion active, spread across 16 experts. Its 32k context length significantly outpaces many contemporaries, allowing it to process longer input sequences efficiently. DBRX is supported by multiple backends, including llama.cpp, ExLlamaV2 and vLLM, making it a versatile choice for developers. Despite its strong architecture, its LMSYS rankings place it only at 90th overall and 78th for hard prompts in English, indicating room for improvement in quality and adoption.

April 2024 saw the release of Mistral AI's Mixtral 8x22B. This model stands out with its 141 billion total parameters and 39 billion active during inference. It incorporates eight experts, two of which are chosen dynamically based on the input. With a 64k context length, Mixtral is well suited for tasks requiring extensive input handling. While its LMSYS rankings, 70th overall and 66th on hard prompts, indicate middling performance, its compatibility with multiple backends ensures usability across diverse platforms.

Another April release was Snowflake's Arctic, an MoE model with 480 billion total parameters but only 17 billion active during inference. Arctic's unique design combines sparse (7 billion) and dense (10 billion) components distributed among 128 experts. However, its performance falls short, ranking 99th overall on LMSYS and a notably low 101st for hard prompts. Its limited 4k context length further restricts its applicability, making it a less competitive option despite its innovative architecture.

Skywork joined the MoE space in June 2024 with the release of Skywork-MoE. This model features 146 billion total parameters, of which 22 billion are active, across 16 experts. With an 8k context length, it supports moderately lengthy tasks but lacks LMSYS rankings, which suggests limited testing or adoption. The base model is the only available version, as the promised chat variant has yet to be released.

In August 2024, AI21 Labs released Jamba 1.5 Large, a hybrid model that merges MoE and Mamba-transformer architectures. With 398 billion total parameters and 98 billion active, Jamba 1.5 Large offers an exceptional 256k context length, making it ideal for tasks requiring extensive input processing. Its LMSYS rankings reflect its high performance, placing 34th overall and 28th for hard prompts. Additionally, Jamba models excel in long-context evaluations, particularly the RULER benchmark, solidifying their reputation for long-context tasks.

DeepSeek V2.5, released in September 2024, currently leads the MoE space in performance. This model incorporates 236 billion total parameters, with 21 billion active during inference. Its architecture includes 160 experts, of which six are dynamically chosen and two are shared, resulting in eight active experts per token. With a 128k context length, DeepSeek V2.5 demonstrates robust capabilities for long-context tasks. It ranks 18th overall on LMSYS and 6th for hard prompts, outperforming all available MoE models. Earlier iterations, such as DeepSeek V2, laid the groundwork for its success.

The most recent addition to the MoE family is Tencent's Hunyuan Large, released in November 2024. With 389 billion total parameters and 52 billion active, Hunyuan Large employs a unique design where one expert is chosen dynamically and one is shared, resulting in two active experts per token. Its 128k context length matches that of DeepSeek V2.5, positioning it as a strong competitor. While it is not yet ranked on LMSYS, early indications suggest it could rival or surpass DeepSeek's performance.

Among the MoE models discussed, DeepSeek V2.5 is the most robust option currently available. However, newer models such as Hunyuan Large and the anticipated Grok-2 may soon shift the rankings. Models like Jamba 1.5 Large also highlight the strengths of hybrid architectures, particularly in tasks requiring extensive context handling. The LMSYS rankings, while useful for initial comparisons, do not capture every nuance of model performance, especially for specialized tasks.

In conclusion, MoE models represent a growing frontier in AI, offering scalable and efficient solutions tailored to diverse applications. Developers and researchers are encouraged to explore these models based on specific use cases, leveraging their unique architectures to optimize performance. As the field evolves, the MoE landscape will likely witness further innovations, pushing the boundaries of what these architectures can achieve.

This article is based on a Reddit post; all credit for this research goes to the researchers of that project.

Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence Media Platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience.
The platform boasts over 2 million monthly views, illustrating its popularity among audiences.
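The sparse activation shared by every model in the survey, where a gate scores all experts but only the top few run per token, can be sketched with a toy top-k router. Everything below (the gate weights, the scalar "experts," the input) is invented for illustration and is not any listed model's actual code:

```python
# Toy sketch of top-k expert routing in a Mixture-of-Experts layer.
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(x, experts, gate_weights, k=2):
    """Score every expert with a linear gate, run only the top-k,
    and mix their outputs by the renormalized gate probabilities."""
    scores = [sum(w * xi for w, xi in zip(ws, x)) for ws in gate_weights]
    top = sorted(range(len(experts)), key=lambda i: scores[i], reverse=True)[:k]
    probs = softmax([scores[i] for i in top])  # renormalize over chosen experts only
    return sum(p * experts[i](x) for p, i in zip(probs, top))

# Eight trivial "experts" (expert s just scales the input sum by s);
# only two of them execute per input, which is the point of sparse MoE.
experts = [lambda x, s=s: s * sum(x) for s in range(1, 9)]
gate_weights = [[0.10 * s, 0.05 * s] for s in range(1, 9)]
y = moe_forward([1.0, 2.0], experts, gate_weights, k=2)
# y is a blend of the two highest-scoring experts' outputs
```

This is why a model like Mixtral 8x22B can hold 141 billion parameters yet use only 39 billion per token: the parameters of the unchosen experts are simply never touched for that input.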
  • Web-LLM Assistant: Bridging Local AI Models With Real-Time Web Intelligence
    towardsai.net
    Web-LLM Assistant: Bridging Local AI Models With Real-Time Web Intelligence. November 16, 2024. Author(s): Isuru Lakshan Ekanayaka. Originally published on Towards AI.

In the dynamic realm of artificial intelligence, the ability to access and synthesize real-time information is paramount. Traditional large language models (LLMs) like ChatGPT excel at generating human-like text based on extensive training data. However, their knowledge is static, often limited to information available up to their last update. Enter Web-LLM Assistant, an innovative open-source project designed to overcome this limitation by integrating local LLMs with real-time web searching capabilities. This comprehensive guide delves into the functionalities, installation process, and practical demonstrations of Web-LLM Assistant, inspired by its GitHub repository.

Contents: Introduction · What is Web-LLM Assistant? · Key Features · Installation Guide · Usage Instructions · Demonstration Walkthrough · Configuration Options · Dependencies · Contributing to Web-LLM Assistant · License · Acknowledgments · Disclaimer · Personal Journey Behind Web-LLM Assistant · Conclusion

As the AI landscape continues to evolve, the demand for models that can provide up-to-date information grows. Web-LLM Assistant is a pioneering project that addresses this need by combining the strengths of local LLMs with the vast, ever-changing data available on the web. Whether you're a developer looking to integrate intelligent search capabilities into your applications or an AI enthusiast eager to explore cutting-edge technologies, Web-LLM Assistant offers a versatile and powerful solution. Web-LLM Assistant is a sophisticated web search assistant that leverages … Read the full blog for free on Medium. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor. Published via Towards AI.
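The core pattern the article describes, folding fresh web-search snippets into a prompt so a static local LLM can answer with current information, can be sketched in a few lines. This is a generic illustration, not code from the Web-LLM Assistant repository; the function name and prompt wording are my own assumptions.

```python
# Minimal sketch of retrieval-augmented prompting for a local LLM.
# Search snippets are numbered and packed into the prompt up to a
# character budget, so the model can cite them and stay within context.

def build_augmented_prompt(question: str, snippets: list[str], max_chars: int = 2000) -> str:
    """Combine fresh web snippets with the user's question into one prompt."""
    context_parts: list[str] = []
    used = 0
    for i, snippet in enumerate(snippets, start=1):
        entry = f"[{i}] {snippet.strip()}"
        if used + len(entry) > max_chars:  # stay within the model's context budget
            break
        context_parts.append(entry)
        used += len(entry)
    context = "\n".join(context_parts)
    return (
        "Answer the question using ONLY the web results below. "
        "Cite results by their [number].\n\n"
        f"Web results:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

if __name__ == "__main__":
    prompt = build_augmented_prompt(
        "What happened this week?",
        ["Result A was announced on Nov 6.", "Turnout was high."],
    )
    print(prompt)
```

The resulting string would then be sent to whatever local inference server you run (for example a llama.cpp or Ollama-style endpoint); that transport step is deliberately left out here since it depends on your setup.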
  • Cubic Roots-Fit a Quadratic Between a Turning Point And Midpoint!
    towardsai.net
    Latest · Machine Learning. Cubic Roots-Fit a Quadratic Between a Turning Point And Midpoint! November 16, 2024. Author(s): Greg Oliver. Originally published on Towards AI.

A Root Approximation Tool Kit: Mixing and Matching Polynomial Architectures.

This post presents a novel Cubic-Quadratic function matchup for finding Cubic roots. It exploits the little-publicised fact that the Midpoint between two adjacent roots of a reduced Cubic, when multiplied by -2, gives us the third root! This follows because the sum of the roots equals -B, the coefficient of x²; for a reduced Cubic, B = 0, so the roots sum to zero.

Besides being graphically intuitive, the adopted Quadratic function greatly simplifies Cubic function redesign with varying Constants D, because it is a lot easier to find Quadratic roots with changing Constants c than Cubic roots with changing Constants D. This post assumes math at the year-12 level.

Before doing a couple of examples, let's do a brief recap on generic Cubic architecture.

Cubic Architecture Recap

The header graph shows the reduced Cubic y = Ax³ + Cx + D and its generic dimensions shown in black. It is rotationally symmetrical about its Inflection Point Ip(0, D) (imaginary propeller shaft!). Its intercepts with the horizontal line y = Ip(y) = D are:

Int A(x) = -√(-C/A) and Int B(x) = +√(-C/A), with Midpoints between each intercept and Ip at -√(-C/4A) and +√(-C/4A) (not shown).

Turning Points: Tp(x) = ±√(-C/3A).

Roots Rt 1, Rt 2 with Root Midpoint; Mid Point (Rt … Read the full blog for free on Medium. Published via Towards AI.
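The post's central identity, that -2 times the midpoint of two roots of a reduced cubic gives the third root, is quick to verify numerically: with no x² term the roots sum to zero, so r3 = -(r1 + r2) = -2 · midpoint(r1, r2). A short sketch (function names are mine, not the author's):

```python
# For a reduced cubic x**3 + C*x + D the coefficient of x**2 is 0,
# so the three roots sum to zero and r3 = -(r1 + r2) = -2 * midpoint(r1, r2).

def third_root_from_midpoint(r1: float, r2: float) -> float:
    """Third root of a reduced cubic given two known roots."""
    midpoint = (r1 + r2) / 2.0
    return -2.0 * midpoint

def reduced_cubic_from_roots(r1: float, r2: float, r3: float):
    """Return (C, D) for x**3 + C*x + D with the given roots (must sum to 0)."""
    assert abs(r1 + r2 + r3) < 1e-9
    C = r1 * r2 + r1 * r3 + r2 * r3   # sum of pairwise products of roots
    D = -r1 * r2 * r3                 # negated product of roots
    return C, D

if __name__ == "__main__":
    r1, r2 = 1.0, 2.0
    r3 = third_root_from_midpoint(r1, r2)        # -3.0
    C, D = reduced_cubic_from_roots(r1, r2, r3)
    # Confirm r3 really is a root of x**3 + C*x + D (evaluates to 0).
    print(r3, r3**3 + C * r3 + D)
```

With roots 1 and 2 the midpoint is 1.5, so the third root is -3, and substituting it back into x³ - 7x + 6 gives zero, confirming the identity.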
  • Daily Deals: 65" Samsung S90C 4K OLED TV, ROG Ally, Metroid Dread, and More
    www.ign.com
    The weekend is officially here, and we've rounded up the best deals you can find! Discover the best deals for Sunday, November 10, below:

65" Samsung S90C 4K OLED Smart TV for $999.99
The Samsung S90C is a 2023 model (superseded by the S90D for 2024) and was considered one of the best OLED TVs on the market last year, superior to even the LG C3. The S90C uses Samsung's proprietary quantum dot (QD) OLED panel. QD OLED panels are brighter than traditional OLED panels without losing the color accuracy, range, and wide viewing angles that OLEDs are known for. Compared to a traditional LED LCD TV, an OLED TV offers superior image quality, near-infinite blacks, a near-infinite contrast ratio, and near-instantaneous response times.

Mario & Luigi: Brothership for $49.99
Mario & Luigi: Brothership is the first Mario & Luigi title on Nintendo Switch, acting as the first new entry in the series in over nine years. Developed by Acquire, this is the first 3D entry in the series, with plenty of new mechanics to discover. Join Mario and Luigi on this adventure to reconnect the world of Concordia and set sail to many islands from Shipshape Island!

Save on the ROG Ally at Best Buy: $449.99 for My Best Buy Plus Members
This weekend at Best Buy, you can save on the Asus ROG Ally AMD Ryzen Z1 Extreme 512GB gaming handheld, priced at $499.99. This handheld device is perfect for exploring your Steam library on the go, with PC Game Pass support also easily accessible. If you're a My Best Buy Plus member, you can take an additional $50 off this deal, scoring the ROG Ally for $449.99.

Monster Hunter Stories Collection for $36.99
The recently released Monster Hunter Stories Collection for Nintendo Switch includes both Monster Hunter Stories and Monster Hunter Stories 2: Wings of Ruin. This marks the first time that players can experience the first game with the Japan-exclusive Title Updates, in addition to full voice acting. Jump into the world of Monster Hunter in a new light with this collection!

Metroid Dread for $39.99
Metroid Dread was the grand return of 2D Metroid on Nintendo Switch, with developer MercurySteam teaming up with Nintendo EPD to craft the long-awaited next chapter in Samus Aran's story. Challenging puzzles, fun boss fights, and wide exploration combine to create one of the best games on Nintendo Switch. Don't miss your chance to pick it up at a discount this weekend.

Save on This M2 MacBook Air
As part of Best Buy's early Black Friday sales, you can save $250 on this MacBook Air 13-inch with the Apple M2 chip, 16GB of memory, and a 256GB SSD. This is a very solid option for those looking to either upgrade their current Mac or enter the ecosystem for the first time. This model includes features like Touch ID for login, a display capable of up to 500 nits of brightness, and Apple Intelligence support.

Persona 5 Royal (PC) for $19.99
Persona 5 Royal is by far one of the most beloved games of the last ten years. With a vibrant cast of characters and an impressive narrative, there are well over 100 hours of gameplay here to discover. The turn-based combat system the Persona series is known for feels better than ever, with new mechanics to customize your gameplay the way you like it.