• How Does Stress Impact Listening? For Mice, They Don't Hear as Well
    www.discovermagazine.com
    We process our world differently when we're stressed out, and so, too, do mice. According to a new paper in PLOS Biology, mice perceive sounds in a different way when they've been subjected to repeated stressors, responding to some louder sounds as if they were softer. "We found that repetitive stress alters sound processing," the study authors stated in their paper. "These alterations in auditory processing culminated in perceptual shifts, particularly a reduction in loudness perception."

    Brain Processing Under Chronic Stress

    Though an abundance of research has recognized that chronic stress impairs complex cognition, including processes like learning and memory, far fewer studies have looked into its impact on our senses. "There remains a notable gap," the study authors stated, "in our understanding of its influence on fundamental cortical functions, such as sensory processing." In fact, of the few studies that have tested how chronic stress shapes the perception of stimuli, most have focused on negative stimuli, such as pain and unpleasant smells. "There is little research on how our brains process neutral sounds under chronic stress," said study author Jennifer Resnik, an assistant professor at Ben-Gurion University of the Negev in Israel, in a press release.

    To tease out the impacts of stress on neutral sensory processing, the study authors turned to mice. Confining the mice to a small space for thirty minutes a day over the course of a week, then assessing their responses to sound with a behavioral task, the team found that the mice's perception of loudness was reduced, as seen in their tendency to treat some louder sounds as softer ones. "Our research suggests that repeated stress doesn't just impact complex tasks like learning and memory," Resnik said in the release.
    "It may also alter how we respond to everyday neutral stimuli."

    Quiet for a Mouse

    It isn't a simple task to figure out how loud a sound seems to a mouse. To arrive at their results, the study authors trained mice to identify three types of sounds, low-intensity (40 to 45 dB), mid-intensity (50 to 70 dB), and high-intensity (75 to 80 dB), as soft or loud by licking one of two water spouts in the lab: a loud spout and a soft spout. If the mice correctly identified a low-intensity sound as soft by licking the soft spout, or a high-intensity sound as loud by licking the loud spout, they were rewarded with a taste of sweetened water. They were also rewarded with a taste whether they identified a mid-intensity sound as soft or loud.

    Testing the mice before their week of stress and after, the study authors found that the animals' tendency to identify low-intensity sounds as soft and high-intensity sounds as loud remained the same, though their labeling of mid-intensity sounds changed. Impaired by stress, they were more likely to report mid-intensity sounds as soft than loud, indicating a reduced perception of loudness.

    While the stress didn't alter what the mice were able to hear, as seen in the activity in their auditory brainstems, it did alter their perception of what they heard. Indeed, brain images of the mice showed that their altered perception correlates with increased activity in some sensory cells and decreased activity in others, a unique combination that could connect to the overall softening of their sound perception.

    Additional research could catch similar sound perception changes in other chronically stressed animals. Until then, another press release states that the results reveal that a stressed-out mouse is a less sensitive one, at least in terms of neutral sounds.
    "Our findings provide insight into a possible mechanism by which repetitive stress alters sensory processing and behavior," the study authors concluded in their paper, challenging the idea that stress primarily modulates emotionally charged stimuli.

    Article Sources

    Our writers at Discovermagazine.com use peer-reviewed studies and high-quality sources for our articles, and our editors review for scientific accuracy and editorial standards. Review the sources used for this article:

    Frontiers in Neuroscience. Chronic Unpredictable Mild Stress Alters Odor Hedonics and Adult Olfactory Neurogenesis in Mice

    Sam Walters is a journalist covering archaeology, paleontology, ecology, and evolution for Discover, along with an assortment of other topics. Before joining the Discover team as an assistant editor in 2022, Sam studied journalism at Northwestern University in Evanston, Illinois.
  • A New Fish Species Seems to Wear Red Face Paint, Similar to a Studio Ghibli Character
    www.discovermagazine.com
    After noticing a fish with striking red stripes under its eyes, Chinese scientists knew they'd identified a new species. The fish, a species of tilefish, appears to be wearing red face paint and thus has been named after San, a character from Princess Mononoke, a Studio Ghibli film. With this rare find, researchers are hoping to learn more about this genus and further investigate the species' genetic diversity. The findings were recently published in ZooKeys.

    "Finding a new species in this group is a rare and fortunate event, especially one as distinctive as Branchiostegus sanae," said Haochen Huang, lead author of the new study, in a press release.

    A New Species Discovered

    The facial markings of Branchiostegus sanae resemble the markings on San's face in Princess Mononoke. (Credit: Fish: Branchiostegus sanae, Huang et al., CC-BY 4.0. Illustration: San from Princess Mononoke, 1997 Hayao Miyazaki/Studio Ghibli, ND)

    While searching through online fish markets, members of the research team noticed a fish with strange red marks on its face. The fish in question was a type of tilefish, a group of deep-sea dwellers found in the Atlantic, Pacific, and Indian Oceans. According to the study, this fish was caught in the South China Sea. Having never seen a fish with such markings, the research team, from the South China Sea Institute of Oceanology, Chinese Academy of Sciences, Zhejiang University, and Ocean University of China, performed a genetic analysis to determine that it was indeed a new species in the family Branchiostegidae. The team then named the fish Branchiostegus sanae due to its resemblance to San from Princess Mononoke.

    The Rare Branchiostegus sanae

    Tilefish are an important food source for people. Some live at depths of nearly 2,000 feet and make their homes in silty holes.
    Though they are a common food source, according to the study, they have low genetic diversity. The family Branchiostegidae has only 31 species, 19 of which are in the genus Branchiostegus. Since 1990, science has identified only three species of Branchiostegus. The research team has preserved several B. sanae specimens in marine biological collections for further study.

    The Princess and the Fish

    The term "Mononoke" comes from Japanese folklore, describing supernatural spirits. According to the press release, this term relates to the phrase Chinese anglers use for B. sanae: "Ghost Horsehead Fish."

    "In Princess Mononoke, San is a young woman raised by wolves after being abandoned by her human parents. She sees herself as a part of the forest and fights to protect it," Huang said in a press release. "The film delves into the complex relationship between humans and nature, promoting a message of harmonious coexistence between the two: something we hope to echo through this naming."

    While fishing is vital to the economy and acts as an important food source, fish are also vital to the ocean's ecosystem. Further research and study, especially of this species, could help preserve them for the future.

    A graduate of UW-Whitewater, Monica Cull wrote for several organizations, including one that focused on bees and the natural world, before coming to Discover Magazine. Her current work also appears on her travel blog and Common State Magazine. Her love of science came from watching PBS shows as a kid with her mom and spending too much time binging Doctor Who.
  • We put the Powerbeats Pro 2 earbuds through hard workouts to see if they stay in place
    www.popsci.com
    By Stan Horaczek

    We may earn revenue from the products available on this page and participate in affiliate programs.

    Workout headphones aren't worth the plastic they're made out of if they won't stay in place during a workout. With the Powerbeats Pro 2 earbuds, Beats (owned by Apple) has gone to great lengths to keep them in your ears during even the most vigorous exercise. In addition to the new heart-rate tracking function, these ambitious earbuds also add familiar AirPods features, like Spatial Audio and active noise cancellation. As a result, the Powerbeats Pro 2 are some of the best and most versatile wireless earbuds I've ever used, even outside the confines of the gym.

    Pros
    - Heart-rate monitoring is built in
    - Redesigned ear loop is very comfortable and extremely effective
    - Solid sound quality
    - Integrates easily with Apple devices
    - Lots of eartip options
    - Fit test helps find the best fit

    Cons
    - Pricey
    - Case is large

    How we tested the Beats Powerbeats Pro 2 earbuds

    I have personally been working out for a week with the Powerbeats Pro 2, doing both regular cardio and power lifts. To pressure test them, I also had my advanced-level CrossFit pals do some novel movements while wearing them. To test their viability outside the gym, I wore them for a full day of working remotely while listening to music and participating in video calls. Three key areas we evaluated: fit, sound quality, and heart-rate monitoring.

    Fit

    When the original Powerbeats Pro debuted way back in 2019, the oversized hooks were a welcome but imperfect feature for those of us who couldn't keep regular AirPods in. They were effective but started to chafe and squeeze with extended use. With the Powerbeats Pro 2, the ear hooks have shrunk by half while actually increasing stability, thanks to the new shape and nickel-titanium core. They didn't chafe during extended cardio, and I only felt mild soreness after two hours of wearing them during the workday.
    Ears are sensitive, so no earhook-style earbud will ever be perfect, but wearability isn't an issue here. The package includes a total of five different ear tips, plus an automatic fit test function that will help you choose the best ones for your specific ear holes. I had several people try the test with our review pair, and almost everyone was surprised that they required larger tips than they would have expected. I tried a few sizes just to see if they really make a difference, and I can say with certainty that you want to do the test and get the right tips for the best fit and sound.

    I had no issues with stability at all. I even had some of my very fit friends try them while doing burpees and bar muscle-ups (the scary older sibling of the pull-up, in which you pull the top half of your body up and over the pull-up bar) without issue. They stay where they're supposed to.

    The only slight hiccup comes when you have to put them in. Because the earbud itself has a click button on its outer shell, I found myself pausing and unpausing the music as I first inserted and adjusted the earbuds. Finger placement is key, and I eventually figured out the ideal technique, but putting them in quickly still results in the occasional accidental input.

    Sound quality

    The Powerbeats Pro 2 earbuds have inherited many of their pure sound performance characteristics from the AirPods Pro 2. Thanks to Apple's H2 chipset, the new Powerbeats support Spatial Audio for simulated surround sound, active noise canceling (ANC) for isolation, and dynamic EQ for optimum levels.

    I started the test in transparency mode, which is crucial for workout headphones that you might wear while running or cycling out in the world. It's similar to transparency mode on the AirPods in that it's effective, but it can't replicate the true pass-through effect of something like a bone-conduction or open-back earbud.
    I took them walking on a busy street and felt confident I could hear cars, pedestrians, and whatever else was happening around me.

    With ANC turned on, the Powerbeats Pro 2 earbuds do a good job of blocking out the world. I employed this mode on a 45-minute elliptical trainer session, in which the buds thankfully obscured the sound of the person on the machine next to me playing TikToks at full volume from their phone speaker. Again, performance is similar to the AirPods Pro.

    The dynamic EQ kicks in when you're not using ANC or transparency. You're sick of hearing this, but they're once again similar to the AirPods Pro. I've been listening to a lot of slam metal at the gym lately, and the blast beats and pig squeals of the most recent PeelingFlesh EP sounded crisp and punchy, thanks to the custom vented drivers. Because I contain multitudes, I followed that up with "Please, Please, Please" by Sabrina Carpenter, and the Powerbeats Pro 2 did a solid job making the highs sound sufficiently sparkly without harshing the pleasing tone of the sultry vocals.

    Heart-rate monitoring

    I don't typically wear an Apple Watch at the gym. I can't resist digging into the workout and fitness data, but it ultimately ends up stressing me out and adding a layer of anxiety that I wouldn't otherwise feel. Oddly, the Powerbeats Pro 2 work very well for someone like me.

    Each Powerbeats Pro 2 earbud has an optical heart-rate sensor baked into it. That's an upgrade from other similar products, which typically only have a sensor in one ear. The ear is a great place to monitor heart rate because the skin is so thin and the veins are so close to the surface. Plus, that redesigned hook works overtime to keep the sensor right where it should be.

    The heart-rate sensor plays nicely with seven third-party apps, which makes the setup a little finicky. I used the Nike Run Club app, and once it was up and running, the heart-rate measurements seemed accurate and steady.
    I had a friend check it against his very fancy dedicated chest-strap heart-rate monitor during a light workout, and the two stayed within three to four BPM of each other for the duration of the activity.

    If you wear an Apple Watch and the earbuds at the same time, the Apple Watch data will override the readings from the earbuds, but hopefully, down the line, users will get the option to choose between the two. For now, though, this is great for people like me who don't always wear the watch.

    Beats Fit Pro vs. Powerbeats Pro 2

    While the Powerbeats Pro 2 earbuds have clearly taken over the title of best workout headphones in my eyes, this is also a good time to check out the venerable Beats Fit Pro. They don't have the outer ear hook, so they're not as good if you're doing truly bombastic or explosive movements. And they don't offer heart-rate tracking. But you do get a great fit with their silicone wingtips, comparable battery life, active noise canceling, Spatial Audio, transparency mode, and a rugged build in an earbud that's often just $159 on sale.

    The verdict

    If you're not worried about budget, or you regularly do very strenuous and explosive exercise, then the Powerbeats Pro 2 earbuds are absolutely worth the $250 asking price. They check all the boxes when it comes to sound, will never come out of your ears, and provide built-in heart-rate monitoring when no other Apple-native earbud does.
  • Bizarre Dog-Headed Creatures Rigged With Houdini
    cgshares.com
    At first glance, these may appear to be ordinary goldfish, but up close, you'll realize they're far from fish and resemble more of a bizarre human-like capybara in some way. For this project, Yota Tanabe was inspired by a dragon rig, aiming to replicate its movement while working with a range of characters, and he successfully achieved this with 50 fish in real time. The setup now follows the animated points properly and has been optimized to be even lighter. It's kind of gross but also amusing.

    Check out some of Yota's other recent Houdini experiments and follow him on X/Twitter for more.
  • EXCLUSIVE: Unity CEO's Internal Announcement to Staff Amidst the Layoffs
    cgshares.com
    In case you missed it, earlier today, 80 Level reported on the numerous layoffs that recently took place at Unity Technologies, affecting entire departments and described as "massive" by several impacted employees. Over the past few hours, I have contacted additional developers to clarify several points from the original article and have even obtained the full text of the email sent by Unity CEO Matthew Bromberg to the staff.

    Firstly, regarding the clickbait-sounding detail about the internal memo being sent to employees at 5 AM PST: 80 Level can now confirm that the job cuts were announced simultaneously across the company. Based on accounts from several laid-off Unity workers in different time zones, the timing checks out, and indeed, many West Coast-based employees had their breakfast ruined by the early morning announcement, followed shortly after by a generic termination letter from HR.

    Secondly, addressing the number of affected employees: unfortunately, we were unable to determine the exact figure, as even those impacted don't know it yet. According to Bromberg's email, all notifications are expected to be sent by the end of February 12, meaning only Unity executives know the exact number and, as you've probably guessed, they aren't telling.

    Lastly, the reasons behind the layoffs. Along with the company's strategy going forward, they were detailed in Matthew Bromberg's internal statement, which reads as follows:

    Folks,

    We are making some important organizational changes today within the CTO, Engine Product, and Ads teams. These changes are a response to choices we're making about what direction Unity will take in the future, and some of our colleagues' jobs will be impacted. What follows provides some detail on the rationale behind the decisions we've made and how those decisions will be implemented.
    I know that there is some exhaustion associated with prior changes at Unity that haven't delivered the promised results, but 2025 is going to be the year where we bring to market products and services that will transform our position in the marketplace and provide a springboard to long-term growth.

    The Engine

    Our product and engineering teams are currently stretched across too many products, creating complexity and limiting impact. Historically, we've engaged in extended debates about what our focus would be, which would prevent crisp decision making and limit release velocity. We also added people and created operating structures that were meant to speed us up, only to find they were slowing us down. Under the leadership of Steve Collins, Shanti Gaudreault, Andie Nordgren, and Adam Smith, we are changing this approach. Some principles we'll be following:

    - Optimize around fidelity for ubiquity: While we'll always try to enable the best quality graphics we can, our primary directive is to help customers reach the widest possible audience across platforms and devices.
    - Improve the customer experience today: While we won't sacrifice innovation, we need a better balance between looking ahead and shipping higher quality, better performing, more stable software. We are going to invest in stability by tackling critical technical debt, making it easier for customers to build and run games while reducing risks tied to outdated technologies. To innovate, we must first strengthen our existing foundation.
    - Platform extensibility: Our platform's extensibility is its greatest strength. We'll double down on this by allowing customers and partners to build on our core capabilities with strong support.
    - Invest in Industry, Live Services, and AI.
    - Data is our future: Our engine customers need better insight into player behavior and Runtime stability, and our advertising customers need better ROI to grow their games.
    The Runtime must enable both.

    As part of this new approach, we are also bringing key technical teams together to ensure all product decisions directly support our new principles. Pierre-Paul Giroux's AI group and Amar Mehta's Central Technology Services team are joining the CTO organization, with both Pierre-Paul and Amar reporting directly to Steve.

    Advertising Products, Engineering, and Revenue

    Two years on from the merger with ironSource, it is time to bring our go-to-market teams, technology, and product offering together, integrating them directly into the Unity ecosystem so that our customers can gain a competitive edge in the market. In 2025, in conjunction with completing the rebuild of our machine learning stack, we'll integrate Unity Ads, Unity LevelPlay, and the Tapjoy offerwall into the Runtime so that they are on the same cloud and data platform and share a single data set. Our Ads revenue teams will then require some modification to align fully with our product and engineering teams, and we'll be able to streamline our data science and ad serving teams as well.

    We are splitting the revenue organization into two global teams, Supply and Demand, which will be led from EMEA and the U.S., respectively. This will allow the Demand leader in the U.S. to be closer to the PE teams working on the machine learning and data initiatives that will have the greatest impact on our advertising customers. The Supply team will align more closely with the relevant PE teams in Tel Aviv and EMEA for smoother coordination, and will own supply sales, LevelPlay and Offerwall integrations, and tech support.

    The product and engineering teams for the ironSource ad network will remain a cohesive, standalone team that can move fast and adjust to customer needs with no investment in tech migrations.
    This will create two distinct paths for each network to thrive, and ensure we can maintain growth in our current business while evolving as quickly as we can to meet the challenges in the marketplace.

    As part of this change, we also want to consolidate the Ads leadership in the U.S., and therefore in a few months, after completing the transition and ensuring we're set up for success, Nadav Ashkenazy will hand over the CRO responsibility to a new leader in North America. Nadav wears many hats at Unity: leader of the Ads revenue org, GM of Supersonic, and site leader of Tel Aviv. I want to extend my deep gratitude to him for his leadership, dedication, and the amazing job he's done leading our Tel Aviv office. I'm very grateful for his partnership.

    That's the gist of what we are doing and why. People whose roles are being eliminated, or those entering an employment consultation period, will be notified over the course of the next couple of days, with instructions on next steps. We expect all notifications to be completed by EOD on Feb 12.

    I want to thank each impacted colleague for their contributions to Unity. We'll do everything we can to handle these difficult changes with a lot of care and consideration, and to support impacted employees through this transition. Please remember to take care of yourselves as well. Confidential support through Lyra is available if you need it, and we'll extend access to mental health benefits to those who are leaving.

    If you have questions or concerns, don't hesitate to reach out to your manager, an executive leader, or #ask-hr. More details about the changes and updated org charts will be added to this intranet page.

    Starting later this week, I'll be sharing more about our 2025 strategy in a series of Town Halls in Montreal, Tel Aviv, Copenhagen, Seoul, Tokyo, and San Francisco, where I'll also be able to answer your questions about how these changes support that strategy. The first Town Hall will be global, and I will host it in Montreal tomorrow.
    I look forward to seeing many of you both in person and virtually then.

    Matt

    Read the original report here.
  • Almost 30 Years Later, One Of The Longest-Running MMORPGs Is Getting A New Class
    www.gamespot.com
    One of the first and longest-running MMORPGs ever made, Tibia, will receive a new class for the first time in its almost 30-year history, developer CipSoft has announced. The Monk class will join Tibia's four original classes (Sorcerer, Paladin, Druid, and Knight), which launched alongside the top-down 2D MMO in 1997.

    As revealed in a blog post and teaser video, the Monk will be a melee fighter that can also fill a support role by healing allies. CipSoft said the Monk can be "played in a very special way" but wanted to make sure it fit seamlessly into the world of Tibia and existed in harmony with the game's four existing classes. The team also knew it wanted to introduce a new melee class that could also heal, but didn't want it to encroach on secondary roles, like debuffing, filled by Paladins and Sorcerers.

    "In fact, one of our main goals was to ensure that the new vocation holds a unique position without undermining the value of others or rendering them obsolete," CipSoft said. "We are very happy about the arrival of the Monk and hope you are just as eager to experience what it is like when they spring into action."

    Continue Reading at GameSpot
  • CoD: Black Ops 6 And Warzone Season 2 Reloaded Start Date And Details
    www.gamespot.com
    The Season 2 Reloaded update for Call of Duty arrives in Black Ops 6 and Warzone later this month, and here you'll find everything rumored and announced so far. This includes new melee weapons, more maps, and limited-time events. There's even a rumor of a potential TMNT crossover this season.

    Call of Duty Season 2 Reloaded start times

    Based on the days left for the Season 2 battle pass, the midseason "Reloaded" update should arrive around Thursday, February 20. These seasonal updates usually go live around 9 AM PT / 12 PM ET / 5 PM GMT across all platforms.

    Continue Reading at GameSpot
  • How To Eliminate All Bouncing Bomb Intelligence in Sniper Elite Resistance Mission 4
    gamerant.com
    Mission 4: Collision Course of Sniper Elite Resistance takes players back to the extended version of the map from the first mission. One of the objectives of mission 4 is to eliminate all Bouncing Bomb intel the Nazis have stolen from the Allied Forces. Players must infiltrate a Nazi compound and eliminate the scientists tasked with studying the bomb and replicating it. Here's how you can complete this objective.
  • Solo Leveling: Monarch's Domain, Explained
    gamerant.com
    During Sung Jinwoo's battle against Kargalgan and his High Orc forces in Solo Leveling Season 2 -Arise from the Shadow-, episode 6, "Don't Look Down On My Guys," the necromancer unleashed his Shadow Army and made use of a new Skill he gained during his last visit to the Demon Castle, one that directly affects the combat strength of his Shadows. The Skill is unique to the Shadow Monarch and enables him to significantly buff his Shadows.
  • What Are Foundation Models?
    blogs.nvidia.com
    Editor's note: This article, originally published on March 13, 2023, has been updated.

    The mics were live and tape was rolling in the studio where the Miles Davis Quintet was recording dozens of tunes in 1956 for Prestige Records. When an engineer asked for the next song's title, Davis shot back, "I'll play it, and tell you what it is later."

    Like the prolific jazz trumpeter and composer, researchers have been generating AI models at a feverish pace, exploring new architectures and use cases. According to the 2024 AI Index report from the Stanford Institute for Human-Centered Artificial Intelligence, 149 foundation models were published in 2023, more than double the number released in 2022.

    In a 2021 paper, researchers reported that foundation models are finding a wide array of uses. They said transformer models, large language models (LLMs), vision language models (VLMs) and other neural networks still being built are part of an important new category they dubbed foundation models.

    Foundation Models Defined

    A foundation model is an AI neural network trained on mountains of raw data, generally with unsupervised learning, that can be adapted to accomplish a broad range of tasks. Two important concepts help define this umbrella category: data gathering is easier, and opportunities are as wide as the horizon.

    No Labels, Lots of Opportunity

    Foundation models generally learn from unlabeled datasets, saving the time and expense of manually describing each item in massive collections. Earlier neural networks were narrowly tuned for specific tasks.
    With a little fine-tuning, foundation models can handle jobs from translating text to analyzing medical images to performing agent-based behaviors.

    "I think we've uncovered a very small fraction of the capabilities of existing foundation models, let alone future ones," said Percy Liang, the center's director, in the opening talk of the first workshop on foundation models.

    AI's Emergence and Homogenization

    In that talk, Liang coined two terms to describe foundation models. "Emergence" refers to AI features still being discovered, such as the many nascent skills in foundation models. He calls the blending of AI algorithms and model architectures "homogenization," a trend that helped form foundation models. (See chart below.)

    The field continues to move fast. A year after the group defined foundation models, other tech watchers coined a related term: generative AI. It's an umbrella term for transformers, large language models, diffusion models and other neural networks capturing people's imaginations because they can create text, images, music, software, videos and more.

    Generative AI has the potential to yield trillions of dollars of economic value, said executives from the venture firm Sequoia Capital who shared their views in a recent AI Podcast.

    A Brief History of Foundation Models

    "We are in a time where simple methods like neural networks are giving us an explosion of new capabilities," said Ashish Vaswani, an entrepreneur and former senior staff research scientist at Google Brain who led work on the seminal 2017 paper on transformers.

    That work inspired researchers who created BERT and other large language models, making 2018 "a watershed moment" for natural language processing, a report on AI said at the end of that year. Google released BERT as open-source software, spawning a family of follow-ons and setting off a race to build ever larger, more powerful LLMs.
    Then it applied the technology to its search engine so users could ask questions in simple sentences.

    In 2020, researchers at OpenAI announced another landmark transformer, GPT-3. Within weeks, people were using it to create poems, programs, songs, websites and more. "Language models have a wide range of beneficial applications for society," the researchers wrote.

    Their work also showed how large and compute-intensive these models can be. GPT-3 was trained on a dataset with nearly a trillion words, and it sports a whopping 175 billion parameters, a key measure of the power and complexity of neural networks. In 2024, Google released Gemini Ultra, a state-of-the-art foundation model that required 50 billion petaflops of compute to train.

    This chart highlights the exponential growth in training compute requirements for notable machine learning models since 2012. (Source: Artificial Intelligence Index Report 2024)

    "I just remember being kind of blown away by the things that it could do," said Liang, speaking of GPT-3 in a podcast.

    The latest iteration, ChatGPT, trained on 10,000 NVIDIA GPUs, is even more engaging, attracting over 100 million users in just two months. Its release has been called the iPhone moment for AI because it helped so many people see how they could use the technology.

    One timeline describes the path from early AI research to ChatGPT. (Source: blog.bytebytego.com)

    Going Multimodal

    Foundation models have also expanded to process and generate multiple data types, or modalities, such as text, images, audio and video. VLMs are one type of multimodal model that can understand video, image and text inputs while producing text or visual output. Trained on 355,000 videos and 2.8 million images, Cosmos Nemotron 34B is a leading VLM that enables the ability to query and summarize images and video from the physical or virtual world.

    From Text to Images

    About the same time ChatGPT debuted, another class of neural networks, called diffusion models, made a splash.
Their ability to turn text descriptions into artistic images attracted casual users to create amazing images that went viral on social media.

The first paper to describe a diffusion model arrived with little fanfare in 2015. But like transformers, the new technique soon caught fire.

In a tweet, Midjourney CEO David Holz revealed that his diffusion-based, text-to-image service has more than 4.4 million users. Serving them requires more than 10,000 NVIDIA GPUs, mainly for AI inference, he said in an interview (subscription required).

Toward Models That Understand the Physical World

The next frontier of artificial intelligence is physical AI, which enables autonomous machines like robots and self-driving cars to interact with the real world.

AI performance for autonomous vehicles or robots requires extensive training and testing. To ensure physical AI systems are safe, developers need to train and test their systems on massive amounts of data, which can be costly and time-consuming.

World foundation models, which can simulate real-world environments and predict accurate outcomes based on text, image or video input, offer a promising solution.

Physical AI development teams are using NVIDIA Cosmos world foundation models, a suite of pre-trained autoregressive and diffusion models trained on 20 million hours of driving and robotics data, with the NVIDIA Omniverse platform to generate massive amounts of controllable, physics-based synthetic data for physical AI. Winners of the Best AI and Best Overall awards at CES 2025, Cosmos world foundation models are open models that can be customized for downstream use cases or tuned for precision on a specific task with use case-specific data.

Dozens of Models in Use

Hundreds of foundation models are now available.
One paper catalogs and classifies more than 50 major transformer models alone (see chart below).

The Stanford group benchmarked 30 foundation models, noting the field is moving so fast they did not review some new and prominent ones.

Startup NLP Cloud, a member of the NVIDIA Inception program that nurtures cutting-edge startups, says it uses about 25 large language models in a commercial offering that serves airlines, pharmacies and other users. Experts expect that a growing share of the models will be made open source on sites like Hugging Face's model hub.

Experts note a rising trend toward releasing foundation models as open source.

Foundation models keep getting larger and more complex, too.

That's why, rather than building new models from scratch, many businesses are already customizing pretrained foundation models to turbocharge their journeys into AI, using online services like NVIDIA AI Foundation Models.

The accuracy and reliability of generative AI is increasing thanks to techniques like retrieval-augmented generation, aka RAG, which lets foundation models tap into external resources such as a corporate knowledge base.

AI Foundations for Business

Another new framework, NVIDIA NeMo, aims to let any business create its own billion- or trillion-parameter transformers to power custom chatbots, personal assistants and other AI applications.

It created the 530-billion-parameter Megatron-Turing Natural Language Generation model (MT-NLG) that powers TJ, the Toy Jensen avatar that gave part of the keynote at NVIDIA GTC last year.

Foundation models connected to 3D platforms like NVIDIA Omniverse will be key to simplifying development of the metaverse, the 3D evolution of the internet.
These models will power applications and assets for entertainment and industrial users.

Factories and warehouses are already applying foundation models inside digital twins, realistic simulations that help find more efficient ways to work.

Foundation models can ease the job of training autonomous vehicles and robots that assist humans on factory floors and in logistics centers. They also help train autonomous vehicles by creating realistic environments like the one below.

New uses for foundation models are emerging daily, as are challenges in applying them.

Several papers on foundation and generative AI models describe risks such as:

amplifying bias implicit in the massive datasets used to train models,

introducing inaccurate or misleading information in images or videos, and

violating intellectual property rights of existing works.

"Given that future AI systems will likely rely heavily on foundation models, it is imperative that we, as a community, come together to develop more rigorous principles for foundation models and guidance for their responsible development and deployment," said the Stanford paper on foundation models.

Current ideas for safeguards include filtering prompts and their outputs, recalibrating models on the fly and scrubbing massive datasets.

"These are issues we're working on as a research community," said Bryan Catanzaro, vice president of applied deep learning research at NVIDIA. "For these models to be truly widely deployed, we have to invest a lot in safety."

It's one more field AI researchers and developers are plowing as they create the future.
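The prompt- and output-filtering safeguard mentioned above can be sketched in miniature: screen what goes into the model and what comes out of it, and refuse anything that trips a policy check. Everything here is a toy stand-in invented for illustration (the blocklist, the `echo_model` stub, and the keyword check, which real moderation systems replace with trained classifiers).

```python
BLOCKED_TERMS = {"ssn", "password"}  # hypothetical policy list

def is_allowed(text):
    """Crude keyword screen: reject text containing any blocked term."""
    words = set(text.lower().split())
    return not (words & BLOCKED_TERMS)

def guarded_generate(prompt, model):
    # Screen the prompt, call the model, then screen its output too.
    if not is_allowed(prompt):
        return "[prompt rejected]"
    output = model(prompt)
    if not is_allowed(output):
        return "[output withheld]"
    return output

# A stub "model" used only to exercise the filtering pipeline.
echo_model = lambda p: "echo: " + p

print(guarded_generate("tell me a story", echo_model))      # passes both checks
print(guarded_generate("what is my password", echo_model))  # blocked at the prompt
```

Filtering on both sides matters because a benign prompt can still elicit a disallowed completion; the output check is the last line of defense.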