• Smart glasses appeal comes into focus at CES 2025
    www.computerworld.com
    Smart glasses attracted a lot of attention at last week's Consumer Electronics Show, with a range of devices on display that combine lightweight frames with functionality such as heads-up displays and AI-powered assistants. These contrast with the mixed-reality headsets that created a buzz early in 2024, including Meta's Quest 3 and Apple's Vision Pro, both of which are much heavier devices designed for shorter periods of use.

Apple's Vision Pro headset captured a lot of attention in 2024, but lighter-weight smart glasses were the rage at CES 2025. JLStock / Shutterstock

"This year, the focus definitely seemed to be more on smart glasses than on headsets, in part because the Ray-Ban Meta smart glasses were a huge hit last year," said Avi Greengart, president and lead analyst at Techsponential. Smart glasses require purposeful compromise when it comes to balancing functionality with a lightweight form factor, and different vendors are making different decisions to achieve this, said Greengart.

Halliday's smart glasses, for example, project text and images directly into the wearer's field of view. This is perceived as a 3.5-in. screen that appears in the upper-right corner of the user's view and remains visible even in bright sunlight, Halliday claims. A proactive AI assistant, which requires a Bluetooth connection to a smartphone, enables features such as real-time translation in up to 40 languages, live navigation for directions, and teleprompter-style display of notes.

Halliday's smart glasses come in three different colors. Halliday

At 1.2 ounces, they're even lighter than Meta's glasses (which at 1.7 ounces are only marginally heavier than regular Ray-Bans). Halliday's smart glasses are available for preorder for $489, with shipping expected to begin at the end of the first quarter of this year.

Even Realities also offers a minimalist take with its G1 smart glasses, which start at $599.
These include a micro-LED projector that beams a heads-up display onto each lens, while an AI assistant enables live translation and navigation when paired with a smartphone.

Another vendor in the space, Rokid, recently announced its Glasses, a lightweight (1.7 ounces) device aimed at continuous use through the day. In addition to a simple green text display and an intelligent assistant, Rokid's device also packs a 12-megapixel camera for image and video capture into the frames.

Nuance Audio, owned by Meta's Ray-Ban partner EssilorLuxottica, has an even more focused product: glasses that integrate a hearing aid into the frames. "When you need a bit more help hearing someone, you turn them on and the glasses amplify the sound of the person you are looking at and direct it to speakers on the glasses' stems that are aimed at your ears," said Greengart.

Meta is rumored to have an updated version of its Ray-Ban devices slated for release later this year. These will reportedly feature a simple display to show notifications and responses from Meta's AI assistant. Meta has sold more than a million Ray-Ban smart glasses to date, according to Counterpoint Research stats.

"Most of these glasses are ones that I wouldn't mind wearing out in public," said Ramon Llamas, research director with IDC's devices and displays team. "We're finally seeing designs that look and feel less bulky, and we're getting into a bunch of styles instead of the usual wayfarer design."

Other glasses, such as Xreal's One Pro and TCL's RayNeo X2 (marketed as augmented reality rather than smart glasses), are heftier and act as a portable display, with the ability to watch videos and access apps when tethered to a laptop or smartphone.

Although demand for smart glasses is still in its infancy, shipments are expected to see a compound annual growth rate of 85.7% through 2028, according to recent IDC stats.
These extended reality devices will soon be the second-largest category within the broader AR/VR market, IDC predicts, with several million devices sold each year. Mixed-reality headsets such as Apple's Vision Pro and Meta's Quest products will continue to account for the largest share of the AR/VR market, according to IDC, with extended reality smart glasses in second place. IDC

Though many of the devices shown at CES are largely aimed at consumers, some smart glasses are also being tailored to enterprise customers (Vuzix being an example). As the technology matures, Llamas sees a growing range of business use cases for smart glasses: capturing visual information hands-free, for instance, or live translation, which could also be useful for business travelers.

"This is where having access to business apps can help, especially if you can speak into those apps to execute a task and the smart glasses can handle that," said Llamas. "I think we're still a ways off from that actually taking place, so for now, expect smart glasses to be mostly within the realm of consumers, specifically tech enthusiasts and cognoscenti."
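To make IDC's 85.7% figure concrete: a compound annual growth rate multiplies shipments by the same factor every year. The short sketch below shows the arithmetic; the 1.0-million-unit base volume is a made-up placeholder for illustration, not an IDC number.

```python
def project_shipments(base_units: float, cagr: float, years: int) -> float:
    """Project unit shipments forward by compounding an annual growth rate."""
    return base_units * (1 + cagr) ** years

# Hypothetical base of 1.0 million units, growing at IDC's 85.7% CAGR.
for year in range(1, 5):
    units = project_shipments(1.0, 0.857, year)
    print(f"year {year}: {units:.2f}M units")
```

At that rate, annual shipments would multiply almost twelvefold over four years, which is why IDC expects the category to climb to second place in the AR/VR market so quickly.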
  • Training robots in the AI-powered industrial metaverse
    www.technologyreview.com
    Imagine the bustling floors of tomorrow's manufacturing plant: Robots, well-versed in multiple disciplines through adaptive AI education, work seamlessly and safely alongside human counterparts. These robots can transition effortlessly between tasks, from assembling intricate electronic components to handling complex machinery assembly. Each robot's unique education enables it to predict maintenance needs, optimize energy consumption, and innovate processes on the fly, dictated by real-time data analyses and learned experiences in their digital worlds.

Training for robots like this will happen in a virtual school, a meticulously simulated environment within the industrial metaverse. Here, robots learn complex skills on accelerated timeframes, acquiring in hours what might take humans months or even years.

Beyond traditional programming

Training for industrial robots was once like a traditional school: rigid, predictable, and limited to practicing the same tasks over and over. But now we're at the threshold of the next era. Robots can learn in virtual classrooms: immersive environments in the industrial metaverse that use simulation, digital twins, and AI to mimic real-world conditions in detail. This digital world can provide an almost limitless training ground that mirrors real factories, warehouses, and production lines, allowing robots to practice tasks, encounter challenges, and develop problem-solving skills.

What once took days or even weeks of real-world programming, with engineers painstakingly adjusting commands to get the robot to perform one simple task, can now be learned in hours in virtual spaces.
This approach, known as simulation to reality (Sim2Real), blends virtual training with real-world application, bridging the gap between simulated learning and actual performance. Although the industrial metaverse is still in its early stages, its potential to reshape robotic training is clear, and these new ways of upskilling robots can enable unprecedented flexibility.

Italian automation provider EPF found that AI shifted the company's entire approach to developing robots. "We changed our development strategy from designing entire solutions from scratch to developing modular, flexible components that could be combined to create complete solutions, allowing for greater coherence and adaptability across different sectors," says EPF's chairman and CEO Franco Filippi.

Learning by doing

AI models gain power when trained on vast amounts of data, such as large sets of labeled examples, learning categories or classes by trial and error. In robotics, however, this approach would require hundreds of hours of robot time and human oversight to train a single task. Even the simplest of instructions, like "grab a bottle," could result in many varied outcomes depending on the bottle's shape, color, and environment. Training then becomes a monotonous loop that yields little significant progress for the time invested.

Building AI models that can generalize and then successfully complete a task regardless of the environment is key for advancing robotics. Researchers from New York University, Meta, and Hello Robot have introduced robot utility models that achieve a 90% success rate in performing basic tasks across unfamiliar environments without additional training. Large language models are used in combination with computer vision to provide continuous feedback to the robot on whether it has successfully completed the task.
This feedback loop accelerates the learning process by combining multiple AI techniques and avoids repetitive training cycles. Robotics companies are now implementing advanced perception systems capable of training and generalizing across tasks and domains. For example, EPF worked with Siemens to integrate visual AI and object recognition into its robotics to create solutions that can adapt to varying product geometries and environmental conditions without mechanical reconfiguration.

Learning by imagining

Scarcity of training data is a constraint for AI, especially in robotics. However, innovations that use digital twins and synthetic data to train robots have significantly advanced on previously costly approaches.

For example, Siemens' SIMATIC Robot Pick AI expands on this vision of adaptability, transforming standard industrial robots, once limited to rigid, repetitive tasks, into complex machines. Trained on synthetic data (virtual simulations of shapes, materials, and environments), the AI prepares robots to handle unpredictable tasks, like picking unknown items from chaotic bins, with over 98% accuracy. When mistakes happen, the system learns, improving through real-world feedback. Crucially, this isn't just a one-robot fix. Software updates scale across entire fleets, upgrading robots to work more flexibly and meet the rising demand for adaptive production.

Another example is the robotics firm ANYbotics, which generates 3D models of industrial environments that function as digital twins of real environments. Operational data, such as temperature, pressure, and flow rates, are integrated to create virtual replicas of physical facilities where robots can train. An energy plant, for example, can use its site plans to generate simulations of inspection tasks it needs robots to perform in its facilities.
This speeds the robots' training and deployment, allowing them to perform successfully with minimal on-site setup. Simulation also allows for the near-costless multiplication of robots for training. "In simulation, we can create thousands of virtual robots to practice tasks and optimize their behavior. This allows us to accelerate training time and share knowledge between robots," says Péter Fankhauser, CEO and co-founder of ANYbotics.

Because robots need to understand their environment regardless of orientation or lighting, ANYbotics and partner Digica created a method of generating thousands of synthetic images for robot training. By removing the painstaking work of collecting huge numbers of real images from the shop floor, the time needed to teach robots what they need to know is drastically reduced.

Similarly, Siemens leverages synthetic data to generate simulated environments to train and validate AI models digitally before deployment into physical products. "By using synthetic data, we create variations in object orientation, lighting, and other factors to ensure the AI adapts well across different conditions," says Vincenzo De Paola, project lead at Siemens. "We simulate everything from how the pieces are oriented to lighting conditions and shadows. This allows the model to train under diverse scenarios, improving its ability to adapt and respond accurately in the real world."

Digital twins and synthetic data have proven powerful antidotes to data scarcity and costly robot training. Robots that train in artificial environments can be prepared quickly and inexpensively for wide varieties of visual possibilities and scenarios they may encounter in the real world. "We validate our models in this simulated environment before deploying them physically," says De Paola. "This approach allows us to identify any potential issues early and refine the model with minimal cost and time."

This technology's impact can extend beyond initial robot training.
If the robot's real-world performance data is used to update its digital twin and analyze potential optimizations, it can create a dynamic cycle of improvement, systematically enhancing the robot's learning, capabilities, and performance over time.

The well-educated robot at work

With AI and simulation powering a new era in robot training, organizations will reap the benefits. Digital twins allow companies to deploy advanced robotics with dramatically reduced setup times, and the enhanced adaptability of AI-powered vision systems makes it easier for companies to alter product lines in response to changing market demands.

The new ways of schooling robots are transforming investment in the field by also reducing risk. "It's a game-changer," says De Paola. "Our clients can now offer AI-powered robotics solutions as services, backed by data and validated models. This gives them confidence when presenting their solutions to customers, knowing that the AI has been tested extensively in simulated environments before going live."

Filippi envisions this flexibility enabling today's robots to make tomorrow's products. "The need in one or two years' time will be for processing new products that are not known today. With digital twins and this new data environment, it is possible to design today a machine for products that are not known yet," says Filippi.

Fankhauser takes this idea a step further. "I expect our robots to become so intelligent that they can independently generate their own missions based on the knowledge accumulated from digital twins," he says. "Today, a human still guides the robot initially, but in the future, they'll have the autonomy to identify tasks themselves."

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review's editorial staff.
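The domain-randomization idea described above (varying object orientation, lighting, and shadows across synthetic scenes so a vision model cannot overfit to any single condition) can be sketched in a few lines. The parameter names and ranges here are illustrative assumptions, not the actual Siemens or ANYbotics pipeline.

```python
import random

def make_synthetic_scene(rng: random.Random) -> dict:
    """Sample one randomized training scene: pose, lighting, and shadow all
    vary, so a model trained on many samples generalizes across conditions."""
    return {
        "object_yaw_deg": rng.uniform(0.0, 360.0),    # object orientation
        "light_intensity": rng.uniform(0.2, 1.0),     # dim to bright
        "shadow_angle_deg": rng.uniform(0.0, 180.0),  # shadow direction
        "camera_height_m": rng.uniform(0.5, 2.0),     # viewpoint variation
    }

def make_dataset(n: int, seed: int = 42) -> list[dict]:
    """Seeded, so the same synthetic dataset can be regenerated exactly."""
    rng = random.Random(seed)
    return [make_synthetic_scene(rng) for _ in range(n)]

scenes = make_dataset(10_000)
```

Each dict would drive a renderer that emits one labeled synthetic image; training over thousands of such variations is what lets a model adapt and respond accurately in the real world without ever collecting shop-floor photos.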
  • Apple joins AI hardware standards consortium to improve server performance
    appleinsider.com
    Apple has joined the board of directors for the Ultra Accelerator Link Consortium, giving it more of a say in how the architecture for AI server infrastructure will evolve.

The Ultra Accelerator Link Consortium (UALink) is an open industry standards group that develops the UALink specifications. As a potential key element in the development of artificial intelligence models and accelerators, the standards could be massively beneficial to the future of AI itself.

On Tuesday, it was announced that three more members have been elected to the consortium's board. Apple was one of the trio, alongside Alibaba and Synopsys.
  • FEMA: America's buildings are woefully underprepared for natural disasters
    archinect.com
    A code adoption tracking resource produced by the Federal Emergency Management Agency (FEMA) that shows the status of different states' compliance with hazard-resistant zoning measures is especially relevant given the recent spate of catastrophic weather events affecting Los Angeles and other American cities.

The BCAT portal includes data through the end of Q4 2024. Overall, just one-third (33%) of all "natural hazard-prone jurisdictions" have successfully adopted the most current hazard-resistant building codes. This includes protections against damaging wind loads, hurricanes, floods, seismic activity, and tornadoes, and can be taken as a snapshot of the overall readiness of buildings in the U.S. to protect against other kinds of natural disasters. We have seen clearly in the past six months the efficacy of these codes in protecting structures (or not) against forces such as hurricanes and other extreme weather events.
  • Exodus Will Have Long-Term Narrative Consequences Depending on Players' Relationships
    gamingbolt.com
    Developer Archetype Entertainment recently held a Q&A to provide an update on its upcoming sci-fi action RPG Exodus. In the Q&A, creative director James Ohlen and executive producer Chad Robertson revealed some new details about the upcoming title.

The relationships players form with their teammates in Exodus will be a big factor in the game, with the studio revealing that setting out on missions will affect relationships with characters you don't meet for a while. This is because missions in Exodus could take several years, or even decades, to complete.

"You go off on your Exodus Journeys and you leave behind your city and some of your friends and maybe even family members, and you make choices about them," said Ohlen.

An example provided by Ohlen indicated that players could end up not meeting some characters for several decades, and the choices they make will affect characters that are now several years older than when the player previously met them.

"Not everyone has to come on an Exodus with you," he explained. "You might leave some behind and when you come back it could be a decade later, it could be four decades later, and those choices will have impacted your relationships with people that are now a decade older or three or four decades older."

This narrative decision stems from the fact that the story of Exodus revolves around humanity looking for a new home in the stars, and journeys at that massive scale will take several years. There is also some level of time dilation happening in the game's story, likely owing to interstellar travel at incredible speeds.

Robertson also revealed that the primary antagonists in the game, referred to as the Mara Yama, will be creepy. This stems from the studio wanting the game's primary enemies to be an evil celestial civilization that could also manage to be creepy. The Mara Yama were revealed in a trailer from October, which showcased just how strange and creepy they can be.
The civilization doesn't quite have a planet it calls home, and is instead more nomadic in nature, travelling across the stars in its own citadels.

The studio also touched on various smaller aspects of the game, including how time dilation will affect players, details about the player character (known in the game as the Traveler), and even which characters the developers would like to be if they lived in the universe of Exodus.

Originally unveiled back in 2023, Exodus will be a third-person action RPG that mixes fast-paced action with quite a bit of exploration. The game's narrative is being handled by storied sci-fi writer Drew Karpyshyn, who revealed in an interview last year that there will be plenty of long-term choices and consequences for players to experience.

The most recent trailer, released back in December, gave us a closer look at the game's third-person shooter combat, showing a firefight against a host of different enemies while also offering a glimpse of some of the various abilities players will have access to in the game.
  • Free tool: Hair Cinematic Tool for Unreal Engine 5
    www.cgchannel.com
    Argentum Studio's free Hair Cinematic Tool makes it easy to access Unreal Engine 5's internal settings controlling lighting and shadows for hair, to improve the look of rendered animation.

Animation firm Argentum Studio has released its in-house Hair Cinematic Tool for Unreal Engine for free. The add-on makes it easier to access hidden parameters for rendering hair grooms, helping to create higher-quality renders for cinematics, animations, and VFX.

A dedicated UI through which to adjust internal UE5 settings for rendering hair and fur

Argentum Studio describes the Hair Cinematic Tool as designed to give creators full control over Groom rendering settings, addressing the platform's lack of detailed native options. It provides a graphical interface through which to adjust Unreal Engine's internal CVars (console variables) for hair, and for voxelization shadows and deep shadows. By adjusting their values, users can fine-tune lighting and shadows for hair and fur rendered using Unreal Engine 5. You can find Argentum Studio's run-down of what the CVars control, and its suggested values for key settings, in its blog post on ArtStation.

System requirements

Argentum Studio's Hair Cinematic Tool is compatible with Unreal Engine 5.1+. The add-on is available free under ArtStation Marketplace's Standard license, which permits use in commercial projects.

Download Argentum Studio's free Hair Cinematic Tool from ArtStation Marketplace
Find details of how to use the Hair Cinematic Tool
  • Bats Hitch a Ride on Storm Fronts When Migrating, Saving Energy by 'Surfing' Through the Sky, Study Finds
    www.smithsonianmag.com
    Researchers tracked 71 common noctule bats (Nyctalus noctula) to parse their migration patterns. Kamran Safi / Max Planck Institute of Animal Behavior

More than 1,400 species of bats exist worldwide, making them some of the most widespread creatures on Earth; they can be found on every continent except Antarctica. Chances are, there's one not too far from you right now. But despite the animals' prevalence, their migration patterns remain largely a mystery. Their speed, small size, and nocturnal nature make studying bats challenging. Now, researchers at the Max Planck Institute of Animal Behavior are shining a rare light inside the black box of bat migration.

In a new study published in Science this month, a team of biologists used tiny tags attached between bats' shoulder blades to track their movements. The tags, which the researchers developed, used the Internet of Things (a wireless network of computers, smartphones, and devices that can transfer information) to triangulate the bats' position.

"On certain nights, we saw an explosion of departures that looked like bat fireworks," lead author Edward Hurme, a biologist at the Max Planck Institute of Animal Behavior, says in a statement. "We needed to figure out what all these bats were responding to on those particular nights."

The team followed the movements of 71 female noctule bats (Nyctalus noctula) across central Europe during their spring migrations. They tagged bats across three years, though each bat's tracker fell off naturally after about four weeks. Originally tagged in Switzerland, the bats later dispersed, flying in a general northeastern direction to Germany, Poland, and the Czech Republic, reports Science's Elizabeth Pennisi. The research revealed that when the bats migrated, they would fly up to 238 miles each night, nearly 125 miles longer than previously thought.

The trackers remained on the bats for up to four weeks, then naturally fell off.
MPI of Animal Behavior / Christian Ziegler

After incorporating weather data into their analysis, the researchers concluded that the bats coordinate their movements with warm fronts that precede storms. These nifty night surfers use the strong winds generated by the front to get a boost to their destination and expend less energy in the process, according to the paper.

"This was actually a big surprise. We had some clue that bats were responding to good wind conditions, but we didn't think that there was this connection to storms," Hurme told NPR's Jonathan Lambert.

The scientists still don't know how the bats can predict a storm is coming, but they hope the technology they developed will allow for more bat studies. "This technology revolutionizes the tracking of bat movements and will surely help researchers answer many questions about migration," says Charlotte Roemer, a conservation biologist at France's National Museum of Natural History who was not involved in the study, to Science. "The possibilities are very exciting."

For instance, further research on this topic might help protect bats from human-caused fatalities, especially as the animals are increasingly endangered. Understanding where and when bats migrate could help wind turbine operators mitigate collisions with the blades, which cause millions of bat deaths globally each year.

"More studies like this will pave the way for a system to forecast bat migration," Hurme says in the statement. "We can be stewards of bats, helping wind farms to turn off their turbines on nights when bats are streaming through."
  • Cerebras Systems teams with Mayo Clinic on genomic model that predicts arthritis treatment
    venturebeat.com
    Cerebras Systems has teamed with Mayo Clinic to create an AI genomic foundation model that predicts the best medical treatments.
  • Godfall developer Counterplay has reportedly shut down
    www.gamesindustry.biz
    Godfall developer Counterplay has reportedly shut down
Whilst not formally confirmed, a since-edited LinkedIn post stated the studio had "disbanded"
Image credit: Counterplay Games / Gearbox
News by Vikki Blake, Contributor. Published on Jan. 14, 2025

Godfall developer Counterplay has reportedly shut down. Whilst not formally confirmed by the studio, PlayStation Lifestyle reportedly spotted and verified a post on LinkedIn that stated the studio had "disbanded" after a partnership with Jackalyptic fell through.

"Over the past six months or so our project at Jackalyptic has been supercharged by the world-class devs at Counterplay Games," the statement began. "It's impossible to overstate their impact. From the very first day they put their shoulders to the wheels like it was their baby.

"Unfortunately, we were unable to continue our partnership into the new year and [Counterplay Games] was disbanded," it concluded, before sharing profiles of those impacted by the changes. The post has since been edited to erase mention of the closure. It is unclear how many people have been affected.

Despite backing from Gearbox, Counterplay's sole published game, Godfall - which was developed with Disruptive Games as a PS5 launch title - released to middling critic and player reviews.

The swath of job cuts from last year seems to be continuing in 2025. Yesterday, we reported that Robocraft 2 developer Freejam had shuttered. Swedish games firm Enad Global 7 (EG7) also initiated the "wind down" of Toadman Interactive, which resulted in 69 job losses and 38 layoffs at Piranha Games. In the first two weeks of 2025 alone, over 150 developers have lost their jobs, including cuts at Splash Damage and Jar of Sparks.
  • ChatGPT can now handle reminders and to-dos
    www.theverge.com
    ChatGPT can now handle reminders and to-dos
The AI chatbot can now set reminders and perform recurring actions.
By Kylie Robison, a senior AI reporter working with The Verge's policy and tech teams. She previously worked at Fortune Magazine and Business Insider. Jan 14, 2025, 6:00 PM UTC
Illustration: The Verge

OpenAI is launching a new beta feature in ChatGPT called Tasks that lets users schedule future actions and reminders. The feature, which is rolling out to Plus, Team, and Pro subscribers starting today, is an attempt to make the chatbot into something closer to a traditional digital assistant (think Google Assistant or Siri) but with ChatGPT's more advanced language capabilities.

Tasks works by letting users tell ChatGPT what they need and when they need it done. Want a daily weather report at 7AM? A reminder about your passport expiration? Or maybe just a knock-knock joke to tell your kids before bedtime? ChatGPT can now handle all of that through scheduled one-time or recurring tasks.

To use the feature, subscribers need to select "4o with scheduled tasks" in ChatGPT's model picker. From there, it's as simple as typing out what you want ChatGPT to do and when you want it done. The system can also proactively suggest tasks based on your conversations, though users have to explicitly approve any suggestions before they're created. (Honestly, I feel like suggestions have the potential of creating annoying slop by accident.)

All tasks can be managed either directly in chat threads or through a new Tasks section (available only via the web) in the profile menu, so it's easy to modify or cancel any task you've set up. Upon completion of these tasks, notifications will alert users on web, desktop, and mobile.
There's also a limit of 10 active tasks that can run simultaneously.

OpenAI hasn't specified when (or if) the feature might come to free users, suggesting Tasks might remain a premium feature to help justify ChatGPT's subscription costs. The company has monthly $20 and $200 subscription tiers.

An example of a ChatGPT Task. OpenAI

While scheduling capabilities are a common feature in digital assistants, this marks a shift in ChatGPT's functionality. Until now, the AI has operated solely in real time, responding to immediate requests rather than handling ongoing tasks or future planning. The addition of Tasks suggests OpenAI is expanding ChatGPT's role beyond conversation into territory traditionally held by virtual assistants.

OpenAI's ambitions for Tasks appear to stretch beyond simple scheduling, too. Bloomberg reported that Operator, an autonomous AI agent capable of independently controlling computers, is slated for release this month. Meanwhile, reverse engineer Tibor Blaho found that OpenAI appears to be working on something codenamed Caterpillar that could integrate with Tasks and allow ChatGPT to search for specific information, analyze problems, summarize data, navigate websites, and access documents, with users receiving notifications upon task completion.

As I previously wrote back in October, the rise of agentic AI in 2025 isn't just about technological advancement; it's about economics. These agent-like features represent a strategic way to monetize expensive AI infrastructure. While OpenAI's decision to put this functionality behind ChatGPT's paywall was predictable, the real question remains: Will it deliver reliable results? The last time I got an OpenAI agent demo, it produced inaccurate information.
The coming months will reveal whether they've solved these fundamental reliability challenges. I also think of this new feature as a slightly more sophisticated script; at the end of the day, Tasks is following a simple, rote set of instructions, much like a typical bot. The goal of many frontier AI labs like OpenAI is to evolve these features into something that is able to interact with environments, learn from feedback, and make decisions without constant human input.

However, questions remain about how reliable these scheduled tasks will be and what happens if ChatGPT fails to deliver time-sensitive information. OpenAI's decision to launch Tasks in beta suggests they're still working out these details and want to gather real-world feedback before a wider rollout.

For now, if you're a paying ChatGPT user, you can start experimenting with Tasks by looking for the "4o with scheduled tasks" option in your model picker. Just remember it's still in beta; maybe don't rely on it for that super important meeting reminder just yet.