• Why fast-learning robots are wearing Meta glasses
    www.computerworld.com
AI and robots need data, and lots of it. Companies with millions of users have an advantage in this data collection, because they can use the data of their customers.

A well-known example is Google's reCAPTCHA v2 service, which I recently lambasted. reCAPTCHA v2 is a cloud-based service operated by Google that is supposed to stop bots from proceeding to a website (it doesn't). When users prove they're human by clicking on the boxes containing traffic lights, crosswalks, buses and other objects specified in the challenge, that information is used to train models for Waymo's self-driving cars and improve the accuracy of Google Maps. It's also used to improve vision capabilities used in Google Photos and image search algorithms.

Google isn't the only company that does this. Microsoft processes voice recordings from Teams and Cortana to refine speech models, for example.

Now Meta is using user data to train robots, sort of. Meta's augmented reality (AR) division and researchers at the Georgia Institute of Technology have developed a framework called EgoMimic, which uses video feeds from smartglasses to train robots.

Regular robot imitation learning requires painstaking teleoperation of robotic arms: humans wearing special sensor-laden clothing and VR goggles perform the movements robots are being trained for, and the robot software learns how to do the task. It is a slow, expensive, and unscalable process.

EgoMimic instead uses Meta's Project Aria glasses to train robots. Meta announced Project Aria in September 2020 as a research initiative to develop advanced AR glasses. The glasses have five cameras (including monochrome, RGB, and eye-tracking), inertial measurement units (IMUs), microphones, and environmental sensors. The project also has privacy safeguards such as anonymization algorithms, a recording indicator LED, and a physical privacy switch, according to Meta. The purpose of Aria is to enable applications in robotics, accessibility, and 3D scene reconstruction. Meta rolled out the Aria Research Kit (ARK) on Oct. 9.

By using Project Aria glasses to record first-person video of humans performing tasks like shirt folding or packing groceries, Georgia Tech researchers built a dataset that's more than 40 times more demonstration-rich than equivalent robot-collected data.

The technology acts as a sophisticated translator between human and robotic movement. Using mathematical techniques called Gaussian normalization, the system maps the rotations of a human wrist to the precise joint angles of a robot arm, ensuring natural motions get converted into mechanical actions without dangerous exaggerations. This movement translation works alongside a shared visual understanding: both the human demonstrator's smartglasses and the robot's cameras feed into the same artificial intelligence program, creating common ground for interpreting objects and environments. A critical safety layer called action masking prevents impossible maneuvers, functioning like an invisible fence that stops robots from attempting any biomechanically implausible actions they observe in humans.

The magic happens through the EgoMimic algorithm, which bridges the gap between human demonstration and robotic execution. Here's the funny part: after collecting video data, researchers mount the same AR glasses onto a robot, effectively giving it eyes and sensor data that perceive the tasks from the exact same perspective as the human who demonstrated the task.

That's right. The research involves robots wearing glasses.
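The researchers' code isn't reproduced here, but the two ideas above, statistically normalizing human motion before retargeting it onto a robot, and masking out motions the robot cannot safely execute, can be sketched in a few lines. The Python below is a purely illustrative toy: the function names, joint limits, and the simple Gaussian (z-score) normalization are my assumptions, not the EgoMimic implementation.

```python
import numpy as np

# Hypothetical joint limits (radians) for a 3-degree-of-freedom robot wrist.
JOINT_LIMITS = np.array([[-1.5, 1.5],
                         [-0.8, 0.8],
                         [-2.0, 2.0]])

def normalize_demo(human_angles: np.ndarray) -> np.ndarray:
    """Gaussian-normalize a (T, 3) array of human wrist angles so each joint's
    demonstration data has zero mean and unit variance."""
    mean = human_angles.mean(axis=0)
    std = human_angles.std(axis=0) + 1e-8
    return (human_angles - mean) / std

def retarget(normalized: np.ndarray, robot_mean: np.ndarray,
             robot_std: np.ndarray) -> np.ndarray:
    """Map normalized human motion into the robot's own motion statistics."""
    return normalized * robot_std + robot_mean

def action_mask(robot_angles: np.ndarray) -> np.ndarray:
    """The 'invisible fence': clamp commanded joint angles to physical limits
    so implausible human motions are never sent to the hardware."""
    return np.clip(robot_angles, JOINT_LIMITS[:, 0], JOINT_LIMITS[:, 1])

# Toy usage: retarget a 100-step human demonstration onto the robot.
human_demo = np.random.uniform(-2.5, 2.5, size=(100, 3))
commands = action_mask(retarget(normalize_demo(human_demo),
                                robot_mean=np.zeros(3),
                                robot_std=np.array([0.5, 0.3, 0.7])))
```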
(You can see what a robot wearing the glasses looks like in the video published by Meta.) The algorithm then translates the human movements from the videos into actionable instructions for the robot's joints and grippers. This approach reduces the need for extensive robot-specific training data.

Training by teaching

The approach taken by EgoMimic could revolutionize how robots are trained, moving beyond current methods that require physically guiding machines through each step. The EgoMimic approach basically democratizes robot training, enabling small businesses, farmers and others who normally wouldn't even attempt robot training to do the training themselves. (Note that Meta's and Georgia Tech's EgoMimic is different from the University of North Carolina at Charlotte's EgoMimic project, which uses first-person videos for training large vision-language models.) EgoMimic is expected to be publicly demonstrated at the 2025 IEEE International Conference on Robotics and Automation in Atlanta, which begins May 19.

This general approach is likely to move from the research stage to the public stage, where multimodal AI will process video feeds from millions of users wearing smartglasses (which in the future will come standard with all the cameras and sensors found in Project Aria glasses). AI will identify the user videos in which the wearer is doing something (cooking, tending a garden, operating equipment) and make that footage available to robot training systems. This is basically how chatbots got their AI training: billions of people live their lives and in the process generate content of every sort, which is then harvested for AI training.

The EgoMimic researchers didn't invent the concept of using consumer electronics to train robots. One pioneer in the field, a former healthcare-robot researcher named Dr. Sarah Zhang, has demonstrated 40% improvements in the speed of training healthcare robots using smartphones and digital cameras; they enable nurses to teach robots through gestures, voice commands, and real-time demonstrations instead of complicated programming. This improved robot training is made possible by AI that can learn from fewer examples. A nurse might show a robot how to deliver medications twice, and the robot generalizes the task to handle variations like avoiding obstacles or adjusting schedules. The robots also use sensors like depth-sensing cameras and motion detectors to interpret gestures and voice instructions.

It's easy to see how the Zhang approach, combined with the EgoMimic system, using smartglasses and deployed at a massive scale, could dramatically enhance robot training.

Here's a scenario to demonstrate what might be possible with future smartglasses-based robot training. A small business owner who runs a restaurant can't keep up with delivery pizza orders. So he or she buys a robot to make pizzas faster, puts on the special smartglasses while making pizzas, then simply puts the same glasses on the robot. The robot can then use that learning to take over the pizza-making process.

The revolution here wouldn't be robotic pizza making. In fact, robots are already being used for commercial pizzas. The revolution is that a pizza cook, not a robot scientist or developer, is training robots with his or her own proprietary and specific recipes. This could give a small business owner a huge advantage over buying a pre-programmed robot that performs a task generically and identically for every other buyer of that particular robot.

You could see similar use cases in homes or factories.
If you want a robot to perform a task, you simply teach it how to do that task by demonstrating it. Of course, this real-world, practical use case is years away. And using smartglasses to train robots isn't something the public, or even technologists, are thinking much about right now.

But smartglasses-based training is just one way robotics will be democratized and mainstreamed in the years ahead.
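For readers wondering what "teach it by demonstrating" means mechanically, demonstration-based training of this kind generally reduces to some form of imitation learning: logged (observation, action) pairs from the demonstration become supervised training data for the robot's control policy. Here is a minimal, generic behavior-cloning sketch in PyTorch; it is not EgoMimic's code, and the network size, data shapes, and training settings are arbitrary assumptions for illustration.

```python
import torch
from torch import nn

# Assumed shapes: a 64-dim feature vector per egocentric camera frame,
# and a 7-dim arm/gripper command per timestep.
OBS_DIM, ACT_DIM = 64, 7

policy = nn.Sequential(
    nn.Linear(OBS_DIM, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, ACT_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Stand-in tensors for logged (observation, action) pairs from demonstrations.
obs = torch.randn(512, OBS_DIM)
actions = torch.randn(512, ACT_DIM)

# Behavior cloning: regress the demonstrated actions from the observations.
for epoch in range(20):
    pred = policy(obs)
    loss = nn.functional.mse_loss(pred, actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At deployment, the trained policy maps what the robot sees to what it should do.
```

The few-shot behavior described for the healthcare robots, generalizing from just two medication-delivery demonstrations, would plausibly sit on top of a setup like this, typically by fine-tuning a policy pretrained on much larger and more varied demonstration data.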
  • Inside Apple Shortcuts - the best feature that can revolutionize how you work
    appleinsider.com
Next time your Mac is too slow, or you're having to keep repeating the same steps on your iPhone, learn Shortcuts and it will dramatically improve your speed and even accuracy.

Shortcuts on an iPhone

Shortcuts take one, ten, or a hundred different steps and have your Mac, iPhone, and iPad instantly do them all for you, after you've tapped or clicked a single button. Open a dozen documents, turn them into PDFs, email them to your clients and play some music at the same time.

It takes a little effort to learn how to exploit Shortcuts. But then they do everything you need, quickly, flawlessly, and over and over again.

Continue Reading on AppleInsider | Discuss on our Forums
  • Best iPhone 16e cases available right now
    appleinsider.com
If you're planning on purchasing an iPhone 16e, make sure to grab the perfect phone case to protect your new purchase. Here are some great choices, from clear to rugged and everything in between.

Best iPhone 16e cases

No matter what kind of case you're looking for, chances are it's out there somewhere. From extreme drop protection to kickstands and a wide array of designs, there's a lot of variety in the case market.

AppleInsider has rounded up many popular cases you can buy to protect your new smartphone. The list isn't exhaustive of the entire market, as there are more expensive and unusual items, but it is a good starting point to find your ideal iPhone case.

Continue Reading on AppleInsider | Discuss on our Forums
  • MVRDV uses 500 recycled plastic mats to create Bangkok Design Week installation
    archinect.com
MVRDV has unveiled Mega Mat, a temporary installation at Bangkok Design Week that transforms a public space into an educational experience on plastic waste and recycling in Thailand. Located at Lan Khon Mueang Town Square, the installation seeks to combine urban design with environmental storytelling, using over 500 recycled plastic mats to create a 9,200-square-foot infographic.

Image credit: DOF SkyGround

The Mega Mat installation uses 532 modular mats, each measuring 6 by 3 feet, inspired by the traditional Thai Sua household mat. The bright colors of the mats form a gradient that visually represents Thailand's plastic waste management: red indicates unsanitary landfills, orange covers sanitary landfills with pollution barriers, yellow depicts uncollected waste, and green highlights the percentage of plastic recycled. The color scheme also indicates a connection to the vibrant roofs of the nearby Wat Suthat Thepwararam temple.
  • La Quimera is the Next Shooter by the Studio Formerly Known as 4A Games Ukraine
    gamingbolt.com
Developer Reburn, previously 4A Games Ukraine, has announced its new game: La Quimera. The game, pitched as an entirely new IP for Reburn, will feature developers who have previously worked on games in the Metro series. Check out the announcement trailer below.

La Quimera will use Latin America as its main setting, with the game taking place in a megalopolis alongside a lush jungle. Its story is written by Nicolas Winding Refn (Drive, The Neon Demon) and E.J.A. Warren. According to the press release announcing La Quimera, its story will combine advanced sci-fi weaponry and technology with Latin American folklore.

The story takes place in the year 2064, in a world where most nations have gone extinct thanks to a series of catastrophes that took place between the 2030s and 2040s. While most of the world is ruled by microstates, there is constant conflict between them and corporations.

"Reburn is proud to introduce La Quimera, which draws upon our success crafting narrative-driven shooter games for the Metro game series," said Reburn founder and CEO Dmytro Lymar in the press release. "We look forward to sharing this mysterious new world with players and hope they revel in suiting up to join the fight."

While primarily a story-driven shooter, La Quimera will also feature co-op multiplayer for up to three total players. Players will get to make use of a host of different weapons and abilities as they make their way through the game's story, thanks to the exosuit the player's character is equipped with. These exosuits can be customised for different needs, depending on the mission.

Currently in development for PC, La Quimera doesn't yet have a release date.
  • Helldivers 2s Galactic War Feature Has Been Taken Offline to Fix a Backend Error
    gamingbolt.com
Developer Arrowhead Game Studios has announced that it will be taking Helldivers 2's Galactic War mechanic offline for a little while. Because of this, while players can still join online games, they won't be able to make any progress in the Galactic War while it remains offline.

The studio took to social media platform X to explain why it had taken the Galactic War down. According to the post, an error on the game's backend caused the liberation rates for planets to be too high. The studio decided to take the Galactic War down in order to address these backend issues.

"Today we updated our backend and we have encountered an error that is affecting planet liberation rates, causing them to skyrocket," wrote the studio on February 27, 2025. "While we know this is fun, we also know that it is breaking the Galactic War mechanic itself. In short, this is harming the game experience and will have detrimental effects on the war mechanic in the future."

The studio also explained that, since it was a mistake on the developer's part, the damage done by the bug will not be reverted. Rather, players will get to enjoy their victory over the Major Order once the Galactic War system is back online.

While work on bringing the Galactic War system back online is currently ongoing at the studio, there has been no mention of a solid timeline for when we can expect to see it.

For those that may not know, the Galactic War forms what is essentially the backbone of the ongoing story in Helldivers 2. The system is used by Arrowhead to issue orders that players can then collaborate to complete. These orders can range from defending planets from invasion to capturing new ones. All of this culminates in a gigantic storyline that tells the tale of humanity's fight against alien forces.

Earlier this month, Arrowhead Game Studios decided to celebrate the first anniversary of Helldivers 2 with a video looking back at the first year of the game's Galactic War story. Narrated by Game Master J.O.E.L., the video focused on one of the iconic moments from the game's early days: the Defense of Mort.

"Over the course of the last year of the Galactic War, we have, on numerous occasions, been positively surprised by the actions and choices that the community have taken," said Joel in the video. "Choices that are now canon, that are now part of Helldivers lore for all time."

Helldivers 2 is a co-op shooter available on PC and PS5. Since its release, it has gotten quite a bit of support through the release of free updates as well as paid DLC in the form of Premium Warbonds. As of February 2025, the latest Premium Warbond is Servants of Freedom, which brings with it Portable Hellbombs, among other things. The studio is also seemingly working on a new collaboration event in the vein of what we saw with the Killzone 2 event from last year.

For more details about Helldivers 2, check out our review.
  • Foundry releases Nuke 16.0
    www.cgchannel.com
Originally posted on 12 December 2024 for the beta, and updated for the final release.

Foundry has released Nuke 16.0, the latest update to its family of compositing apps. The update lays the foundations of a new native multishot compositing workflow, updates Nuke's 3D compositing system, and improves interactive performance when rotoscoping.

NukeX, the advanced edition, also gets workflow improvements to the BlinkScript editor. Nuke Studio, which includes editorial capabilities, gets a new contact sheet view, support for multi-channel soft effects, and a new quick export system.

A parallel release, Nuke 15.2, provides some of the same features, but remains on the VFX Reference Platform CY2023 spec, rather than updating to CY2024.

Nuke 16.0, NukeX 16.0, Nuke Studio 16.0: New native multishot workflow

Nuke 16.0 is Foundry's first serious step towards implementing support for a native multishot compositing workflow inside the software. Whereas Nuke was designed for use on individual shots, with a one-to-one relationship between a shot and a Nuke .nk script, Foundry now aims to let artists reuse scripts across shots.

The backbone of the system is Graph Scope Variables (GSVs), which make it possible to define the data required for multiple contexts and scopes in a single Nuke script, while Group nodes define the nature of those scopes, and make it possible to inherit and override variables.

The release introduces many of the day-to-day features needed for working with GSVs and Group nodes, including a new VariableGroup node, for defining variables or scopes, and a VariableSwitch node, for switching between different shots or scopes using those variables. A new Variables Panel lets artists interact with the available variables within a script, and a Group View lets them edit the contents of multiple Group nodes without having to switch tabs.

GSVs are also now supported in LiveGroups, and when rendering from the command line, making it possible to render scripts in the correct shot context. Some of the functionality was actually introduced in Nuke 15.1, but the UI for the new features was hidden by default.

Nuke 16.0, NukeX 16.0, Nuke Studio 16.0: Link Nodes

Workflow improvements to Nuke's node graph include Link Nodes, a new node type that makes it possible to create a linked copy of a node. Changes made to one node are then automatically propagated to the other, with users still able to manually override any of the knobs.
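To picture how the scope-and-override behavior behind GSVs works, here is a tiny, purely conceptual Python model. It is emphatically not Nuke's Python API; the class and function names are invented, and it only illustrates the general idea of variables defined at a root scope, overridden in nested scopes, and used to switch between per-shot branches.

```python
# Conceptual toy model of scoped variables with inheritance and overrides.
# Not Nuke's API; names here are invented for illustration only.

class Scope:
    def __init__(self, variables, parent=None):
        self.variables = variables      # variables defined at this scope
        self.parent = parent            # enclosing scope, if any

    def resolve(self, name):
        """Look up a variable here, falling back to the parent scope."""
        if name in self.variables:
            return self.variables[name]
        if self.parent is not None:
            return self.parent.resolve(name)
        raise KeyError(name)

# A root scope holds show-wide defaults; a shot scope overrides what differs.
root = Scope({"show": "demo", "colorspace": "ACEScg", "shot": "sh010"})
sh020 = Scope({"shot": "sh020"}, parent=root)   # inherits colorspace from root

def variable_switch(scope, branches):
    """Pick a branch (e.g. a plate path) from the resolved 'shot' value."""
    return branches[scope.resolve("shot")]

plates = {"sh010": "/plates/sh010.exr", "sh020": "/plates/sh020.exr"}
print(variable_switch(root, plates))    # /plates/sh010.exr
print(variable_switch(sh020, plates))   # /plates/sh020.exr
print(sh020.resolve("colorspace"))      # ACEScg, inherited from the root scope
```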
Nuke 16.0, NukeX 16.0, Nuke Studio 16.0: Updates to the 3D system and ScanlineRender

The USD-based 3D compositing system introduced two years ago in Nuke 14.0 gets an update, although it remains in beta. While there are a couple of new nodes, a key objective is simply to update the most commonly used nodes, including GeoCard, GeoTransform, GeoMerge and GeoScene, based on user feedback.

Foundry has also started laying the building blocks for ray tracing, with ScanlineRender2, the new version of the ScanlineRender node, using a ray-traced architecture by default, although shader support for lights and materials is still not fully implemented. The functionality is aimed at compositors, particularly for generating accurate render passes late in production, and shouldn't be thought of as a replacement for large-scale scene renders.

Nuke 16.0, NukeX 16.0, Nuke Studio 16.0: Better roto performance

Under the hood, the release also features a number of changes intended to improve performance when rotoscoping, particularly when using large numbers of roto shapes, or when working with motion blur. The changes are intended to reduce UI lag, and to raise the frame rates achieved when playing back complex shots to levels sufficient to resolve edge issues like boiling.

NukeX 16.0, Nuke Studio 16.0: Quality-of-life improvements to BlinkScript

TDs get quality-of-life improvements to the BlinkScript editor, Nuke's native scripting system, which has been updated to support common IDE functionality. That includes text and type behaviours like auto-indenting and bracket autoclosure; find and replace; and a Tab menu with autofill suggestions and context-specific documentation.

Library Files enable users to share common functions and code snippets across multiple kernels and projects. A new Safety Rails system makes it easier to catch problems when prototyping new BlinkScripts and makes the consequences less dramatic when mistakes occur.

Nuke Studio 16.0: New Contact Sheet, Multichannel Soft Effects and quick export system

New features in Nuke Studio include the Contact Sheet view. It makes it easier to compare multiple shots, and supports user-defined rules for the order in which the contact sheet is populated with shots, as well as tag filtering of shots.

The Soft Effects system gets support for multichannel effects, with users now able to view and modify multilayer EXR files within the timeline. Potential uses include masking color effects with non-RGBA layers like mattes or depth passes. Support for layer transforms in the timeline is intended to reduce the need for slap comps, and to enable supervisors to provide more accurate feedback when creating sample frames.

In addition, a new render engine based on Nuke Studio's real-time playback engine speeds up exports of sequences as ProRes, DNxHD, DNxHR and H.264 videos. It provides an average 12-fold increase in performance over the existing export system.

VFX Reference Platform support and changes to pipeline integration

Nuke 16.0 also moves the software to support the VFX Reference Platform CY2024 spec. A parallel release, Nuke 15.2, is intended for studios that don't want to update their pipelines from the CY2023 spec, and has some of the same features, including multishot compositing. Both releases switch Nuke from Apple's system OpenGL library to Foundry's own alternative, FoundryGL, when running on macOS, Apple having deprecated OpenGL in macOS 10.14.

Price and system requirements

Nuke 16.0 is compatible with Windows 10+, Rocky Linux 9.0 and macOS 14.0+. The software is rental-only. Annual subscriptions cost $3,649/year for Nuke, up $180/year since the release of Nuke 15.0; $4,969/year for NukeX, up $240/year; and $6,069/year for Nuke Studio, up $290/year.
Nuke Render licenses cost $440/year, up $21/year. Subscriptions to Nuke Indie, the feature- and resolution-limited commercial edition for artists earning under $100,000/year, cost $499/year.

Read an overview of the features in Nuke 16.0 on Foundry's website
Read a full list of new features in Nuke 16.0 in the online release notes
  • Boris FX releases SynthEyes 2025
    www.cgchannel.com
SynthEyes 2025 introduces a new readymade head mesh to speed up head tracking work.

Boris FX has released SynthEyes 2025, the latest version of the 3D tracking software. The release adds a new AI-based roto mask generation system, a readymade 3D head mesh to simplify head tracking, and a new multi-export system to streamline pipeline integration.

A powerful, affordable 3D object and camera tracking application

First released two decades ago, and acquired by Boris FX in 2023, SynthEyes is a standalone 3D tracking app, used in VFX pipelines for its combination of speed, accuracy and affordability. It can handle a wide range of shot types, including stereoscopic and 360-degree virtual reality, and exports to most major 3D and compositing applications.

SynthEyes 2025: new AI-based automated roto masking system

SynthEyes becomes the latest of Boris FX's applications to support AI-powered rotoscoping workflows based on the company's Mask ML algorithm. Previously integrated in Silhouette and Mocha Pro, the company's other tracking app, it enables users to isolate objects in a shot simply by clicking on them, with the software automatically inferring where the edges of the object lie, and generating a corresponding mask.

In SynthEyes, it is primarily intended to streamline the process of removing unwanted tracking points generated during auto-tracking, and is integrated within the existing Roto Masking room. It supports a layer-based workflow, and background tracking and solving of Mask ML layers via the SynthEyes Batcher.

New readymade 3D head mesh simplifies digital makeup and beauty work

SynthEyes 2025 also adds a new readymade head mesh to the SynthEyes library, intended to simplify close-up head tracking for tasks like digital makeup and beauty work. Users can align the mesh to an actor's head in the source footage, have SynthEyes track it through the shot, then export the animated head geometry in Alembic or FBX format. The workflow is reminiscent of the KeenTools FaceBuilder and FaceTracker plugins, although to judge from the video above, aligning the mesh to the footage in SynthEyes requires more manual input.

Improvements to export and presets management

Workflow improvements include a new Multi-Export system for managing the settings for all of the formats in which SynthEyes exports data from a single centralized interface panel. It standardizes file naming and organization with preset tags, simplifying versioning and asset management, and custom export settings can be saved as presets for reuse in other projects. The workflow is underpinned by new Project Selector, Project Tag Manager and Workflow Presets Manager tools, shown in more detail in this video.

Other changes include a new Assimilate 360VR Stabilization script for exporting data to Scratch, Assimilate's color grading and finishing system, and Live FX, its media server for LED walls.

Pricing and availability

SynthEyes 2025 is available for Windows 10+, CentOS 7+ Linux or a compatible distro, and macOS 12.0+. New perpetual licenses cost $595. Subscriptions cost $49/month or $295/year.

Read an overview of the new features in SynthEyes 2025 on Boris FX's blog
Read a full list of new features in SynthEyes 2025 in the online release notes
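As noted above, the main use of these masks in SynthEyes is to throw away unwanted tracking points generated during auto-tracking. As a rough, generic illustration of that step (plain NumPy, not SynthEyes' Mask ML or its scripting interface), culling 2D trackers against a per-frame boolean mask might look like this:

```python
import numpy as np

def filter_trackers(points_xy: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Drop 2D tracker points that fall inside a masked (True) region.

    points_xy: (N, 2) array of pixel coordinates (x, y).
    mask:      (H, W) boolean array, True where trackers should be discarded,
               e.g. over a moving actor isolated by a roto mask.
    """
    xs = points_xy[:, 0].round().astype(int).clip(0, mask.shape[1] - 1)
    ys = points_xy[:, 1].round().astype(int).clip(0, mask.shape[0] - 1)
    keep = ~mask[ys, xs]
    return points_xy[keep]

# Toy usage: a 100x100 frame where the left half of the image is masked out.
mask = np.zeros((100, 100), dtype=bool)
mask[:, :50] = True
trackers = np.array([[10.0, 20.0], [80.0, 30.0], [49.0, 99.0]])
print(filter_trackers(trackers, mask))   # keeps only the point at x = 80
```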
  • 4A Games Ukraine rebrands to Reburn, announces new IP
    www.gamesindustry.biz
4A Games itself has not rebranded, and continues to work on the next game in the Metro series.

Image credit: Reburn

News by Sophie McEvoy, Staff Writer. Published on Feb. 28, 2025.

4A Games Ukraine has rebranded to Reburn, and has announced its new IP La Quimera. As reported by Polygon, La Quimera was co-created by Reburn alongside filmmaker Nicolas Refn (Drive, Only God Forgives) and writer E.J.A. Warren.

Currently in development, La Quimera is a first-person shooter set in a fictional Latin American metropolis. It is both a single-player and co-op game with up to two other players.

4A Games clarified on social media that the overall studio has not rebranded, and is currently working on the next game in the Metro series. "For clarity, we have not rebranded or changed in any way," it said. "The same founders and beating heart of the Metro series continue to work on the next Metro game from our studios in Ukraine, Malta, and remotely. We are also still at work on our other new IP as referenced in previous studio updates."

Reburn founder and CEO Dmytro Lymar also confirmed that the 4A Games Ukraine trademark will remain with the Metro series, while Reburn is for the new IP. "We came up with Reburn with the help of a creative agency here in Kyiv, who will announce their involvement a bit later," said Lymar. "The name means 'burning again,' but for us it has the meaning similar to rebirth or reincarnation in a new form, for the creation of a new game with [its] own IP. But we [have kept] our original values of making games that we would love to play ourselves and taking into account ideas from any member of the team."
  • WB Games to refocus business around 'tentpole franchises' and 'top tier characters'
    www.gamedeveloper.com
Warner Bros. Games intends to rebuild its video game business around four tentpole franchises: Harry Potter, Game of Thrones, Mortal Kombat, and DC.

The company outlined that plan in its latest shareholder letter and said it continues to view its games business as a "strategic differentiator." It reiterated that 2024 was a 'disappointing year' for the division, with titles like Suicide Squad: Kill The Justice League and Multiversus underperforming, and confirmed it will be restructuring around "proven IP and games from proven, world class studios."

That cost-cutting initiative has already resulted in the closure of notable studios such as Shadow of Mordor developer Monolith Productions (which had been working on a Wonder Woman project), Multiversus developer Player First Games, and WB San Diego. Those closures followed layoffs at other internal studios including Rocksteady and WB Games Montreal.

WB Games seems intent on looking to the past for cues on how to revitalize its flagging business, with game revenues decreasing by 29 percent year-on-year during Q4. Annual content revenue, which includes video games, also decreased by 8 percent year-on-year across FY2024.

WB Games repeatedly name-dropped 2023 release Hogwarts Legacy as an example of how it can leverage a major franchise to deliver success, and it seems the company believes prioritizing a few colossal franchises and 'top tier characters' (obviously Batman was mentioned here) is now the correct play.

"Just two years ago, our Games team broke through with Hogwarts Legacy and created a completely new gaming franchise that was the best-selling game of the year, a result that only three other franchises in the last 15 years have achieved," continues the shareholder letter.

"That gives us confidence that with our re-focused strategy we can get back to producing high-quality games built for long term consumer engagement, which we expect to propel our Games division back to profit in 2025 and emerge as a more significant contributor to growth in the years ahead."