• Where to Find All Grubs in Hollow Knight
    gamerant.com
Hollow Knight has no shortage of collectibles, many of which are required to get the challenging 112% completion status for the game. One of these collectibles is the Grub Jar, which the Knight can break open to free the Grub trapped within. In total, there are 46 Grubs Hollow Knight players can rescue throughout Hallownest.
  • Into the Omniverse: How OpenUSD and Digital Twins Are Powering Industrial and Physical AI
    blogs.nvidia.com
Editor's note: This blog is a part of Into the Omniverse, a series focused on how developers, 3D practitioners and enterprises can transform their workflows using the latest advances in OpenUSD and NVIDIA Omniverse.

Investments in industrial AI and physical AI are driving increased demand for digital twins across industries. These physically accurate, virtual replicas of real-world environments, facilities and processes aren't just helping manufacturers streamline planning and optimize operations. They serve as the training ground for helping ensure vision AI agents, autonomous vehicles and robot fleets can operate safely, efficiently and reliably.

Creating physically accurate simulation environments that enable physical AI to transition seamlessly to the real world typically involves substantial manual effort. However, with the latest advancements in OpenUSD, a powerful open standard for describing and connecting complex 3D worlds, alongside improvements in rendering, neural reconstruction and world foundation models (WFMs), developers can fast-track the construction of digital twins at scale.

Accelerating Digital Twin and Physical AI Development

To speed digital twin and physical AI development, NVIDIA announced at this year's SIGGRAPH conference new research, NVIDIA Omniverse libraries, NVIDIA Cosmos WFMs and advanced AI infrastructure, including NVIDIA RTX PRO Servers and NVIDIA DGX Cloud.

- The latest Omniverse software development kits bridge MuJoCo and Universal Scene Description (OpenUSD), enabling over 250,000 MJCF robot learning developers to simulate robots across platforms.
- Omniverse NuRec libraries and AI models enable Omniverse RTX ray-traced 3D Gaussian splatting, allowing developers to capture, reconstruct and simulate the real world in 3D using sensor data.
- NVIDIA Isaac Sim 5.0 and Isaac Lab 2.2 open-source robot simulation and learning frameworks are now available on GitHub. Isaac Sim features NuRec neural rendering and new OpenUSD robot and sensor schemas to narrow the simulation-to-reality gap.
- Cosmos WFMs, including Cosmos Transfer-2 and NVIDIA Cosmos Reason, deliver leaps in synthetic data generation and reasoning for physical AI development.
- NVIDIA research advances in rendering and AI-assisted material generation help developers scale digital twin development.

Growing OpenUSD Ecosystem

OpenUSD serves as a foundational ecosystem for digital twin and physical AI development, empowering developers to integrate industrial and 3D data to create physically accurate digital twins.

The Alliance for OpenUSD (AOUSD) recently welcomed new general members, including Accenture, Esri, HCLTech, PTC, Renault and Tech Soft 3D. These additions underscore the continued growth of the OpenUSD community and its commitment to unifying 3D workflows across industries.

To address the growing demand for OpenUSD and digital twins expertise, NVIDIA launched a new industry-recognized OpenUSD development certification and a free digital twins learning path.

Developers Building Digital Twins

Industry leaders including Siemens, Sight Machine, Rockwell Automation, EDAG, Amazon Devices & Services and Vention are building digital twin solutions with Omniverse libraries and OpenUSD to enable transformation with industrial and physical AI.

Siemens Teamcenter Digital Reality Viewer enables engineers to visualize, interact with and collaborate on photorealistic digital twins at unprecedented scale.
These efforts are enabling faster design reviews, minimizing the need for physical prototypes and accelerating time to market, all while reducing costs.

Sight Machine's Operator Agent platform combines live production data, agentic AI-powered recommendations and digital twins to provide real-time visibility into production and enable faster, more informed decisions for plant operations teams.

Rockwell Automation's Emulate3D Factory Test platform enables manufacturers to build factory-scale, physics-based digital twins for simulating, validating and optimizing automation and autonomous systems at scale.

EDAG's industrial digital twin platform helps manufacturers improve project management, optimize production layouts, train workers and perform data-driven quality assurance.

Amazon Devices & Services uses digital twins to train robotic arms to recognize, inspect and handle new devices. Robotic actions can be configured to manufacture products purely based on training performed in simulation, including for steps involved in assembly, testing, packaging and auditing.

Vention is using NVIDIA robotics, AI and simulation technologies, including Omniverse libraries, Isaac Sim and Jetson hardware, to deliver plug-and-play digital twin and automation solutions that simplify and accelerate the deployment of intelligent manufacturing systems.

Get Plugged Into the World of OpenUSD

To learn more about OpenUSD and how to develop digital twin applications with Omniverse libraries, take free courses as part of the new digital twin learning path, and check out the Omniverse Kit companion tutorial and how-to guide for deploying Omniverse Kit-based applications at scale.

Watch a replay of NVIDIA's SIGGRAPH Research Special Address. Plus, try out Omniverse NuRec on Isaac Sim and CARLA, and learn more about Isaac Sim.

Stay up to date by subscribing to NVIDIA Omniverse news, joining the Omniverse community and following Omniverse on Discord, Instagram, LinkedIn, Threads, X and YouTube.

Explore the Alliance for OpenUSD forum and the AOUSD website.

Featured image courtesy of Siemens, Sight Machine.
  • PFX SHIFTS INTO TOP GEAR FOR LOCKED
    www.vfxvoice.com
By TREVOR HOGG
Images courtesy of ZQ Entertainment, The Avenue and PFX.

Plates were captured by a six-camera array covering 180° and stitched together to achieve the appropriate background width or correct angle.

Taking the concept of a single location on the road is Locked, where a carjacker is held captive inside a high-tech SUV that is remotely controlled by a mysterious sociopath. An English-language remake of 4X4, the thriller is directed by David Yarovesky, stars Bill Skarsgård and Anthony Hopkins, and was shot in Vancouver during November and December 2023. Post-production lasted four months, with sole vendor PFX creating 750 visual effects shots with the expertise of 75 artists and the guidance of VFX Supervisor Jindřich Červenka. "Every project is specific and unique," Červenka notes. "Here, we had a significant challenge due to the sheer number of shots [750], which needed to be completed within four months, all produced in 4K resolution. Additionally, at that time, we didn't have background plates for every car-driving shot. We distributed the workload among our three branches in Prague, Bratislava and Warsaw to ensure timely completion." Director Yarovesky had a clear vision. "That allowed us to move forward quickly. Of course, the more creative and complex sequences involved collaborative exploration, but that's standard and part of the usual process."

The greenscreen was set at two distances, with one being closer and lower while the other was an entire wall a few meters away, approximately two meters apart.

A shot taken from a witness camera on the greenscreen stage.

"The biggest challenge [of the three-and-a-half-minute take introducing the carjacker] was the length of the shot and the fact that nothing in the shot was static. Tracking such a shot required significant effort and improvisation. The entire background was a video projection onto simple geometry created from LiDAR scans of the parking lot. It greatly helped that we could use real-set footage, timed exactly as needed, and render it directly from Nuke."
Jindřich Červenka, Visual Effects Supervisor

Previs and storyboards were provided by the client for the more complex shots. "We primarily created postvis for the intense sequence with a car crash, fire and other crazy action," Červenka states. "We needed to solve this entire sequence in continuity. Continuity was a major issue. Throughout the film, we had to maintain continuity in the water drops on all car windows, paying close attention to how they reacted to changes in lighting during the drive. Another area of research involved bokeh effects, which we experimented with extensively. Lastly, we conducted significant research into burning cars, finding many beautiful references that we aimed to replicate as closely as possible." The majority of the visual effects centered around keying, water drops on windows, and cleaning up the interior of the car. Červenka adds, "A few shots included digital doubles. There were set extensions, especially towards the end of the film. Additionally, we worked on fire and rain effects, car replacements in crash sequences, bleeding effects, muzzle flashes, bullet hits, and a bullet-time shot featuring numerous CGI elements." PFX adhered to its traditional workflow and pipeline for shot production.
"We were the sole vendor, which allowed us complete control over the entire process."

The studio-filmed interior of the SUV had no glass in the windows, which meant that reflections, raindrops and everything visible on the windows had to be added digitally.

A signature moment is the three-and-a-half-minute continuous take that introduces the young carjacker portrayed by Bill Skarsgård. "The biggest challenge was the length of the shot and the fact that nothing in the shot was static," Červenka remarks. "Tracking such a shot required significant effort and improvisation. The entire background was a video projection onto simple geometry created from LiDAR scans of the parking lot. It greatly helped that we could use real-set footage, timed exactly as needed, and render it directly from Nuke. Window reflections were particularly challenging, and we ultimately used a combination of 3D renders and compositing cheats. When you have moving car parts, the window reflections give it away, so we had to tackle that carefully. Not surprisingly, this was the most complex shot to execute." The three-and-a-half-minute shot involved 12 artists, nine of whom were compositors. "Working on extremely long shots is always challenging, so dividing the task into smaller segments was crucial to avoid fatigue. In total, we split it into 96 smaller tasks."

"[W]e conducted significant research into burning cars, finding many beautiful references that we aimed to replicate as closely as possible. A few shots included digital doubles. There were set extensions, especially towards the end of the film. Additionally, we worked on fire and rain effects, car replacements in crash sequences, bleeding effects, muzzle flashes, bullet hits, and a bullet-time shot featuring numerous CGI elements."
Jindřich Červenka, Visual Effects Supervisor

Over a period of four months, PFX distributed 750 shots among facilities in Prague, Bratislava and Warsaw.

Background plates were shot by Onset VFX Supervisor Robert Habros. "His crew did excellent work capturing the background plates," Červenka notes. "For most car rides, we had footage from six cameras covering 180°, allowing us to stitch these together to achieve the appropriate background width or use the correct angle. Additionally, we had footage of an extended drive through the actual city location where the story takes place, so everything was edited by a visual effects editor. We simply synchronized this with the remaining camera recordings and integrated them into the shots." The greenscreen was set at two distances. Červenka explains, "There was a closer, lower one and an entire wall a few meters away, approximately two meters apart. Although I wasn't personally on set, this setup helped create parallax since we couldn't rely on the car's interior. For the three-and-a-half-minute shot, we had separate tracking for the background and interior, where all interior walls were tracked as moving objects. Aligning these into a single reliable parallax track was impossible."

A shot taken from the three-and-a-half-minute continuous take that introduces the young carjacker portrayed by Bill Skarsgård.

"[W]e use an internal application allowing real-time viewing of shots and versions in the context of the film's edit or defined workflows, enabling simultaneous comments on any production stage or context. Imagine having daily reviews where everything created up to that point is assessed, with artists continually adding new versions. In these daily sessions, everything was always thoroughly reviewed, and nothing was left for the next day."
Jindřich Červenka, Visual Effects Supervisor
Locked takes place in a single location, which is a high-tech SUV.

There is an art to painting out unwanted reflections and incorporating desirable ones. "The trick was that the studio-filmed interior had no glass in the windows at all," Červenka states. "Reflections, raindrops and everything visible on the windows had to be added digitally. Shots from real exteriors and cars provided excellent references." Fire simulations were time-consuming. "We simulated them in high resolution, and due to continuity requirements, we simulated from the initial ignition to full combustion, with the longest shot nearly 600 frames long. This was divided into six separate simulations, totaling about 30TB of data." Digital doubles were minimal. "Throughout the film, there were only two digital doubles used in violent scenes. We didn't have to create any crowds or face replacements." A CG replica was made of the SUV. "We had a LiDAR scan of the actual car, which served as the basis for the detailed CG version, including the interior. Only a few shots ultimately required this, primarily during a scene where another SUV was initially filmed. We replaced it, and in two cases, we replaced only parts of the car and wheels to maintain real contact with the ground. There was a bit of masking involved, but otherwise, it went smoothly. The interior was mainly used for window reflections in wide shots from inside the car."

There was not much need for digital doubles or crowds.

"We primarily created postvis for the intense sequence with a car crash, fire and other crazy action. We needed to solve this entire sequence in continuity. Throughout the film, we had to maintain continuity in the water drops on all car windows, paying close attention to how they reacted to changes in lighting during the drive."
Jindřich Červenka, Visual Effects Supervisor

"The greatest creative and technical challenge was reviewing shots in continuity within a short production timeline and coordinating across our various offices," Červenka observes. "Each shot depended on others, requiring numerous iterations to synchronize everything. For projects like this, we use an internal application allowing real-time viewing of shots and versions in the context of the film's edit or defined workflows, enabling simultaneous comments on any production stage or context. Imagine having daily reviews where everything created up to that point is assessed, with artists continually adding new versions. In these daily sessions, everything was always thoroughly reviewed, and nothing was left for the next day. We avoided waiting for exports or caching. Everything needed to run smoothly and in real-time." Complicating matters was that Červenka joined the project only after editing had concluded. "I had to quickly coordinate with teams distributed across Central Europe, grasp the intricacies of individual scenes and resolve continuity, which required extensive and precise communication. Thanks to our custom collaboration tools, we managed to streamline this demanding coordination successfully, and we delivered on time. But it definitely wasn't easy!"

Bill Skarsgård pretends to try to break a glass window that does not exist.

Watch PFX's brief VFX breakdown of the opening scene of Locked.
The scene sets the tone for the film with a gripping three-and-a-half-minute single shot brought to life on a greenscreen stage where six crew members moved car parts in perfect sync. Click here: https://www.facebook.com/PFXcompany/videos/locked-vfx-breakdown/4887459704811837/
• Romeo is a Dead Man: A sneak peek of what to expect
    blog.playstation.com
What's up, everyone? I'm gonna assume you've already seen the announcement trailer for Grasshopper Manufacture's all-new title, Romeo Is A Dead Man. If not, then do yourself a favor and go watch it now. It's cool. I'll wait two and a half minutes.

OK, so you get that there's gonna be a whole lot of extremely bloody battle action and exploring some weird places, but I think a lot of people may be confused by the sheer amount of information packed into two and a half minutes. Today, we'll give you a teensy little glimpse of how Romeo Stargazer, aka DeadMan, a special agent in the FBI division known as the Space-Time Police, goes about his investigations.

Romeo Is A Dead Man, abbreviated as... I don't know, RiaDM? Or maybe RoDeMa, if you're nasty? Anyway, one of the most notable features of the game is the rich variety of graphic styles used to depict the game world. Seriously, it's all over the place, but like, in a good way. The meticulously-tweaked action parts are done in stunning, almost photorealistic 3D, and we've thrown everything but the kitchen sink into the more story-based parts.

And don't worry, GhM fans, we promise: for as much work as we've put into making the game look cool and unique, the story itself is also ridiculously bonkers, as is tradition here at Grasshopper Manufacture. We think longtime fans will enjoy it, and newcomers will have their heads exploding. Either way, you're guaranteed to see some stuff you've never seen before.

As for the actual battles, our hero Romeo is heavily armed with both katana-style melee weapons and gun-style ranged weapons alike, which the player can switch between while dispensing beatdowns. However, even the weaker, goombah-type enemies are pretty hardcore. You're gonna have to think up combinations of melee, ranged, heavy, and light attacks to get by. But the stupidly gratuitous amount of blood splatter and catharsis you're rewarded with when landing a real nuclear power move of a combo is awe-inspiring, if that's your thing. On top of the kinda-humanoid creatures you've already seen, known as Rotters, we've got all kinds of other ultra-creepy, unique enemies waiting to bite your face off!

Now, let's look at one of the main centerpieces of any GhM game: the boss battles. This particular boss is, well, hella big. His name is Everyday Is Like Monday, because of course it is. It's on you to make sure Romeo can dodge the mess of attacks launched by this big-ass tyrant and take him down to Chinatown. It's one of the most feelgood beatdowns of the year!

Also, being a member of something called the Space-Time Police means that obviously Romeo is gonna be visiting all sorts of weird, "what?"-type places. And awaiting him at these weird, "what?"-type places are a range of weird, "what?"-type puzzles that only the highest double-digit IQ players will be able to solve! This thing looks like a simple sphere that someone just kinda dropped and busted, but once you really wrap your dome around it and get it solved, damn, it feels good. There are a slew of other puzzles and gimmicks strategically, or possibly just randomly, strewn throughout the game, so keep your eyeballs peeled for them and try not to break any controllers as you encounter them along your mission.

That's all for now, but obviously there are still a whole bunch of important game elements we have yet to discuss, so stay tuned for next time!
  • The Helldivers 2 Halo Warbond skimps on Halo flair, but I'm having fun with the assault rifle anyway
    www.polygon.com
Helldivers 2's Halo ODST Warbond has landed, and it's fine? It's fine. There's not a whole lot of actual Halo in it aside from a handful of weapons and cosmetics, but it does have the classic MA5C assault rifle. Okay, so it's a little worse than the standard Liberator assault rifle you start out with, and it's certainly no Adjudicator (which, with all the recoil problems I have, is probably a good thing). But look, it's just cool, okay?
• The Power Of The Intl API: A Definitive Guide To Browser-Native Internationalization
    smashingmagazine.com
It's a common misconception that internationalization (i18n) is simply about translating text. While crucial, translation is merely one facet. One of the complexities lies in adapting information for diverse cultural expectations: How do you display a date in Japan versus Germany? What's the correct way to pluralize an item in Arabic versus English? How do you sort a list of names in various languages?

Many developers have relied on weighty third-party libraries or, worse, custom-built formatting functions to tackle these challenges. These solutions, while functional, often come with significant overhead: increased bundle size, potential performance bottlenecks, and the constant struggle to keep up with evolving linguistic rules and locale data.

Enter the ECMAScript Internationalization API, more commonly known as the Intl object. This silent powerhouse, built directly into modern JavaScript environments, is an often-underestimated, yet incredibly potent, native, performant, and standards-compliant solution for handling data internationalization. It's a testament to the web's commitment to being worldwide, providing a unified and efficient way to format numbers, dates, lists, and more, according to specific locales.

Intl And Locales: More Than Just Language Codes

At the heart of Intl lies the concept of a locale. A locale is far more than just a two-letter language code (like en for English or es for Spanish). It encapsulates the complete context needed to present information appropriately for a specific cultural group. This includes:

- Language: The primary linguistic medium (e.g., en, es, fr).
- Script: The writing system (e.g., Latn for Latin, Cyrl for Cyrillic). For example, zh-Hans for Simplified Chinese vs. zh-Hant for Traditional Chinese.
- Region: The geographic area (e.g., US for United States, GB for Great Britain, DE for Germany). This is crucial for variations within the same language, such as en-US vs. en-GB, which differ in date, time, and number formatting.
- Preferences/Variants: Further specific cultural or linguistic preferences. See Choosing a Language Tag from W3C for more information.

Typically, you'll want to choose the locale according to the language of the web page. This can be determined from the lang attribute:

// Get the page's language from the HTML lang attribute
const pageLocale = document.documentElement.lang || 'en-US'; // Fallback to 'en-US'

Occasionally, you may want to override the page locale with a specific locale, such as when displaying content in multiple languages:

// Force a specific locale regardless of page language
const tutorialFormatter = new Intl.NumberFormat('zh-CN', { style: 'currency', currency: 'CNY' });
console.log(`Chinese example: ${tutorialFormatter.format(199.99)}`); // Output: ¥199.99

In some cases, you might want to use the user's preferred language:

// Use the user's preferred language
const browserLocale = navigator.language || 'ja-JP';
const formatter = new Intl.NumberFormat(browserLocale, { style: 'currency', currency: 'JPY' });

When you instantiate an Intl formatter, you can optionally pass one or more locale strings. The API will then select the most appropriate locale based on availability and preference.
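That negotiation can be observed directly. As a minimal sketch (the candidate locale list below is invented for illustration), Intl.NumberFormat.supportedLocalesOf reports which requested locales the runtime has data for, and resolvedOptions() shows which locale a formatter actually settled on:

// Which of these candidate locales can the runtime service?
console.log(Intl.NumberFormat.supportedLocalesOf(['de-CH', 'xx-ZZ', 'fr']));
// e.g. ["de-CH", "fr"] (the well-formed but unknown tag is dropped)

// Pass locales in preference order; the first supported one wins
const fmt = new Intl.NumberFormat(['xx-ZZ', 'de-CH', 'fr']);
console.log(fmt.resolvedOptions().locale); // e.g. "de-CH"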
Core Formatting Powerhouses

The Intl object exposes several constructors, each for a specific formatting task. Let's delve into the most frequently used ones, along with some powerful, often-overlooked gems.

1. Intl.DateTimeFormat: Dates and Times, Globally

Formatting dates and times is a classic i18n problem. Should it be MM/DD/YYYY or DD.MM.YYYY? Should the month be a number or a full word? Intl.DateTimeFormat handles all this with ease.

const date = new Date(2025, 5, 27, 14, 30, 0); // June 27, 2025, 2:30 PM (months are 0-indexed)

// Specific locale and options (e.g., long date, short time)
const options = {
  weekday: 'long', year: 'numeric', month: 'long', day: 'numeric',
  hour: 'numeric', minute: 'numeric',
  timeZoneName: 'shortOffset' // e.g., "GMT+8"
};

console.log(new Intl.DateTimeFormat('en-US', options).format(date));
// "Friday, June 27, 2025 at 2:30 PM GMT+8"

console.log(new Intl.DateTimeFormat('de-DE', options).format(date));
// "Freitag, 27. Juni 2025 um 14:30 GMT+8"

// Using dateStyle and timeStyle for common patterns
console.log(new Intl.DateTimeFormat('en-GB', { dateStyle: 'full', timeStyle: 'short' }).format(date));
// "Friday 27 June 2025 at 14:30"

console.log(new Intl.DateTimeFormat('ja-JP', { dateStyle: 'long', timeStyle: 'short' }).format(date));
// "2025年6月27日 14:30"

The flexibility of options for DateTimeFormat is vast, allowing control over year, month, day, weekday, hour, minute, second, time zone, and more.

2. Intl.NumberFormat: Numbers With Cultural Nuance

Beyond simple decimal places, numbers require careful handling: thousands separators, decimal markers, currency symbols, and percentage signs vary wildly across locales.

const price = 123456.789;

// Currency formatting
console.log(new Intl.NumberFormat('en-US', { style: 'currency', currency: 'USD' }).format(price));
// "$123,456.79" (auto-rounds)

console.log(new Intl.NumberFormat('de-DE', { style: 'currency', currency: 'EUR' }).format(price));
// "123.456,79 €"

// Units
console.log(new Intl.NumberFormat('en-US', { style: 'unit', unit: 'meter', unitDisplay: 'long' }).format(100));
// "100 meters"

console.log(new Intl.NumberFormat('fr-FR', { style: 'unit', unit: 'kilogram', unitDisplay: 'short' }).format(5.5));
// "5,5 kg"

Options like minimumFractionDigits, maximumFractionDigits, and notation (e.g., scientific, compact) provide even finer control.
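As a quick illustrative sketch of that last point, compact notation abbreviates large numbers with locale-appropriate suffixes (exact output strings can vary slightly across engines and CLDR versions):

console.log(new Intl.NumberFormat('en-US', { notation: 'compact' }).format(1234567));
// e.g. "1.2M"
console.log(new Intl.NumberFormat('de-DE', { notation: 'compact' }).format(1234567));
// e.g. "1,2 Mio."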
3. Intl.ListFormat: Natural Language Lists

Presenting lists of items is surprisingly tricky. English uses "and" for conjunction and "or" for disjunction. Many languages have different conjunctions, and some require specific punctuation. This API simplifies a task that would otherwise require complex conditional logic:

const items = ['apples', 'oranges', 'bananas'];

// Conjunction ("and") list
console.log(new Intl.ListFormat('en-US', { type: 'conjunction' }).format(items));
// "apples, oranges, and bananas"

const itemsDe = ['Äpfel', 'Orangen', 'Bananen'];
console.log(new Intl.ListFormat('de-DE', { type: 'conjunction' }).format(itemsDe));
// "Äpfel, Orangen und Bananen"

// Disjunction ("or") list
console.log(new Intl.ListFormat('en-US', { type: 'disjunction' }).format(items));
// "apples, oranges, or bananas"

console.log(new Intl.ListFormat('fr-FR', { type: 'disjunction' }).format(items));
// "apples, oranges ou bananas"

4. Intl.RelativeTimeFormat: Human-Friendly Timestamps

Displaying "2 days ago" or "in 3 months" is common in UI, but localizing these phrases accurately requires extensive data. Intl.RelativeTimeFormat automates this.

const rtf = new Intl.RelativeTimeFormat('en-US', { numeric: 'auto' });

console.log(rtf.format(-1, 'day'));   // "yesterday"
console.log(rtf.format(1, 'day'));    // "tomorrow"
console.log(rtf.format(-7, 'day'));   // "7 days ago"
console.log(rtf.format(3, 'month'));  // "in 3 months"
console.log(rtf.format(-2, 'year'));  // "2 years ago"

// French example:
const frRtf = new Intl.RelativeTimeFormat('fr-FR', { numeric: 'auto', style: 'long' });
console.log(frRtf.format(-1, 'day'));  // "hier"
console.log(frRtf.format(1, 'day'));   // "demain"
console.log(frRtf.format(-7, 'day'));  // "il y a 7 jours"
console.log(frRtf.format(3, 'month')); // "dans 3 mois"

The numeric: 'always' option would force "1 day ago" instead of "yesterday".

5. Intl.PluralRules: Mastering Pluralization

This is arguably one of the most critical aspects of i18n. Different languages have vastly different pluralization rules (e.g., English has singular/plural; Arabic has zero, one, two, few, many...). Intl.PluralRules allows you to determine the plural category for a given number in a specific locale.

const prEn = new Intl.PluralRules('en-US');
console.log(prEn.select(0)); // "other" (for "0 items")
console.log(prEn.select(1)); // "one" (for "1 item")
console.log(prEn.select(2)); // "other" (for "2 items")

const prAr = new Intl.PluralRules('ar-EG');
console.log(prAr.select(0));   // "zero"
console.log(prAr.select(1));   // "one"
console.log(prAr.select(2));   // "two"
console.log(prAr.select(10));  // "few"
console.log(prAr.select(100)); // "other"

This API doesn't pluralize text directly, but it provides the essential classification needed to select the correct translation string from your message bundles. For example, if you have message keys like item.one and item.other, you'd use pr.select(count) to pick the right one.
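To make that concrete, here is a minimal sketch of wiring Intl.PluralRules into a message bundle; the messages object, its keys, and the formatItems helper are hypothetical, not from the article:

const messages = {
  'item.one': 'You have 1 item',
  'item.other': 'You have {count} items',
};
const pr = new Intl.PluralRules('en-US');

function formatItems(count) {
  // select() returns a category such as "one" or "other";
  // fall back to the "other" key if a category has no message
  const key = `item.${pr.select(count)}`;
  const template = messages[key] ?? messages['item.other'];
  return template.replace('{count}', String(count));
}

console.log(formatItems(1)); // "You have 1 item"
console.log(formatItems(5)); // "You have 5 items"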
6. Intl.DisplayNames: Localized Names For Everything

Need to display the name of a language, a region, or a script in the user's preferred language? Intl.DisplayNames is your comprehensive solution.

// Display language names in English
const langNamesEn = new Intl.DisplayNames(['en'], { type: 'language' });
console.log(langNamesEn.of('fr'));    // "French"
console.log(langNamesEn.of('es-MX')); // "Mexican Spanish"

// Display language names in French
const langNamesFr = new Intl.DisplayNames(['fr'], { type: 'language' });
console.log(langNamesFr.of('en'));      // "anglais"
console.log(langNamesFr.of('zh-Hans')); // "chinois (simplifié)"

// Display region names
const regionNamesEn = new Intl.DisplayNames(['en'], { type: 'region' });
console.log(regionNamesEn.of('US')); // "United States"
console.log(regionNamesEn.of('DE')); // "Germany"

// Display script names
const scriptNamesEn = new Intl.DisplayNames(['en'], { type: 'script' });
console.log(scriptNamesEn.of('Latn')); // "Latin"
console.log(scriptNamesEn.of('Arab')); // "Arabic"

With Intl.DisplayNames, you avoid hardcoding countless mappings for language names, regions, or scripts, keeping your application robust and lean.

Browser Support

You might be wondering about browser compatibility. The good news is that Intl has excellent support across modern browsers. All major browsers (Chrome, Firefox, Safari, Edge) fully support the core functionality discussed (DateTimeFormat, NumberFormat, ListFormat, RelativeTimeFormat, PluralRules, DisplayNames). You can confidently use these APIs without polyfills for the majority of your user base.

Conclusion: Embrace The Global Web With Intl

The Intl API is a cornerstone of modern web development for a global audience. It empowers front-end developers to deliver highly localized user experiences with minimal effort, leveraging the browser's built-in, optimized capabilities.

By adopting Intl, you reduce dependencies, shrink bundle sizes, and improve performance, all while ensuring your application respects and adapts to the diverse linguistic and cultural expectations of users worldwide. Stop wrestling with custom formatting logic and embrace this standards-compliant tool!

It's important to remember that Intl handles the formatting of data. While incredibly powerful, it doesn't solve every aspect of internationalization. Content translation, bidirectional text (RTL/LTR), locale-specific typography, and deep cultural nuances beyond data formatting still require careful consideration. (I may write about these in the future!) However, for presenting dynamic data accurately and intuitively, Intl is the browser-native answer.

Further Reading & Resources
- MDN Web Docs: the Intl namespace object, Intl.DateTimeFormat, Intl.NumberFormat, Intl.ListFormat, Intl.RelativeTimeFormat, Intl.PluralRules, Intl.DisplayNames
- ECMAScript Internationalization API Specification: the official ECMA-402 standard
- Choosing a Language Tag (W3C)
  • The HoverAir Aqua Is a Completely Waterproof Drone
    design-milk.com
When you think of a drone, you probably think of a small device with propellers in the air above, but according to HoverAir, the next big step in drone technology could take drones out of the air and into the water. The new HoverAir Aqua is the first 100% waterproof drone.

The drone is designed to be smart, too. It's the second HoverAir drone to be self-flying, and it uses AI to track subjects and automatically capture video. It has 15 different flight modes that you can use for various types of footage. Plus, there's a small display on the drone itself that can allow the user to get a preview of footage being captured.

So what's the point of its waterproof design? Well, it's not necessarily built to dive underwater, but instead to be useful for capturing footage of activities like kayaking, surfing, or paddling. Typical drones would be damaged if they fell into the water, but the Aqua is designed to be buoyant and has an IP67 water resistance rating to help get around that issue.

While the HoverAir Aqua doesn't have a traditional remote, it does support an accessory that HoverAir calls the Lighthouse, which is a small fob with buttons for things like taking off, landing, and recording. You don't have to hold it while you're surfing, either; it can be worn as an armband.

The HoverAir Aqua also has a relatively high-quality camera. It can capture 4K footage at up to 100 frames per second with HDR support. It also has 2x digital zoom, which can help it better frame a shot. Its battery life is pretty typical for a drone, sitting at 23 minutes.

The HoverAir Aqua is available on Indiegogo now with early bird pricing of $999. You can learn more about it at hoverair.com.

Photography courtesy of HoverAir.
  • Co-constructing intent with AI agents
    uxdesign.cc
How can we move beyond the confines of a simple input field to help agents evolve from mere tools into true partners that can perceive our unspoken intentions and spark new ways of thinking?

When we share our vague, half-formed ideas with AI agents, are we looking for the predictable, standard answers we expect, or a genuine surprise we didn't see coming?

As the capabilities of AI agents evolve at a breathtaking pace, we increasingly expect them to be intuitive and understanding. Yet, the reality often falls short. Vague or abstract questions typically yield only generic, catch-all answers. This traps us in a loop of rephrasing and refining, where we might land on a satisfactory result, but only after a frustrating cycle of iteration.

Clear questions vs. vague or abstract questions

This is a dominant mode of human-AI interaction. But is this the future we really want?

True connection often sparks in a dialogue between equals. The conversations that leave a lasting impression aren't the simple question-and-answer exchanges. Instead, they are the ones where our true intent gradually surfaces through a back-and-forth of clarifying questions, mutual confirmations, and shared moments of insight.

If AI agents are to make the leap from tool to partner, perhaps we need to reimagine their role. Should they merely provide answers on command, or could they become true companions in our explorations, ones that provoke our thoughts and, ultimately, help us discover what we truly want?

Speed ≠ Understanding

Imagine sending a morning greeting to your family on a hazy, half-awake morning. Your finger instinctively finds the Send button. A light tap, and it's done. This simple, natural action: how different would it have been just a few decades ago?

You would have had to carefully type out lines of code on a screen, where a single typo or an extra space would cause the computer to rebuke you with an even more cryptic string of garbled text.

At its core, the difference between these two experiences lies in the challenge of translating between fuzzy human intent and precise computer instructions. Back in the 1980s, Don Norman defined this challenge with two concepts: the Gulf of Execution, which separates our thoughts from the machine's commands, and the Gulf of Evaluation, which separates the machine's feedback from our ability to understand it.

The narrower these gulfs, the more seamless the process of conveying our intent and interpreting the results. Decades of progress in human-computer interaction, from the command-line interface to the graphical user interface, have been dedicated to bridging this divide.

Whitenton, K. (2018, March 12). The two UX gulfs: evaluation and execution. Nielsen Norman Group.

Today's AI-powered applications are following the same trajectory, continuously lowering the barrier to execution and making it ever faster to communicate in natural language. But does this speed truly mean our intent is being understood?

Are AI agents really helping us explore?

"What's the capital of the Netherlands?"
"The capital of the Netherlands is Amsterdam, but the seat of government, central ministries, parliament, and foreign embassies are all in The Hague..."

"How high is Mount Everest?"
"The latest official height of Mount Everest is 8,848.86 meters, a result from a joint survey by China and Nepal in 2020..."

"Generate a working timer demo."
"Sure, here's a simple countdown timer in Python. The user can input the number of seconds, and the script will count down and print a message at the end..."
Yes. For tasks with a clear intent, a specific goal, and low-context requirements, like summarizing articles, translating text, or doing basic math, AI agents deliver accurate and prompt answers. But when faced with the messy, complex challenges of real life, does that efficiency hold up?

Impeccable, but useless

"Help me plan a trip to Japan that the whole family will love."
"Of course. To plan a trip to Japan the whole family can enjoy, here are some suggestions you can tailor to your needs. The milder weather in spring (March-May) and autumn (September-November) is ideal for family travel. Determine the trip's duration based on everyone's vacation time. If you're traveling between cities, consider a Japan Rail Pass, but you'll need to calculate if it's cost-effective for your specific itinerary..."

Our conversation with the agent seems free of any Gulf of Execution. We had no trouble sending the request, and the agent promptly returned a standard-issue response. But how much is that response really worth to us?

The parents want to unwind in a hot spring, the kids are dying to go to Universal Studios, and your partner is hoping for a more niche, authentic cultural experience. Everyone has different hopes for the trip, but the agent's generic advice fails to address any of them. But why didn't we just give the agent all these details from the start?

The slot machine conversation trap

When we turn to AI with these kinds of vague, complex problems, 99% of the time we are funneled into a single input box. It's the dominant interface for AI today, a model that originated with ChatGPT's goal of giving people the most direct path to experiencing the power of large language models.

The predominant way we interact with AI is almost entirely centered around the input field.

However, the thought of cramming every detail into that tiny box, everyone's preferences, the family budget, and all the nuances from memory, and then endlessly editing it, is just exhausting.

"This is too much trouble, just simplify it."

Our brains are wired for shortcuts. To get our vague idea out quickly, we subconsciously strip away all the context, preferences, and other details that are hard to articulate, compressing everything into the oversimplified phrase "make the family happy." We toss it into the input box and pin all our hopes on the agent's abilities.

Then, like a gambler, we pull the lever and pray for a lucky spin that happens to read our minds. To increase its hit rate with such pitiful context, the agent can only flex its capabilities, calling on every tool at its disposal to generate a broad, catch-all answer.

The result isn't a helpful guide that inspires new thinking, but an undigested information dump. This interaction becomes less like a conversation and more like a slot machine, defined by uncertainty. It invisibly adds to our cognitive load and pushes us further away from discovering what we really need.

Even as AI agents have evolved to handle high-dimensional, ambiguous, and exploratory tasks, the way we communicate with them remains a low-dimensional channel, ill-suited for expressing our own complex thoughts.

"However, difficulties in obtaining the desired outcome arise from both the AI's interpretation and the translation of intentions into prompts. An evolution in the user experience of AI systems is necessary, integrating GUI-like characteristics with intent-based interaction."
On the usability of generative AI: Human generative AI

Stop guessing, start exploring the real problem

Let's revisit the original idea.
If you truly wanted to plan a trip to make your whole family happy, how would you do it without an AI? You'd probably engage in a series of exploratory actions: reflecting, researching, and running what-if scenarios to find a plan that balances everyone's different needs.

Our daily reality isn't about clear instructions and direct execution; it's about navigating vague and messy challenges. Whether planning a family vacation or kicking off a new project at work, the hardest problem we face is often how to transform a fuzzy impulse into a clear and valuable goal.

So how can we design our interactions with AI to help us explore these vague, fragile impulses? How can we build a more coherent, natural dialogue instead of getting stuck in a constant guessing game?

"Good design is thorough down to the last detail. Nothing must be arbitrary or left to chance. Care and accuracy in the design process show respect towards the user."
Dieter Rams

Like partners: The power of co-constructing intent

"Do you think this potted plant would look better somewhere else?"
"Oh? What's on your mind? I thought you liked it where it was."
"It's not that I don't... I just feel like nothing looks right lately. I guess I'm just looking for a change of scenery."

When we talk things over with friends, partners, or family, we rarely expect an immediate, clear-cut answer. The conversation often begins with a vague impulse or a half-formed idea.

They might build on your thought: "How about by the window? The sunlight might help it thrive." Or they might probe deeper, sensing the motive behind the question: "Have you been feeling a bit drained lately? It sounds like you want to move more than just the plant; maybe you're looking to bring something new into your life."

Human conversation is a dynamic, exploratory journey. It's not about simply transferring information. It's about two people taking a fuzzy idea and, through a back-and-forth exchange, co-discovering, refining, and even shaping it into something entirely new, uncharted territory neither had imagined at the start. This is a process of Intent Co-construction.

As our relationship with AI evolves from tool to partner, we find ourselves sharing more of these ambiguous intentions. To meet this changing need, how can we learn from our human relationships to design interactions that foster deep connection and co-construct intent with our AI counterparts?

Anthropic's official introduction: Meet Claude, your thinking partner (screenshot)

Reading between the lines with multimodality

Picture a perfect sunny weekend. You're driving with the windows down, your favorite album playing, on your way to that new park you've been wanting to visit.

You tell your voice assistant your destination. It instantly displays three routes, color-coded by time and traffic, and helpfully highlights the one its algorithm deems fastest.

You subconsciously take its advice, but halfway there, something feels wrong.

While it may be the shortest path physically, the route involves constant lane changes on streets barely wide enough for one car. You're flanked by parked cars whose doors could swing open at any moment and kids who might dart into the road. Your nerves are frayed, your palms are sweating on the wheel, and you find yourself muttering about the cramped, crowded conditions, nearly rear-ending an e-bike.

Through it all, the navigation remains indifferent, stubbornly sticking to its original recommendation.

Yes, multimodal inputs allow us to give clearer commands. But when our initial command is incomplete, we still end up with a generic solution.
A true partner would think:

"They seem stressed by this complex route. Should I suggest a longer but easier alternative?"
"I'm detecting swearing and frequent hard braking. Is this road too difficult for them to handle?"

The real breakthrough isn't just understanding what users say, but how they say it, combining their words with environmental cues and situational context. Do they type fluently or constantly backspace? Do they circle a data point with confidence or hesitation? These subconscious signals often reveal our true state of mind.

Hume AI can analyze the emotion in a speaker's voice and respond with empathetic intelligence.

The AI we need isn't just one that can process text, voice, images, and gestures simultaneously. We need a partner that, while respecting our privacy, can keenly and continuously read between the lines, detecting the unspoken truth in the dissonance between these multimodal signals.

"To design the best UX, pay attention to what users do, not what they say. Self-reported claims are unreliable, as are user speculations about future behavior. Users do not know what they want."
Jakob Nielsen

Now, let's take this one step further. Imagine an AI that, through multimodal sensing, has perfectly understood our true intent. If it simply serves up a flawless answer like a data report, is that really the best way for us to learn and grow?

Information as a flowing process

Let's rewind and take that drive to the park again. This time, instead of an AI, your co-pilot is a living, breathing friend.

When you reach that same algorithm-approved turnoff, you tense up at the sight of the narrow lane. Your friend notices immediately and guides you through the challenge:

"This road looks rough. Let me guide you to a better one."
"Turn right just after that coffee shop up ahead."
"We're almost there. See the people with picnic blankets?"

The journey is seamless. You realize your friend didn't necessarily give you more information than the AI, but they delivered the right information at the right time, in a way that made sense in the moment.

Similarly, AI-generated information can be delivered through diverse mediums; text is by no means the only way. Think about a recent conversation that stuck with you. Was it memorable for its dictionary-like volume of facts? More likely, you were captivated by how the story was told, in a way that helped you visualize it. This power of visualization is rooted in metaphor.

"We often think we use metaphors to explain ideas, but I believe good metaphors don't explain but rather transform how our minds engage with ideas, opening entirely new ways of thinking."
The Secret of Good Metaphors

Files that look like paper, directories that look like folders, icons for calculators, notepads, and clocks: back in the earliest days of personal computing, designers used graphical metaphors based on familiar physical objects to make strange and complex command lines feel intuitive and accessible.

Apple Lisa 2 (1984): Features like desktop icons, the menu bar, and graphical windows significantly lowered the barrier to entry for personal computers.

Metaphors work by tapping into our past experiences and connecting them to something new, bridging the gap to understanding. So, how does this apply to AI output?

Think about how we typically use an AI to explore a complex topic. We might ask it a direct question, have it synthesize industry reports, or feed it a pile of research to summarize. Even with the AI's best efforts, clicking open a result to find a wall of text can feel overwhelming. We can't see its thought process.
We don't know if it considered all the angles we did. We don't know where to begin. What we truly need isn't just a final answer, but to feel like a friend is walking us through their thinking, transforming information delivery from a static report into a guided process of shared discovery.

Metaso: Visualizes its entire thinking process on a canvas as it works on a problem.

But what if, even after seeing the process, the answer is still too abstract?

We naturally understand information through different forms: charts for trends, diagrams for processes, and stories told through sound and images. Any good communication orchestrates different dimensions of information into a presentation that conveys meaning more effectively.

Google NotebookLM can transform source materials into various easy-to-digest formats, such as narrated video overviews, conversational podcasts, and interactive mind maps. This shifts learning from a process of passive consumption to a dynamic, co-creative experience.

NotebookLM (Google): Can autonomously transform source materials into various accessible formats like illustrated videos, podcasts, or mind maps, turning passive learning into active co-creation.

However, there's a risk. When an AI uses carefully crafted metaphors to present an output that is clear, beautiful, and logically flawless, it can feel like an unchallengeable final answer.

Is that how our conversations with human partners work?

When a friend shares an idea, we don't just agree. Our responses are filled with questions, doubts, and counter-arguments. Sometimes, a single insightful comment can change the direction of an entire project. A meaningful dialogue is less about the period at the end of a sentence and more about the comma or the question mark that keeps the conversation going.

Progressive construction through dialogue and memory

"Let's go hiking this weekend. I want to challenge myself."
"Sounds good! But remember last time? You said your knee was bothering you halfway up. Are you sure? We could find an easier trail."
"I'm fine, my knee's all better."
"Don't push yourself..."

A true partner remembers your past knee injury. They remember you're directionally challenged and that you're not a fan of reading long texts. This long-term memory allows your interactions to build on a shared history, moving beyond simple Q&A into a state of mutual understanding where you can anticipate each other's needs without lengthy explanations.

Google's Project Astra remembers what it sees and hears in real time, allowing it to answer contextual questions like, "Where did I leave my glasses?" The Dia browser's memory feature continuously learns from your browsing history to develop a genuine understanding of your tastes.

For an AI to co-construct intent like a partner, persistent memory is not just a feature; it's essential.

"Agent failures aren't only model failures; they are context failures."
The New Skill in AI is Not Prompting, It's Context Engineering

But memory alone isn't enough; we need to use it to foster deeper exploration. As we said from the start, the goal isn't to get an instant answer, but to refine our intentions and formulate better, more insightful questions.

ChatGPT Study Mode: When given a task, its first instinct isn't to jump straight to an answer. Instead, it begins by asking the user clarifying questions to better define the problem.

When a vague idea or question surfaces, we want an AI that is more than an answer machine.
We want a true thinking partner: one that can reach beyond the immediate context, draw on our shared history to initiate meaningful dialogue, and guide us as we peel back the layers of our own thoughts. In this progressive, co-constructive process, it helps us finally articulate what we truly intend.

Where co-construction ends, we begin

Deeper insights through multimodality, dynamic presentations that clarify information, and a back-and-forth conversational loop that feels like chatting with a friend... As our dialogue with an AI becomes deeper and more meaningful, so too does our understanding of the problem, and our own intent becomes clearer.

But is that the end of the journey?

In the film Her, through countless conversations with the AI Samantha, Theodore is compelled to confront his emotions, his past failed marriage, and his own conflicting fear and desire to reconnect. Throughout this process, Samantha's curiosity, learning, and gentle challenges to his preconceptions help him see himself with new clarity, allowing him to truly feel and face his life again.

Screenshot via Her.

The world of Her is not some distant future; in many ways, it is a portrait of our present moment. In a future where AI companions will be a long-term presence in our lives, their ultimate purpose may not be to replace human connection, but to act as a catalyst for our own growth.

The ultimate value of co-constructive interaction is not just to help us understand ourselves more deeply. It is to act as an engine, converting that profound self-awareness into the motivation and clarity needed to achieve our potential in the real world.

Of course, times change, but the fundamentals do not. This has always been the goal of the pioneers of human-computer interaction:

"Boosting mankind's capability for coping with complex, urgent problems."
Doug Engelbart

References
- Johnson, Jeff. Designing with the Mind in Mind: Simple Guide to Understanding User Interface Design Guidelines. Morgan Kaufmann, 2020.
- Whitenton, K. (2018, March 12). The two UX gulfs: Evaluation and execution. Nielsen Norman Group. https://www.nngroup.com/articles/two-ux-gulfs-evaluation-execution/
- DOC. The secret of good metaphors. (n.d.). https://www.doc.cc/articles/good-metaphors
- Nielsen, J., Gibbons, S., & Mugunthan, T. (2024, January 30). Accordion editing and apple picking: Early generative-AI user behaviors. Nielsen Norman Group. https://www.nngroup.com/articles/accordion-editing-apple-picking/
- Varanasi, L. (2025, May 25). Meta chief AI scientist Yann LeCun says current AI models lack 4 key human traits. Business Insider. https://www.businessinsider.com/meta-yann-lecun-ai-models-lack-4-key-human-traits-2025-5
- Perry, T. S., & Voelcker, J. (2023, August 7). How the graphical user interface was invented. IEEE Spectrum. https://spectrum.ieee.org/graphical-user-interface
- Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6.
- Ravera, A., & Gena, C. (2025). On the usability of generative AI: Human generative AI. arXiv preprint arXiv:2502.17714.

"Co-constructing intent with AI agents" was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • These Smart Rings Are Being Pulled From the Market
    lifehacker.com
Oura, maker of smart rings, recently won a patent suit against competitors Ultrahuman and RingConn. As a result, those companies have been told they need to pull their rings from the market within 60 days. The rings are all still available for now. Below, I'll break down the legal situation and what the companies are planning in the coming months.

How do Oura's, Ultrahuman's, and RingConn's rings compare?

Oura is the biggest name in the smart ring space. The latest model of their ring costs between $349 and $499 (depending on color), and you need a $5.99/month subscription to make use of the data it collects. It can track data like your heart rate during sleep and exercise, and the app provides analysis like sleep scores and suggested bedtimes. I've used the Oura ring for years; I like it, but it also has its limitations compared to watch-based trackers. Here's my review of the current model, the gen 4.

Ultrahuman's ring is $349 regardless of color, and doesn't require a subscription. Ultrahuman's app trends toward the biohack-y, for example suggesting an optimal caffeine permissible window based on your sleep schedule. Like the Oura ring, it can track data like your heart rate during sleep and exercise. Ultrahuman also sells glucose monitors, home air quality monitors, and a blood testing service. Some features of the ring's app, called power plugs, require a separate subscription fee to activate. I've been wearing an Ultrahuman ring to review it; expect to be able to read that review soon. In the meantime, here's a review from ZDNet.

RingConn sells two versions of their ring, a $299 Gen 2 and a $199 Gen 2 Air. RingConn bills their rings as the thinnest and lightest on the market. These rings also track data such as your heart rate during sleep and exercise. Like Ultrahuman, RingConn rings don't require a subscription. You can read a ZDNet review of the Gen 2 here.

Why a recent court ruling means Ultrahuman and RingConn will be pulled from the market

Oura brought a patent infringement claim against both Ultrahuman and RingConn with the U.S. International Trade Commission, or ITC. The ITC ruled that Ultrahuman and RingConn infringed Oura's patents and must be pulled from the market. Oura posted a public version of their full filing from April 2025 here.

The patent at issue is this one, which describes a finger-worn wearable ring device with a battery and sensors in a certain configuration. Oura applied for the patent in 2023 and it was issued in 2024. It seems to describe the gen 4 (current) version of the ring, with the smooth interior, rather than the gen 3's sensor bumps.

As a result of the ITC decision, various divisions of Ultrahuman and RingConn were sent cease-and-desist letters that block them from selling, importing, distributing, or marketing rings that infringe on the patent.

The rings will still be available until at least Oct. 21

The cease-and-desist letters specify that the companies can continue selling the rings during the 60-day period in which the decision is under review. That means that the rings are expected to stay on the market until Oct. 21, 2025.
If you want to buy an Ultrahuman or RingConn ring, do it before then. After that date, resellers who have the rings in stock will still be able to sell what they have, so long as Ultrahuman and RingConn aren't involved in that process (as I understand it).

Ultrahuman has also said that they are fast-tracking a redesigned Ring that they expect to be able to sell without restriction.

What the companies have to say about this

I contacted all three companies for more information. An Oura spokesperson linked me this blog post about the decision and provided a statement, which read, in part:

"ŌURA achieved a decisive legal victory with the International Trade Commission (ITC) ruling that ŌURA's intellectual property is valid, and that both Ultrahuman and RingConn infringed on ŌURA's IP and are subject to exclusion and cease and desist orders. This decision affirms the strength and validity of ŌURA's innovations and our unwavering commitment to protecting our technology in the U.S. market."

An Ultrahuman spokesperson told me that Ultrahuman is suing Oura for patent infringement in India, and also linked me to this Ultrahuman blog post arguing that Oura's patent is too obvious to be enforceable. Here is an excerpt from the company's official statement:

"We welcome the ITC's recognition of consumer-protective exemptions and its rejection of attempts to block the access of U.S. consumers. Customers can continue purchasing and importing Ring AIR directly from us through October 21, 2025, and at retailers beyond this date. What's more, our software application and charging accessories remain fully available, after the Commission rejected Oura's request to restrict them.

While we respectfully disagree with the Commission's ruling on U.S. Patent No. 11,868,178, its validity is already under review by the USPTO's Patent Trial and Appeal Board (PTAB) on the grounds of obviousness. Public reporting has raised questions about Oura's business practices, and its reliance on litigation to limit competition."

I haven't heard back from RingConn, but will update this piece if I do.
  • Meta is launching a California super PAC
    www.engadget.com
Meta is throwing its resources behind a new super PAC in California. According to Politico, the group will support state-level political candidates who espouse tech-friendly policies, particularly those with a loose approach to regulating artificial intelligence. The budget behind the social media company's new super PAC, dubbed Mobilizing Economic Transformation Across (Meta) California, is reported to be in the tens of millions of dollars, but no exact figure has been disclosed.

California has made several efforts, with varying degrees of success, to enact protections against potentially harmful AI use cases. The state passed a law protecting the digital likenesses of actors in 2024, but has faced challenges to a bill that blocked election misinformation deepfakes and to one that more broadly sought protections against "critical harm" caused by AI.

The creation of the super PAC puts Meta into a prominent position to influence races in 2026, when California will have midterm elections and vote for a new governor. "Sacramento's regulatory environment could stifle innovation, block AI progress, and put California's technology leadership at risk," said Brian Rice, vice president of public policy at Meta. Politico reported that Rice and Meta policy executive Greg Maurer are likely to lead the political fundraiser.

Meta hasn't been shy about throwing money into politics to advance its business interests. According to OpenSecrets, the company has spent $13.7 million on lobbying to date this year. Its roughly $8 million lobbying spend in the first quarter of 2025 vastly outpaced that of other tech majors.

This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-is-launching-a-california-super-pac-193007814.html?src=rss