• You can now sell MetaHumans, or use them in Unity or Godot


    The MetaHuman client reel. Epic Games’ framework for generating realistic 3D characters for games is out of early access, and can now be used with any DCC app or game engine.

    Epic Games has officially launched MetaHuman, its framework for generating realistic 3D characters for games, animation and VFX work, after four years in early access. The core applications, MetaHuman Creator, Mesh to MetaHuman and MetaHuman Animator, are now integrated into Unreal Engine 5.6, the latest version of the game engine.
    In addition, Epic has updated the licensing for MetaHuman characters, making it possible to use them in any game engine or DCC application, including in commercial projects.
    There are also two new free plugins, MetaHuman for Maya and MetaHuman for Houdini, intended to streamline the process of editing MetaHumans in Maya and Houdini.
    A suite of tools for generating and animating realistic real-time 3D characters

    First launched in early access in 2021, MetaHuman is a framework of tools for generating realistic 3D characters for next-gen games, animation, virtual production and VFX. The first component, MetaHuman Creator, enables users to design realistic digital humans.
    Users can generate new characters by blending between presets, then adjusting the proportions of the face by hand, and customising readymade hairstyles and clothing.
    The second component, Mesh to MetaHuman, makes it possible to create MetaHumans matching 3D scans or facial models created in other DCC apps.
    The final component, MetaHuman Animator, streamlines the process of transferring the facial performance of an actor from video footage to a MetaHuman character.
    MetaHuman Creator was originally a cloud-based tool, while Mesh to MetaHuman and MetaHuman Animator were available via the old MetaHuman plugin for Unreal Engine.
    Now integrated directly into Unreal Engine 5.6

    That changes with the end of early access, with MetaHuman Creator, Mesh to MetaHuman and MetaHuman Animator all now integrated directly into Unreal Engine itself. Integration – available in Unreal Engine 5.6, the latest version of the engine – is intended to simplify character creation and asset management workflows.
    Studios also get access to the MetaHuman source code, since Unreal Engine itself comes with full C++ source code access.
    However, the tools still cannot be run entirely locally: according to Epic, in-editor workflow is “enhanced by cloud services that deliver autorigging and texture synthesis”.


    Users can now adjust MetaHumans’ bodies, with a new unified Outfit Asset making it possible to create 3D clothing that adjusts automatically to bodily proportions.

    Updates to both MetaHuman Creator and MetaHuman Animator

    In addition, the official release introduces new features, with MetaHuman Creator’s parametric system for creating faces now extended to body shapes. Users can now adjust proportions like height, chest and waist measurements, and leg length, rather than simply selecting preset body types.
    Similarly, a new unified Outfit Asset makes it possible to author custom 3D clothing, rather than selecting readymade presets, with garments resizing to characters’ body shapes.
    MetaHuman Animator – which previously required footage from stereo head-mounted cameras or iPhones – now supports footage from mono cameras like webcams.
    The toolset can also now generate facial animation – both lip sync and head movement – solely from audio recordings, as well as from video footage.
    You can find fuller descriptions of the new features in Epic Games’ blog post.
    Use MetaHumans in Unity or Godot games, or sell them on online marketplaces

    Equally significantly, Epic has changed the licensing for MetaHumans. The MetaHuman toolset is now covered by the standard Unreal Engine EULA, meaning that it can be used for free by any artist or studio with under $1 million/year in revenue.
    MetaHuman characters and clothing can also now be sold on online marketplaces, or used in commercial projects created with other DCC apps or game engines.
    The only exception is for AI: you can use MetaHumans in “workflows that incorporate artificial intelligence technology”, but not to train or enhance the AI models themselves.
    Studios earning more than $1 million/year from projects that use MetaHuman characters need Unreal Engine seat licenses, which currently cost $1,850/year.
    However, since MetaHuman characters and animations are classed as ‘non-engine products’, they can be used in games created in other engines, like Unity or Godot, without incurring the 5% cut of the revenue that Epic takes from Unreal Engine games.

    The free MetaHuman for Maya plugin lets you edit MetaHumans with Maya’s native tools.

    New plugins streamline editing MetaHumans in Maya and Houdini

    Last but not least, Epic Games has released new free add-ons intended to streamline the process of editing MetaHumans in other DCC software. The MetaHuman for Maya plugin makes it possible to manipulate the MetaHuman mesh directly with Maya’s standard mesh-editing and sculpting tools.
    Users can also create MetaHuman-compatible hair grooms using Maya’s XGen toolset, and export them in Alembic format.
    The MetaHuman for Houdini plugin seems to be confined to grooming, with users able to create hairstyles using Houdini’s native tools, and export them in Alembic format.
    The plugins themselves are supplemented by MetaHuman Groom Starter Kits for Maya and Houdini, which provide readymade sample files for generating grooms.
    Price, licensing and system requirements

    MetaHuman Creator and MetaHuman Animator are integrated into Unreal Engine 5.6. The Unreal Editor is compatible with Windows 10+, macOS 14.0+ and RHEL/Rocky Linux 8+. The MetaHuman for Maya plugin is compatible with Maya 2022-2025. The MetaHuman for Houdini plugin is compatible with Houdini 20.5 with SideFX Labs installed.
    All of the software is free to use, including for commercial projects, if you earn under $1 million/year. You can find more information on licensing in the story above.
    Read an overview of the changes to the MetaHuman software on Epic Games’ blog
    Download the free MetaHuman for Maya and Houdini plugins and starter kits
    Read Epic Games’ FAQs about the changes to licensing for MetaHumans

    Have your say on this story by following CG Channel on Facebook, Instagram and X. As well as being able to comment on stories, followers of our social media accounts can see videos we don’t post on the site itself, including making-ofs for the latest VFX movies, animations, games cinematics and motion graphics projects.
  • A new movie taking on the tech bros

    Hi, friends! Welcome to Installer No. 85, your guide to the best and Verge-iest stuff in the world. This week, I’ve been reading about Sean Evans and music fraud and ayahuasca, playing with the new Obsidian Bases feature, obsessing over “Cliche” more times than I’m proud of, installing some Elgato Key Lights to improve my WFH camera look, digging the latest beta of Artifacts, and downloading every podcast I can find because I have 20 hours of driving to do this weekend. I also have for you a very funny new movie about tech CEOs, a new place to WhatsApp, a great new accessory for your phone, a helpful crypto politics explainer, and much more. Short week this week, but still lots going on. Let’s do it.

    The Drop

    Mountainhead. I mean, is there a more me-coded pitch than “Succession vibes, but about tech bros”? It’s about a bunch of billionaires who more or less run the world and are also more or less ruining it. You’ll either find this hilarious, way too close to home, or both.

    WhatsApp for iPad. I will never, ever understand why Meta hates building iPad apps. But it finally launched the most important one! The app itself is extremely fine and exactly what you’d think it would be, but whatever. It exists! DO INSTAGRAM NEXT.

    Post Games. A new podcast from Polygon, all about video games. It’s only a couple episodes deep, but so far I love the format: it’s really smart and extremely thoughtful, but it’s also very silly in spots. Big fan.

    The Popsockets Kick-Out Grip. I am a longtime, die-hard Popsockets user and evangelist, and the new model fixes my one gripe with the thing by working as both a landscape and portrait kickstand. It is highway robbery for a phone holder, but this is exactly the thing I wanted.

    “Dance with Sabrina.” A new, real-time competitive rhythm game inside of Fortnite, in which you try to do well enough to earn the right to actually help create the show itself.
    Super fun concept, though all these games are better with pads, guitars, or really anything but a normal controller.

    Lazy 2.0. Lazy is a stealthy but fascinating note-taking tool, and it does an unusually good job of integrating with files and apps. The new version is very AI-forward, basically bringing a personalized chatbot and all your notes to your whole computer. Neat!

    Elden Ring Nightreign. A multiplayer-heavy spinoff of the game that I cannot get my gamer friends to shut up about, even years after it came out. I’ve seen a few people call the game a bit small and repetitive, but next to Elden Ring I suppose most things are.

    The Tapo DL100 Smart Deadbolt Door Lock. A door lock with, as far as I can tell, every feature I want in a smart lock: a keypad, physical keys, super long battery life, and lots of assistant integrations. It does look… huge? But it’s pretty bland-looking, which is a good thing.

    Implosion: The Titanic Sub Disaster. One of a few Titan-related documentaries coming this summer, meant to try and explain what led to the awful events of a couple years ago. I haven’t seen this one yet, but the reviews are solid — and the story seems even sadder and more infuriating than we thought.

    “The growing scandal of $TRUMP.” I love a good Zeke Faux take on crypto, whether it’s a book or a Search Engine episode. This interview with Ezra Klein is a great explainer of how the Trump family got so into crypto and how it’s being used to move money in deeply confusing and clearly corrupt ways.

    Cameron Faulkner isn’t technically new to The Verge, he’s just newly back at The Verge. In addition to being a commerce editor on our team, he also wrote one of the deepest dives into webcams you’ll ever find, plays a lot of games, has more thoughts about monitors than any reasonable person should, and is extremely my kind of person. Since he’s now so very back, I asked Cam to share his homescreen with us, as I always try to do with new people here.
    Here it is, plus some info on the apps he uses and why:

    The phone: Pixel 9 Pro.

    The wallpaper: It’s an “Emoji Workshop” creation, which is a feature that’s built into Android 14 and more recent updates. It mashes together emoji into the patterns and colors of your choosing. I picked this one because I like sushi, and I love melon / coral color tones.

    The apps: Google Keep, Settings, Clock, Phone, Chrome, Pocket Casts, Messages, Spotify.

    I haven’t downloaded a new app in ages. What’s shown on my homescreen has been there, unmoved, for longer than I can remember. I have digital light switches, a to-do list with the great Stuff widget, a simple Google Fit widget to show me how much I moved today, and a couple Google Photos widgets of my lovely wife and son. I could probably function just fine if every app shuffled its location on my homescreen, except for the bottom row. That’s set in stone, never to be fiddled with.

    I also asked Cameron to share a few things he’s into right now. Here’s what he sent back:

    Righteous Gemstones on HBO Max. It’s a much smarter comedy than I had assumed, and I’m delighted to have four seasons to catch up on. I’m really digging Clair Obscur: Expedition 33, which achieves the feat of breakneck pacing and a style that rivals Persona 5, which is high praise. I have accrued well over a dozen Switch 2 accessories, and I’m excited to put them to the test once I get a console on launch day.

    Crowdsourced

    Here’s what the Installer community is into this week. I want to know what you’re into right now, as well! Email installer@theverge.com or message me on Signal — @davidpierce.11 — with your recommendations for anything and everything, and we’ll feature some of our favorites here every week. For even more great recommendations, check out the replies to this post on Threads and this post on Bluesky.

    “The Devil’s Plan.
    This Netflix original South Korean reality show locks 14 contestants in a windowless living space that’s part mansion, part prison, part room escape, and challenges them to eliminate each other in a series of complicated tabletop games.” — Travis

    “If you’re a fan of Drive to Survive, I’m happy to report that the latest season of Netflix’s series on NASCAR is finally good, and a reasonable substitute for that show once you’ve finished it.” — Christopher

    “I switched to a Pixel 9 Pro XL and Pixel Watch 3 from an iPhone and Apple Watch about 6 months ago and found Open Bubbles, an open source alternative to BlueBubbles that does need a Mac but doesn’t need that Mac to remain on. You just need a one-time hardware identifier from it, then it gives you full iMessage, Find My, FaceTime, and iCloud shared albums on Android and Windows using an email address. So long as you can get your contacts to iMessage your email instead of your number, it works great.” — Tim

    “Playing Mario Kart 8 Deluxe for the last time before Mario Kart World arrives next week and takes over my life!” — Ravi

    “With Pocket being killed off I’ve started using my RSS reader — which is Inoreader — instead as a suitable replacement. I only switched over to Pocket after Omnivore shut down.” — James

    “I just got a Boox Go 10.3 for my birthday and love it. The lack of front lighting is the biggest downfall. It is also only on Android 12 so I cannot load a corporate profile. It feels good to write on, almost as good as my cheaper fountain pen and paper. It is helping me organize multiple notebooks and scraps of paper.” — Sean

    “Giving Tweek a bit of a go, and for a lightweight weekly planner it’s beautiful. I also currently use Motion for project management of personal tasks and when I was doing my Master’s. I really like the Gantt view to map out long term personal and study projects.” — Astrid

    “Might I suggest Elle Griffin’s work at The Elysian?
    How she’s thinking through speculative futures and a cooperative media system is fascinating.” — Zach

    “GeForce Now on Steam Deck!” — Steve

    Signing off

    One of the reasons I like making this newsletter with all of you is that it’s a weekly reminder that, hey, actually, there’s a lot of awesome people doing awesome stuff out there on the internet. I spend a lot of my time talking to people who say AI is going to change everything, and we’re all going to just AI ourselves into oblivion and be thrilled about it — a theory I increasingly think is both wrong and horrifying. And then this week I read a blog post from the great Dan Sinker, who called this moment “the Who Cares Era, where completely disposable things are shoddily produced for people to mostly ignore.” You should read the whole thing, but here’s a bit I really loved:

    “Using extraordinary amounts of resources, it has the ability to create something good enough, a squint-and-it-looks-right simulacrum of normality. If you don’t care, it’s miraculous. If you do, the illusion falls apart pretty quickly. The fact that the userbase for AI chatbots has exploded exponentially demonstrates that good enough is, in fact, good enough for most people. Because most people don’t care.”

    I don’t think this describes everything and everyone, and neither does Sinker, but I do think it’s more true than it should be. And I increasingly think our job, maybe our method of rebellion, is to be people who care, who have taste, who like and share and look for good things, who read and watch and look at those things on purpose instead of just staring slackjawed at whatever slop is placed between the ads they hope we won’t really notice. I think there are a lot of fascinating ways that AI can be useful, but we can’t let it train us to accept slop just because it’s there. Sorry, this got more existential than I anticipated.
    But I’ve been thinking about it a lot, and I’m going to try and point Installer even more at the stuff that matters, made by people who care. I hope you’ll hold me to that.

    See you next week!
This Netflix original South Korean reality show locks 14 contestants in a windowless living space that’s part mansion, part prison, part room escape, and challenges them to eliminate each other in a series of complicated tabletop games.” — Travis“If you’re a fan of Drive to Survive, I’m happy to report that the latest season of Netflix’s series on NASCAR is finally good, and a reasonable substitute for that show once you’ve finished it.” — Christopher“I switched to a Pixel 9 Pro XL and Pixel Watch 3 from an iPhone and Apple Watch about 6 months ago and found Open Bubbles, an open source alternative to BlueBubbles that does need a Mac but doesn’t need that Mac to remain on, You just need a one-time hardware identifier from it, then it gives you full iMessage, Find My, FaceTime, and iCloud shared albums on Android and Windows using an email address. So long as you can get your contacts to iMessage your email instead of your number, it works great.” — Tim“Playing Mario Kart 8 Deluxe for the last time before Mario Kart World arrives next week and takes over my life!” — Ravi“With Pocket being killed off I’ve started using my RSS reader — which is Inoreader — instead as a suitable replacement. I only switched over to Pocket after Omnivore shut down.” — James“I just got a Boox Go 10.3 for my birthday and love it. The lack of front lighting is the biggest downfall. It is also only on Android 12 so I cannot load a corporate profile. It feels good to write on just, almost as good as my cheaper fountain pen and paper. It is helping me organize multiple notebooks and scraps of paper.” — Sean“Giving Tweek a bit of a go, and for a lightweight weekly planner it’s beautiful. I also currently use Motion for project management of personal tasks and when I was doing my Master’s. I really like the Gantt view to map out long term personal and study projects.” — Astrid“Might I suggest Elle Griffin’s work at The Elysian? 
How she’s thinking through speculative futures and a cooperative media system is fascinating.” — Zach“GeForce Now on Steam Deck!” — SteveSigning offOne of the reasons I like making this newsletter with all of you is that it’s a weekly reminder that, hey, actually, there’s a lot of awesome people doing awesome stuff out there on the internet. I spend a lot of my time talking to people who say AI is going to change everything, and we’re all going to just AI ourselves into oblivion and be thrilled about it — a theory I increasingly think is both wrong and horrifying.And then this week I read a blog post from the great Dan Sinker, who called this moment “the Who Cares Era, where completely disposable things are shoddily produced for people to mostly ignore.” You should read the whole thing, but here’s a bit I really loved:“Using extraordinary amounts of resources, it has the ability to create something good enough, a squint-and-it-looks-right simulacrum of normality. If you don’t care, it’s miraculous. If you do, the illusion falls apart pretty quickly. The fact that the userbase for AI chatbots has exploded exponentially demonstrates that good enough is, in fact, good enough for most people. Because most people don’t care.”I don’t think this describes everything and everyone, and neither does Sinker, but I do think it’s more true than it should be. And I increasingly think our job, maybe our method of rebellion, is to be people who care, who have taste, who like and share and look for good things, who read and watch and look at those things on purpose instead of just staring slackjawed at whatever slop is placed between the ads they hope we won’t really notice. I think there are a lot of fascinating ways that AI can be useful, but we can’t let it train us to accept slop just because it’s there. Sorry, this got more existential than I anticipated. 
But I’ve been thinking about it a lot, and I’m going to try and point Installer even more at the stuff that matters, made by people who care. I hope you’ll hold me to that.See you next week!See More: #new #movie #taking #tech #bros
    WWW.THEVERGE.COM
    A new movie taking on the tech bros
    Hi, friends! Welcome to Installer No. 85, your guide to the best and Verge-iest stuff in the world. (If you’re new here, welcome, sorry in advance that this week is a tiny bit politics-y, and also you can read all the old editions at the Installer homepage.) This week, I’ve been reading about Sean Evans and music fraud and ayahuasca, playing with the new Obsidian Bases feature, obsessing over “Cliche” more times than I’m proud of, installing some Elgato Key Lights to improve my WFH camera look, digging the latest beta of Artifacts, and downloading every podcast I can find because I have 20 hours of driving to do this weekend.
    I also have for you a very funny new movie about tech CEOs, a new place to WhatsApp, a great new accessory for your phone, a helpful crypto politics explainer, and much more. Short week this week, but still lots going on. Let’s do it.
    (As always, the best part of Installer is your ideas and tips. What are you reading / playing / watching / listening to / shopping for / doing with a Raspberry Pi this week? Tell me everything: installer@theverge.com. And if you know someone else who might enjoy Installer, tell them to subscribe here. And if you haven’t subscribed, you should! You’ll get every issue for free, a day early, in your inbox.)

    The Drop

    Mountainhead. I mean, is there a more me-coded pitch than “Succession vibes, but about tech bros?” It’s about a bunch of (pretty recognizable) billionaires who more or less run the world and are also more or less ruining it. You’ll either find this hilarious, way too close to home, or both.
    WhatsApp for iPad. I will never, ever understand why Meta hates building iPad apps. But it finally launched the most important one! The app itself is extremely fine and exactly what you’d think it would be, but whatever. It exists! DO INSTAGRAM NEXT.
    Post Games. A new podcast from Polygon, all about video games. It’s only a couple episodes deep, but so far I love the format: it’s really smart and extremely thoughtful, but it’s also very silly in spots. Big fan.
    The Popsockets Kick-Out Grip. I am a longtime, die-hard Popsockets user and evangelist, and the new model fixes my one gripe with the thing by working as both a landscape and portrait kickstand. $40 is highway robbery for a phone holder, but this is exactly the thing I wanted.
    “Dance with Sabrina.” A new, real-time competitive rhythm game inside of Fortnite, in which you try to do well enough to earn the right to actually help create the show itself. Super fun concept, though all these games are better with pads, guitars, or really anything but a normal controller.
    Lazy 2.0. Lazy is a stealthy but fascinating note-taking tool, and it does an unusually good job of integrating with files and apps. The new version is very AI-forward, basically bringing a personalized chatbot and all your notes to your whole computer. Neat!
    Elden Ring Nightreign. A multiplayer-heavy spinoff of the game that I cannot get my gamer friends to shut up about, even years after it came out. I’ve seen a few people call the game a bit small and repetitive, but next to Elden Ring I suppose most things are.
    The Tapo DL100 Smart Deadbolt Door Lock. A $70 door lock with, as far as I can tell, every feature I want in a smart lock: a keypad, physical keys, super long battery life, and lots of assistant integrations. It does look… huge? But it’s pretty bland-looking, which is a good thing.
    Implosion: The Titanic Sub Disaster. One of a few Titan-related documentaries coming this summer, meant to try and explain what led to the awful events of a couple years ago. I haven’t seen this one yet, but the reviews are solid — and the story seems even sadder and more infuriating than we thought.
    “The growing scandal of $TRUMP.” I love a good Zeke Faux take on crypto, whether it’s a book or a Search Engine episode. This interview with Ezra Klein is a great explainer of how the Trump family got so into crypto and how it’s being used to move money in deeply confusing and clearly corrupt ways.

    Cameron Faulkner isn’t technically new to The Verge; he’s just newly back at The Verge. In addition to being a commerce editor on our team, he also wrote one of the deepest dives into webcams you’ll ever find, plays a lot of games, has more thoughts about monitors than any reasonable person should, and is extremely my kind of person. Since he’s now so very back, I asked Cam to share his homescreen with us, as I always try to do with new people here. Here it is, plus some info on the apps he uses and why:
    The phone: Pixel 9 Pro.
    The wallpaper: It’s an “Emoji Workshop” creation, which is a feature that’s built into Android 14 and more recent updates. It mashes together emoji into the patterns and colors of your choosing. I picked this one because I like sushi, and I love melon / coral color tones.
    The apps: Google Keep, Settings, Clock, Phone, Chrome, Pocket Casts, Messages, Spotify.
    I haven’t downloaded a new app in ages. What’s shown on my homescreen has been there, unmoved, for longer than I can remember. I have digital light switches, a to-do list with the great (but paid) Stuff widget, a simple Google Fit widget to show me how much I moved today, and a couple Google Photos widgets of my lovely wife and son. I could probably function just fine if every app shuffled its location on my homescreen, except for the bottom row. That’s set in stone, never to be fiddled with.
    I also asked Cameron to share a few things he’s into right now. Here’s what he sent back: Righteous Gemstones on HBO Max. It’s a much smarter comedy than I had assumed (but it’s still dumb in the best ways), and I’m delighted to have four seasons to catch up on. I’m really digging Clair Obscur: Expedition 33, which achieves the feat of breakneck pacing (the game equivalent of a page-turner) and a style that rivals Persona 5, which is high praise. I have accrued well over a dozen Switch 2 accessories, and I’m excited to put them to the test once I get a console on launch day.

    Crowdsourced

    Here’s what the Installer community is into this week. I want to know what you’re into right now, as well! Email installer@theverge.com or message me on Signal — @davidpierce.11 — with your recommendations for anything and everything, and we’ll feature some of our favorites here every week. For even more great recommendations, check out the replies to this post on Threads and this post on Bluesky.
    “The Devil’s Plan. This Netflix original South Korean reality show locks 14 contestants in a windowless living space that’s part mansion, part prison, part room escape, and challenges them to eliminate each other in a series of complicated tabletop games. (If this sounds familiar, it’s a spiritual successor to the beloved series The Genius from the mid-2010s.)” — Travis
    “If you’re a fan of Drive to Survive, I’m happy to report that the latest season of Netflix’s series on NASCAR is finally good, and a reasonable substitute for that show once you’ve finished it.” — Christopher
    “I switched to a Pixel 9 Pro XL and Pixel Watch 3 from an iPhone and Apple Watch about 6 months ago and found Open Bubbles, an open source alternative to BlueBubbles that does need a Mac but doesn’t need that Mac to remain on. You just need a one-time hardware identifier from it, then it gives you full iMessage, Find My, FaceTime, and iCloud shared albums on Android and Windows using an email address. So long as you can get your contacts to iMessage your email instead of your number, it works great.” — Tim
    “Playing Mario Kart 8 Deluxe for the last time before Mario Kart World arrives next week and takes over my life!” — Ravi
    “With Pocket being killed off I’ve started using my RSS reader — which is Inoreader — instead as a suitable replacement. I only switched over to Pocket after Omnivore shut down.” — James
    “I just got a Boox Go 10.3 for my birthday and love it. The lack of front lighting is the biggest downfall. It is also only on Android 12 so I cannot load a corporate profile. It feels good to write on, almost as good as my cheaper fountain pen and paper. It is helping me organize multiple notebooks and scraps of paper.” — Sean
    “Giving Tweek a bit of a go, and for a lightweight weekly planner it’s beautiful. I also currently use Motion for project management of personal tasks and when I was doing my Master’s. I really like the Gantt view to map out long term personal and study projects. (I also got a student discount for Motion, but it’s still expensive.)” — Astrid
    “Might I suggest Elle Griffin’s work at The Elysian? How she’s thinking through speculative futures and a cooperative media system is fascinating.” — Zach
    “GeForce Now on Steam Deck!” — Steve

    Signing off

    One of the reasons I like making this newsletter with all of you is that it’s a weekly reminder that, hey, actually, there’s a lot of awesome people doing awesome stuff out there on the internet. I spend a lot of my time talking to people who say AI is going to change everything, and we’re all going to just AI ourselves into oblivion and be thrilled about it — a theory I increasingly think is both wrong and horrifying.
    And then this week I read a blog post from the great Dan Sinker, who called this moment “the Who Cares Era, where completely disposable things are shoddily produced for people to mostly ignore.” You should read the whole thing, but here’s a bit I really loved:
    “Using extraordinary amounts of resources, it has the ability to create something good enough, a squint-and-it-looks-right simulacrum of normality. If you don’t care, it’s miraculous. If you do, the illusion falls apart pretty quickly. The fact that the userbase for AI chatbots has exploded exponentially demonstrates that good enough is, in fact, good enough for most people. Because most people don’t care.”
    I don’t think this describes everything and everyone, and neither does Sinker, but I do think it’s more true than it should be. And I increasingly think our job, maybe our method of rebellion, is to be people who care, who have taste, who like and share and look for good things, who read and watch and look at those things on purpose instead of just staring slackjawed at whatever slop is placed between the ads they hope we won’t really notice. I think there are a lot of fascinating ways that AI can be useful, but we can’t let it train us to accept slop just because it’s there. Sorry, this got more existential than I anticipated. But I’ve been thinking about it a lot, and I’m going to try and point Installer even more at the stuff that matters, made by people who care. I hope you’ll hold me to that.
    See you next week!
    0 Комментарии 0 Поделились
  • NASA Wants Your Help to Study These Rare, High-Altitude Clouds That Appear to Glow at Sunrise and Sunset

    NASA Wants Your Help to Study These Rare, High-Altitude Clouds That Appear to Glow at Sunrise and Sunset
    Noctilucent clouds usually form close to the poles, but in recent decades, they’re being spotted closer to the Equator

    Noctilucent clouds over the Baltic Sea, as seen from Germany in 2019. Typically seen in polar regions, the clouds are increasingly appearing at mid- and low latitudes.
    Matthias Süßen via Wikimedia Commons under CC BY-SA 4.0

    While astronomy and clouds don’t usually mix, a rare type of atmospheric phenomenon, called noctilucent or night-shining clouds, offers an exception, as reported by Space.com’s Anthony Wood.
    Noctilucent clouds are wispy, silvery-blue clouds found at much higher altitudes than usual. While most clouds form in the troposphere between Earth’s surface and about 11 miles above the ground, these appear in the mesosphere, up to 50 miles high. They become visible right after sunset or right before sunrise, when the sun illuminates them from beyond the horizon. The rare clouds likely come into being as ice crystallizes on meteor dust, when the mesosphere is rich with water and very cold, per EarthSky.
    Paradoxically, noctilucent clouds usually form in the summer. That’s because during the hottest months of the year, surface air warms up and rises, expanding and cooling as it does so. Alongside other processes, this cools the mesosphere to temperatures as low as minus 210 degrees Fahrenheit, according to EarthSky. This happens around Earth’s poles, so people are thus most likely to see noctilucent clouds in polar regions during the summer—for the Northern Hemisphere, the clouds tend to appear between late May and mid-August.

    What are noctilucent clouds and how can you see them?

    More recently, however, the rare phenomenon has been spotted closer and closer to the Equator. As such, NASA published a statement earlier this month asking citizen scientists to submit their observations and photographs of noctilucent clouds—even pictures taken in the past—to a crowdsourced research project called Space Cloud Watch.
    “Combined with satellite data and model simulations, your data can help us figure out why these noctilucent clouds are more frequently appearing at mid-low latitudes,” the agency explains.
    According to Royal Museums Greenwich, some scholars suggest climate change is responsible for that shift, as well as rocket launches. Rocket exhaust trails consist of small ice particles and water vapor, which can contribute to noctilucent clouds once they reach the mesosphere.
    Last summer, Europe saw especially spectacular noctilucent clouds, despite the fact that strong solar activity—such as what we’re experiencing now at the solar maximum—would normally have dissipated them, as reported by Spaceweather.com. This might be partially explained by the underwater volcano near Tonga that erupted in 2022 and threw a plume of water vapor into the atmosphere.
    “That was two years ago, but it takes about two years for the vapor to circulate up to the mesosphere, where NLCs [noctilucent clouds] form. Water is a key ingredient for NLCs, so Tonga’s moisture could be turbocharging the clouds,” Spaceweather.com reported in June 2024, adding that the 124 rocket launches so far that year likely also played a role.
    If you’re hoping to spot the night-shining clouds yourself this summer, the good news is that they’re visible to the naked eye. The bad news is that the only ways to predict their natural appearance are by checking for ideal upper atmospheric conditions and keeping an eye on north-facing webcams in regions east of your location, as reported by Stuart Atkinson for BBC Sky at Night Magazine last year. If noctilucent clouds are forming in those eastern skies, where the sun sets earlier, chances are that they might be soon visible in your area.
    And if you do happen to catch a glimpse, don’t forget to snap a photo for science. “I find these clouds fascinating and can’t wait to see the amazing pictures,” Space Cloud Watch project lead Chihoko Cullens, a research scientist at the University of Colorado, Boulder, says in NASA’s statement.

    WWW.SMITHSONIANMAG.COM
  • 6 Best Webcams (2025), Tested and Reviewed

    You might see your coworkers in only two dimensions, but don’t let that stop you from looking your best.
    WWW.WIRED.COM
  • Acer Expands Swift X Series with New AI-Enhanced Creator Laptops

Acer has introduced two new laptops under its Swift X series aimed at content creators and creative professionals: the Swift X 14 AI and the Swift X 14. Both models are equipped with an NVIDIA GeForce RTX 5070 Laptop GPU and a 3K OLED touch display, with AI-powered enhancements and hardware designed to support intensive workloads.

    Acer Swift X 14 AI

The Swift X 14 AI is powered by AMD's new Ryzen AI 300 series processors, including up to the Ryzen AI 9 365, offering 50 TOPS of NPU performance. As a Copilot+ PC, it supports a range of Microsoft AI features like Recall, Cocreator, and Windows Studio Effects. In contrast, the standard Swift X 14 is built around Intel's Core Ultra 9 285H processor and features Intel AI Boost for up to 13 TOPS. Both models are validated with NVIDIA Studio Drivers and optimized for AI-assisted tasks like video editing, 3D rendering, and image generation.
    Each device includes a 14.5" Calman-verified OLED display with 2880x1800 resolution, 100% DCI-P3 color gamut coverage, and VESA DisplayHDR True Black 500 certification. These displays also feature Corning Gorilla Glass protection and are offered in touchscreen variants.

    Acer Swift X 14


    The laptops are fitted with up to 32GB of LPDDR4X memory and 2TB of PCIe Gen 4 SSD storage. They offer a full suite of ports, including USB-C, HDMI, USB-A and a microSD card reader. For creators who draw or annotate, both models support MPP 2.5 styluses with tilt input.
    Thermal performance is handled through a dual-fan system, copper heat pipes, and an air-inlet keyboard. Additional features include a 1080p IR webcam with Windows Hello support, biometric login, and Acer User Sensing for auto-lock and wake functionality. Battery life is rated at up to 16 hours for the Intel variant and 11 hours for the AMD-based model, based on video playback tests.
    The Acer Swift X 14 AI and Swift X 14 will be available in EMEA markets starting July 2025, with prices beginning at USD 1,799.
  • Huawei’s MateBook Fold Ultimate Design Redefines Mobile Computing with World’s First 18-inch Foldable Display

    Huawei just shattered our expectations of what a laptop can be. The new MateBook Fold Ultimate Design doesn’t just push boundaries. It obliterates them.
    Designer: Huawei
    Unveiled on May 19, this groundbreaking device introduces the world’s first 18-inch foldable display in a laptop form factor. But calling it merely a laptop feels almost reductive. When unfolded, you’re looking at a stunning 18-inch canvas that somehow weighs less than many 13-inch ultrabooks. When folded, it transforms into a compact 13-inch device that slides effortlessly into a bag.

    What makes this design achievement particularly impressive isn’t just the folding display itself. It’s how Huawei solved the countless engineering challenges that have prevented others from creating something this ambitious.
    The innovation extends beyond mere technical specifications. Huawei has reimagined the fundamental relationship between users and their computing devices, creating something that adapts to various workflows rather than forcing users to adapt to rigid form factors.
    Engineering Marvel: The Hinge
    The hinge deserves special attention. Stretching 285mm across the device, Huawei calls it the “world’s largest basalt water drop hinge.” This isn’t marketing hyperbole. The three-stage shaft with mortise and tenon structure delivers a 400% increase in hovering torque compared to standard designs. What does this mean for users? Exceptional stability at viewing angles between 30° and 150°, while maintaining smooth operation at shallow angles between 0° and 20°.

    When unfolded, the MateBook measures a mere 7.3mm thick. For perspective, that’s thinner than many smartphones. Even when folded, it maintains a relatively svelte 14.9mm profile while weighing just 1.16kg. The exterior combines premium leather and metal elements, available in Black, Blue, and White colorways.
    The integrated kickstand on the rear panel adds another dimension of versatility. Position the device in landscape or portrait orientation at various angles for different use cases. Present to clients, watch content, sketch ideas, or type documents. The physical form adapts to your needs rather than forcing you to adapt to it.

    This level of engineering precision didn’t happen overnight. Huawei claims thousands of prototypes were tested before arriving at this final design, with particular attention paid to the durability of the folding mechanism. The company promises the hinge will maintain structural integrity through thousands of folding cycles.
    Display Technology
    But the true star is undoubtedly the display itself. The dual-layer LTPO OLED panel delivers an immersive visual experience with a 92% screen-to-body ratio. When fully expanded, you’re looking at an 18-inch canvas with a 4:3 aspect ratio and 3.3K resolution (3296 × 2472 pixels). Fold it, and you have a more conventional 13-inch display with a 3:2 aspect ratio (2472 × 1648 pixels).

    This isn’t just any OLED panel. Huawei implemented the first commercial laptop application of LTPO (low-temperature polycrystalline oxide) technology, reducing power consumption by 30% while enabling adaptive refresh rates. The 2,000,000:1 contrast ratio ensures deep blacks and vibrant colors across the P3 wide color gamut, while peak brightness reaches an impressive 1600 nits.
    For those concerned about eye strain during extended use, the screen incorporates 1440Hz high-frequency PWM dimming and carries TÜV Rheinland Eye Comfort 3.0 certification.
    Color accuracy hasn’t been overlooked either. Huawei claims each display is factory calibrated to achieve a Delta E of less than 1, making it suitable for professional creative work. The anti-reflective coating helps maintain visibility even in challenging lighting conditions.
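As a sanity check, the quoted panel figures are internally consistent: taking the 3296 × 2472 unfolded and 2472 × 1648 folded pixel counts from Huawei's spec recap, both configurations work out to roughly 229 pixels per inch, which is exactly what you'd expect from a single physical panel. A quick sketch of the arithmetic (the helper function name here is ours, not Huawei's):

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixel density: diagonal length in pixels divided by diagonal size in inches."""
    return math.hypot(width_px, height_px) / diagonal_in

# Unfolded: 18-inch diagonal at 3296 x 2472 (3.3K, 4:3)
print(round(ppi(3296, 2472, 18)))   # 229
# Folded: 13-inch diagonal at 2472 x 1648 (3:2) -- same panel, same density
print(round(ppi(2472, 1648, 13)))   # 229
```

The matching densities also confirm the stated aspect ratios: 3296:2472 reduces to 4:3, and 2472:1648 to 3:2.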
    Thermal Innovation
    The revolutionary design extends beyond the visible elements. Cooling such powerful components in an ultra-thin chassis required innovative solutions. Huawei engineered diamond aluminum dual fans and an ultra-thin antigravity vapor chamber heat sink. The copper-steel composite 3D vapor chamber and distributed component layout optimize thermal performance without excessive fan noise.

    Traditional cooling systems simply wouldn’t work in a device this thin. Huawei’s approach involves separating heat-generating components across the chassis to prevent hotspots. The vapor chamber technology efficiently transfers heat away from critical components to maintain performance during intensive tasks.
    Fan noise has been carefully tuned to remain below 28dB during typical usage scenarios. This makes the MateBook Fold Ultimate suitable for quiet environments like libraries and meeting rooms where traditional laptop fans might prove distracting.
    Performance and Connectivity
    Despite its slim profile, performance hasn’t been compromised. The MateBook Fold Ultimate comes equipped with 32GB of RAM and storage options of either 1TB or 2TB SSD. While Huawei hasn’t explicitly confirmed the processor in all materials, some sources indicate it uses their own Kirin X90 chipset, a fully Chinese-manufactured ARM processor.

    A 74.69Wh battery powers the device, with support for fast charging through the included 140W USB-C charger. Connectivity includes strategically placed USB-C ports, one on top and one on the side, along with dual-band Wi-Fi 6 and Bluetooth 5.2.
    The decision to position USB-C ports on different edges of the device shows thoughtful design consideration. This arrangement allows for convenient charging regardless of how the device is positioned or folded. The absence of legacy ports might disappoint some users, but reflects the forward-looking design philosophy behind the entire product.
    Audio-Visual Experience
    The audio experience matches the visual excellence with six speakers in total. Three 2W speakers work alongside three 1W speakers, enhanced by Huawei Sound technology. For video conferencing, an 8MP front-facing camera works alongside four microphones to ensure clear communication.

    Speaker placement has been carefully considered to maintain audio quality regardless of the device’s orientation. Whether used as a tablet, laptop, or in presentation mode, the sound remains clear and directional. The multi-microphone array uses AI-powered noise cancellation to isolate voices from background noise during calls.
    The camera quality represents a significant upgrade from typical laptop webcams. The 8MP sensor captures more detail than the standard 720p cameras found in most laptops, while the wide-angle lens ensures you stay in frame even when moving during calls.
    HarmonyOS 5: A New Computing Paradigm
    Perhaps the most intriguing aspect beyond the hardware is the software. The MateBook Fold Ultimate runs HarmonyOS 5, marking the first time this operating system appears on a Huawei laptop. This represents a significant departure from Windows, offering users a third major OS option alongside Windows and macOS.

    HarmonyOS 5 is designed specifically for this unique form factor. Intuitive gestures include three-finger swipes to move windows across screens and five-finger spreads to maximize applications. When positioned at a 90-degree angle like a traditional laptop, the bottom half can function as a virtual keyboard with customizable skins, adjustable key spacing, and haptic feedback through a linear motor.

    The operating system adapts intelligently to different usage scenarios. When folded, it automatically adjusts the interface for a more traditional laptop experience. When fully opened, it transforms into a tablet-like environment optimized for touch interaction. This contextual awareness extends to connected peripherals as well, with the interface changing based on whether the physical keyboard is attached.
    Input Options
    For those who prefer physical keys, Huawei includes an ultra-thin 5mm wireless keyboard weighing just 290g. This keyboard features 1.5mm key travel, lasts up to 24 days on a single charge, and magnetically attaches to the back of the device when not in use.

    The keyboard design deserves special mention. Despite its ultra-thin profile, Huawei has managed to deliver a surprisingly satisfying typing experience. The keys offer tactile feedback that rivals much thicker keyboards, while the full-size layout prevents the cramped feeling often associated with portable keyboards.
    Touch input has been optimized as well. The display supports 10-point multi-touch with pressure sensitivity, making it suitable for digital art and note-taking. Palm rejection technology works remarkably well, allowing users to rest their hand on the screen while writing or drawing without causing unwanted input.
    Versatility and Use Cases
    The versatility of the MateBook Fold Ultimate is perhaps its greatest strength. It transitions seamlessly between tablet mode, laptop configuration, and presentation setup. The built-in kickstand allows positioning at various angles in both portrait and landscape orientations.

    Creative professionals will appreciate the large canvas for digital art and design work. The 18-inch display provides ample space for complex projects, while the foldable nature means you can still take this capability on the road. Business users can leverage the presentation mode for client meetings, with the large screen eliminating the need for external displays in many scenarios.
    Students might find the combination of note-taking capabilities and full-size keyboard particularly appealing. The ability to fold the device partially creates a natural reading angle for digital textbooks, while the performance specifications handle research and productivity applications with ease.
    Market Position
    Priced at CNY 23,999 for the 1TB model and CNY 26,999 for the 2TB variant, the MateBook Fold Ultimate Design positions itself firmly in the premium market. It will initially launch in China on June 6, with international availability planned for later dates.

    While foldable laptops aren’t entirely new (Lenovo pioneered the concept years ago), Huawei’s implementation represents a significant leap forward. The larger screen, thinner profile, innovative hinge mechanism, and comprehensive ecosystem integration through HarmonyOS demonstrate what’s possible when design and engineering excellence converge.
    The pricing strategy places this device in competition with high-end laptops and creative workstations rather than mainstream consumer devices. Huawei is clearly targeting professionals and enthusiasts who value cutting-edge technology and are willing to invest in unique capabilities not found elsewhere.
    Future Implications
    The MateBook Fold Ultimate Design doesn’t just represent another iterative step in laptop evolution. It reimagines what portable computing can be. Whether this specific implementation becomes the new standard remains to be seen, but Huawei has undoubtedly expanded our understanding of what’s possible in mobile computing design.

    As with most breakthrough technologies, we can expect the concepts pioneered here to eventually trickle down to more affordable devices. The engineering solutions developed for this premium device will likely inform future products across various price points, potentially making foldable displays a common feature in laptops within the next few years.

    The introduction of HarmonyOS to the laptop form factor also signals Huawei’s ambitions beyond smartphones and tablets. Creating a cohesive ecosystem across all computing devices could position the company as a more comprehensive alternative to established players in the personal computing space.
    #huaweis #matebook #fold #ultimate #design
    Huawei’s MateBook Fold Ultimate Design Redefines Mobile Computing with World’s First 18-inch Foldable Display
    Huawei just shattered our expectations of what a laptop can be. The new MateBook Fold Ultimate Design doesn’t just push boundaries. It obliterates them. Designer: Huawei Unveiled on May 19, this groundbreaking device introduces the world’s first 18-inch foldable display in a laptop form factor. But calling it merely a laptop feels almost reductive. When unfolded, you’re looking at a stunning 18-inch canvas that somehow weighs less than many 13-inch ultrabooks. When folded, it transforms into a compact 13-inch device that slides effortlessly into a bag. What makes this design achievement particularly impressive isn’t just the folding display itself. It’s how Huawei solved the countless engineering challenges that have prevented others from creating something this ambitious. The innovation extends beyond mere technical specifications. Huawei has reimagined the fundamental relationship between users and their computing devices, creating something that adapts to various workflows rather than forcing users to adapt to rigid form factors. Engineering Marvel: The Hinge The hinge deserves special attention. Stretching 285mm across the device, Huawei calls it the “world’s largest basalt water drop hinge.” This isn’t marketing hyperbole. The three-stage shaft with mortise and tenon structure delivers a 400% increase in hovering torque compared to standard designs. What does this mean for users? Exceptional stability at viewing angles between 30° and 150°, while maintaining smooth operation at shallow angles between 0-20 degrees. When unfolded, the MateBook measures a mere 7.3mm thick. For perspective, that’s thinner than many smartphones. Even when folded, it maintains a relatively svelte 14.9mm profile while weighing just 1.16kg. The exterior combines premium leather and metal elements, available in Black, Blue, and White colorways. The integrated kickstand on the rear panel adds another dimension of versatility. 
Position the device in landscape or portrait orientation at various angles for different use cases. Present to clients, watch content, sketch ideas, or type documents. The physical form adapts to your needs rather than forcing you to adapt to it. This level of engineering precision didn’t happen overnight. Huawei claims thousands of prototypes were tested before arriving at this final design, with particular attention paid to the durability of the folding mechanism. The company promises the hinge will maintain structural integrity through thousands of folding cycles. Display Technology But the true star is undoubtedly the display itself. The dual-layer LTPO OLED panel delivers an immersive visual experience with a 92% screen-to-body ratio. When fully expanded, you’re looking at an 18-inch canvas with 4:3 aspect ratio and 3.3K resolution. Fold it, and you have a more conventional 13-inch display with 3:2 aspect ratio. This isn’t just any OLED panel. Huawei implemented the first commercial laptop application of LTPOtechnology, reducing power consumption by 30% while enabling adaptive refresh rates. The 2,000,000:1 contrast ratio ensures deep blacks and vibrant colors across the P3 wide color gamut, while peak brightness reaches an impressive 1600 nits. For those concerned about eye strain during extended use, the screen incorporates 1440Hz high-frequency PWM dimming and carries TÜV Rheinland Eye Comfort 3.0 certification. Color accuracy hasn’t been overlooked either. Huawei claims each display is factory calibrated to achieve a Delta E of less than 1, making it suitable for professional creative work. The anti-reflective coating helps maintain visibility even in challenging lighting conditions. Thermal Innovation The revolutionary design extends beyond the visible elements. Cooling such powerful components in an ultra-thin chassis required innovative solutions. Huawei engineered diamond aluminum dual fans and an ultra-thin antigravity vapor chamber heat sink. 
The copper-steel composite 3D vapor chamber and distributed component layout optimize thermal performance without excessive fan noise. Traditional cooling systems simply wouldn’t work in a device this thin. Huawei’s approach involves separating heat-generating components across the chassis to prevent hotspots. The vapor chamber technology efficiently transfers heat away from critical components to maintain performance during intensive tasks. Fan noise has been carefully tuned to remain below 28dB during typical usage scenarios. This makes the MateBook Fold Ultimate suitable for quiet environments like libraries and meeting rooms where traditional laptop fans might prove distracting. Performance and Connectivity Despite its slim profile, performance hasn’t been compromised. The MateBook Fold Ultimate comes equipped with 32GB of RAM and storage options of either 1TB or 2TB SSD. While Huawei hasn’t explicitly confirmed the processor in all materials, some sources indicate it uses their own Kirin X90 chipset, a fully Chinese-manufactured ARM processor. A 74.69Wh battery powers the device, with support for fast charging through the included 140W USB-C charger. Connectivity includes strategically placed USB-C ports, one on top and one on the side, along with dual-band Wi-Fi 6 and Bluetooth 5.2. The decision to position USB-C ports on different edges of the device shows thoughtful design consideration. This arrangement allows for convenient charging regardless of how the device is positioned or folded. The absence of legacy ports might disappoint some users, but reflects the forward-looking design philosophy behind the entire product. Audio-Visual Experience The audio experience matches the visual excellence with six speakers in total. Three 2W speakers work alongside three 1W speakers, enhanced by Huawei Sound technology. For video conferencing, an 8MP front-facing camera works alongside four microphones to ensure clear communication. 
Speaker placement has been carefully considered to maintain audio quality regardless of the device’s orientation. Whether used as a tablet, laptop, or in presentation mode, the sound remains clear and directional. The multi-microphone array uses AI-powered noise cancellation to isolate voices from background noise during calls. The camera quality represents a significant upgrade from typical laptop webcams. The 8MP sensor captures more detail than the standard 720p cameras found in most laptops, while the wide-angle lens ensures you stay in frame even when moving during calls. HarmonyOS 5: A New Computing Paradigm Perhaps the most intriguing aspect beyond the hardware is the software. The MateBook Fold Ultimate runs HarmonyOS 5, marking the first time this operating system appears on a Huawei laptop. This represents a significant departure from Windows, offering users a third major OS option alongside Windows and macOS. HarmonyOS 5 is designed specifically for this unique form factor. Intuitive gestures include three-finger swipes to move windows across screens and five-finger spreads to maximize applications. When positioned at a 90-degree angle like a traditional laptop, the bottom half can function as a virtual keyboard with customizable skins, adjustable key spacing, and haptic feedback through a linear motor. The operating system adapts intelligently to different usage scenarios. When folded, it automatically adjusts the interface for a more traditional laptop experience. When fully opened, it transforms into a tablet-like environment optimized for touch interaction. This contextual awareness extends to connected peripherals as well, with the interface changing based on whether the physical keyboard is attached. Input Options For those who prefer physical keys, Huawei includes an ultra-thin 5mm wireless keyboard weighing just 290g. 
This keyboard features 1.5mm key travel, lasts up to 24 days on a single charge, and magnetically attaches to the back of the device when not in use. The keyboard design deserves special mention. Despite its ultra-thin profile, Huawei has managed to deliver a surprisingly satisfying typing experience. The keys offer tactile feedback that rivals much thicker keyboards, while the full-size layout prevents the cramped feeling often associated with portable keyboards. Touch input has been optimized as well. The display supports 10-point multi-touch with pressure sensitivity, making it suitable for digital art and note-taking. Palm rejection technology works remarkably well, allowing users to rest their hand on the screen while writing or drawing without causing unwanted input. Versatility and Use Cases The versatility of the MateBook Fold Ultimate is perhaps its greatest strength. It transitions seamlessly between tablet mode, laptop configuration, and presentation setup. The built-in kickstand allows positioning at various angles in both portrait and landscape orientations. Creative professionals will appreciate the large canvas for digital art and design work. The 18-inch display provides ample space for complex projects, while the foldable nature means you can still take this capability on the road. Business users can leverage the presentation mode for client meetings, with the large screen eliminating the need for external displays in many scenarios. Students might find the combination of note-taking capabilities and full-size keyboard particularly appealing. The ability to fold the device partially creates a natural reading angle for digital textbooks, while the performance specifications handle research and productivity applications with ease. Market Position Priced at CNY 23,999for the 1TB model and CNY 26,999for the 2TB variant, the MateBook Fold Ultimate Design positions itself firmly in the premium market. 
It will initially launch in China on June 6, with international availability planned for later dates. While foldable laptops aren’t entirely new, Lenovo pioneered the concept years ago, Huawei’s implementation represents a significant leap forward. The larger screen, thinner profile, innovative hinge mechanism, and comprehensive ecosystem integration through HarmonyOS demonstrate what’s possible when design and engineering excellence converge. The pricing strategy places this device in competition with high-end laptops and creative workstations rather than mainstream consumer devices. Huawei is clearly targeting professionals and enthusiasts who value cutting-edge technology and are willing to invest in unique capabilities not found elsewhere. Future Implications The MateBook Fold Ultimate Design doesn’t just represent another iterative step in laptop evolution. It reimagines what portable computing can be. Whether this specific implementation becomes the new standard remains to be seen, but Huawei has undoubtedly expanded our understanding of what’s possible in mobile computing design. As with most breakthrough technologies, we can expect the concepts pioneered here to eventually trickle down to more affordable devices. The engineering solutions developed for this premium device will likely inform future products across various price points, potentially making foldable displays a common feature in laptops within the next few years. The introduction of HarmonyOS to the laptop form factor also signals Huawei’s ambitions beyond smartphones and tablets. Creating a cohesive ecosystem across all computing devices could position the company as a more comprehensive alternative to established players in the personal computing space.The post Huawei’s MateBook Fold Ultimate Design Redefines Mobile Computing with World’s First 18-inch Foldable Display first appeared on Yanko Design. #huaweis #matebook #fold #ultimate #design
    WWW.YANKODESIGN.COM
    Huawei’s MateBook Fold Ultimate Design Redefines Mobile Computing with World’s First 18-inch Foldable Display
    Huawei just shattered our expectations of what a laptop can be. The new MateBook Fold Ultimate Design doesn’t just push boundaries. It obliterates them. Designer: Huawei Unveiled on May 19, this groundbreaking device introduces the world’s first 18-inch foldable display in a laptop form factor. But calling it merely a laptop feels almost reductive. When unfolded, you’re looking at a stunning 18-inch canvas that somehow weighs less than many 13-inch ultrabooks. When folded, it transforms into a compact 13-inch device that slides effortlessly into a bag. What makes this design achievement particularly impressive isn’t just the folding display itself. It’s how Huawei solved the countless engineering challenges that have prevented others from creating something this ambitious. The innovation extends beyond mere technical specifications. Huawei has reimagined the fundamental relationship between users and their computing devices, creating something that adapts to various workflows rather than forcing users to adapt to rigid form factors. Engineering Marvel: The Hinge The hinge deserves special attention. Stretching 285mm across the device, Huawei calls it the “world’s largest basalt water drop hinge.” This isn’t marketing hyperbole. The three-stage shaft with mortise and tenon structure delivers a 400% increase in hovering torque compared to standard designs. What does this mean for users? Exceptional stability at viewing angles between 30° and 150°, while maintaining smooth operation at shallow angles between 0-20 degrees. When unfolded, the MateBook measures a mere 7.3mm thick. For perspective, that’s thinner than many smartphones. Even when folded, it maintains a relatively svelte 14.9mm profile while weighing just 1.16kg. The exterior combines premium leather and metal elements, available in Black, Blue, and White colorways. The integrated kickstand on the rear panel adds another dimension of versatility. 
    Position the device in landscape or portrait orientation at various angles for different use cases. Present to clients, watch content, sketch ideas, or type documents. The physical form adapts to your needs rather than forcing you to adapt to it.
    This level of engineering precision didn’t happen overnight. Huawei claims thousands of prototypes were tested before arriving at this final design, with particular attention paid to the durability of the folding mechanism. The company promises the hinge will maintain structural integrity through thousands of folding cycles.
    Display Technology
    But the true star is undoubtedly the display itself. The dual-layer LTPO OLED panel delivers an immersive visual experience with a 92% screen-to-body ratio. When fully expanded, you’re looking at an 18-inch canvas with a 4:3 aspect ratio and 3.3K resolution (3296 × 2472 pixels). Fold it, and you have a more conventional 13-inch display with a 3:2 aspect ratio (2472 × 1648 pixels).
    This isn’t just any OLED panel. Huawei implemented the first commercial laptop application of LTPO (Low-Temperature Polycrystalline Oxide) technology, reducing power consumption by 30% while enabling adaptive refresh rates. The 2,000,000:1 contrast ratio ensures deep blacks and vibrant colors across the P3 wide color gamut, while peak brightness reaches an impressive 1600 nits. For those concerned about eye strain during extended use, the screen incorporates 1440Hz high-frequency PWM dimming and carries TÜV Rheinland Eye Comfort 3.0 certification.
    Color accuracy hasn’t been overlooked either. Huawei claims each display is factory calibrated to achieve a Delta E of less than 1, making it suitable for professional creative work. The anti-reflective coating helps maintain visibility even in challenging lighting conditions.
    Thermal Innovation
    The revolutionary design extends beyond the visible elements. Cooling such powerful components in an ultra-thin chassis required innovative solutions.
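    Incidentally, the quoted display figures are internally consistent: folding halves the panel’s long edge, and the stated aspect ratios follow directly from the pixel counts. A quick sanity check (plain Python, using only the numbers quoted above):

```python
# Sanity-check the MateBook Fold's published resolutions.
from math import gcd

unfolded = (3296, 2472)  # 18-inch panel, claimed 4:3
folded = (2472, 1648)    # 13-inch panel, claimed 3:2

def aspect(w, h):
    # Reduce a pixel resolution to its simplest aspect ratio.
    g = gcd(w, h)
    return (w // g, h // g)

# Folding halves the long edge (3296 / 2 = 1648); the former short
# edge (2472) becomes the folded screen's long edge.
assert unfolded[0] // 2 == folded[1]
assert unfolded[1] == folded[0]

print(aspect(*unfolded), aspect(*folded))  # (4, 3) (3, 2)
```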
    Huawei engineered diamond aluminum dual fans and an ultra-thin antigravity vapor chamber heat sink. The copper-steel composite 3D vapor chamber and distributed component layout optimize thermal performance without excessive fan noise. Traditional cooling systems simply wouldn’t work in a device this thin, so Huawei’s approach separates heat-generating components across the chassis to prevent hotspots, while the vapor chamber efficiently transfers heat away from critical components to maintain performance during intensive tasks. Fan noise has been carefully tuned to remain below 28dB during typical usage scenarios, making the MateBook Fold Ultimate suitable for quiet environments like libraries and meeting rooms where traditional laptop fans might prove distracting.
    Performance and Connectivity
    Despite its slim profile, performance hasn’t been compromised. The MateBook Fold Ultimate comes equipped with 32GB of RAM and either a 1TB or 2TB SSD. While Huawei hasn’t explicitly confirmed the processor in all materials, some sources indicate it uses the company’s own Kirin X90 chipset, a fully Chinese-manufactured ARM processor. A 74.69Wh battery powers the device, with support for fast charging through the included 140W USB-C charger. Connectivity includes strategically placed USB-C ports, one on top and one on the side, along with dual-band Wi-Fi 6 and Bluetooth 5.2.
    The decision to position USB-C ports on different edges of the device shows thoughtful design consideration: this arrangement allows for convenient charging regardless of how the device is positioned or folded. The absence of legacy ports might disappoint some users, but it reflects the forward-looking design philosophy behind the entire product.
    Audio-Visual Experience
    The audio experience matches the visual excellence with six speakers in total. Three 2W speakers work alongside three 1W speakers, enhanced by Huawei Sound technology.
    For video conferencing, an 8MP front-facing camera works alongside four microphones to ensure clear communication. Speaker placement has been carefully considered to maintain audio quality regardless of the device’s orientation. Whether used as a tablet, laptop, or in presentation mode, the sound remains clear and directional. The multi-microphone array uses AI-powered noise cancellation to isolate voices from background noise during calls.
    The camera quality represents a significant upgrade from typical laptop webcams. The 8MP sensor captures more detail than the standard 720p cameras found in most laptops, while the wide-angle lens ensures you stay in frame even when moving during calls.
    HarmonyOS 5: A New Computing Paradigm
    Perhaps the most intriguing aspect beyond the hardware is the software. The MateBook Fold Ultimate runs HarmonyOS 5, marking the first time this operating system appears on a Huawei laptop. This represents a significant departure from Windows, offering users a third major OS option alongside Windows and macOS.
    HarmonyOS 5 is designed specifically for this unique form factor. Intuitive gestures include three-finger swipes to move windows across screens and five-finger spreads to maximize applications. When positioned at a 90-degree angle like a traditional laptop, the bottom half can function as a virtual keyboard with customizable skins, adjustable key spacing, and haptic feedback through a linear motor.
    The operating system adapts intelligently to different usage scenarios. When folded, it automatically adjusts the interface for a more traditional laptop experience. When fully opened, it transforms into a tablet-like environment optimized for touch interaction. This contextual awareness extends to connected peripherals as well, with the interface changing based on whether the physical keyboard is attached.
    Input Options
    For those who prefer physical keys, Huawei includes an ultra-thin 5mm wireless keyboard weighing just 290g.
    This keyboard features 1.5mm key travel, lasts up to 24 days on a single charge, and magnetically attaches to the back of the device when not in use. The keyboard design deserves special mention. Despite its ultra-thin profile, Huawei has managed to deliver a surprisingly satisfying typing experience. The keys offer tactile feedback that rivals much thicker keyboards, while the full-size layout prevents the cramped feeling often associated with portable keyboards.
    Touch input has been optimized as well. The display supports 10-point multi-touch with pressure sensitivity, making it suitable for digital art and note-taking. Palm rejection technology works remarkably well, allowing users to rest their hand on the screen while writing or drawing without causing unwanted input.
    Versatility and Use Cases
    The versatility of the MateBook Fold Ultimate is perhaps its greatest strength. It transitions seamlessly between tablet mode, laptop configuration, and presentation setup. The built-in kickstand allows positioning at various angles in both portrait and landscape orientations.
    Creative professionals will appreciate the large canvas for digital art and design work. The 18-inch display provides ample space for complex projects, while the foldable nature means you can still take this capability on the road. Business users can leverage the presentation mode for client meetings, with the large screen eliminating the need for external displays in many scenarios. Students might find the combination of note-taking capabilities and full-size keyboard particularly appealing. The ability to fold the device partially creates a natural reading angle for digital textbooks, while the performance specifications handle research and productivity applications with ease.
    Market Position
    Priced at CNY 23,999 (approximately $3,300) for the 1TB model and CNY 26,999 (roughly $3,700) for the 2TB variant, the MateBook Fold Ultimate Design positions itself firmly in the premium market.
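    The approximate USD figures quoted above are easy to reproduce. This sketch assumes an exchange rate of about 7.25 CNY per USD (an assumption, not from the article; rates vary day to day), which yields numbers close to the article’s rounded estimates:

```python
# Rough check of the quoted CNY -> USD conversions.
CNY_PER_USD = 7.25  # assumed rate; not stated in the article

for cny in (23_999, 26_999):
    usd = cny / CNY_PER_USD
    print(f"CNY {cny:,} ≈ ${usd:,.0f}")
```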
    It will initially launch in China on June 6, with international availability planned for later dates. While foldable laptops aren’t entirely new (Lenovo pioneered the concept years ago), Huawei’s implementation represents a significant leap forward. The larger screen, thinner profile, innovative hinge mechanism, and comprehensive ecosystem integration through HarmonyOS demonstrate what’s possible when design and engineering excellence converge.
    The pricing strategy places this device in competition with high-end laptops and creative workstations rather than mainstream consumer devices. Huawei is clearly targeting professionals and enthusiasts who value cutting-edge technology and are willing to invest in unique capabilities not found elsewhere.
    Future Implications
    The MateBook Fold Ultimate Design doesn’t just represent another iterative step in laptop evolution. It reimagines what portable computing can be. Whether this specific implementation becomes the new standard remains to be seen, but Huawei has undoubtedly expanded our understanding of what’s possible in mobile computing design.
    As with most breakthrough technologies, we can expect the concepts pioneered here to eventually trickle down to more affordable devices. The engineering solutions developed for this premium device will likely inform future products across various price points, potentially making foldable displays a common feature in laptops within the next few years.
    The introduction of HarmonyOS to the laptop form factor also signals Huawei’s ambitions beyond smartphones and tablets. Creating a cohesive ecosystem across all computing devices could position the company as a more comprehensive alternative to established players in the personal computing space.
    The post Huawei’s MateBook Fold Ultimate Design Redefines Mobile Computing with World’s First 18-inch Foldable Display first appeared on Yanko Design.
  • Talk to Me: NVIDIA and Partners Boost People Skills and Business Smarts for AI Agents

    Call it the ultimate proving ground. Collaborating with teammates in the modern workplace requires fast, fluid thinking. Providing insights quickly, while juggling webcams and office messaging channels, is a startlingly good test, and enterprise AI is about to pass it — just in time to provide assistance to busy knowledge workers.
    To support enterprises in boosting productivity with AI teammates, NVIDIA today introduced a new NVIDIA Enterprise AI Factory validated design at COMPUTEX. IT teams deploying and scaling AI agents can use the design to build accelerated infrastructure and easily integrate with platforms and tools from NVIDIA software partners.
    NVIDIA also unveiled new NVIDIA AI Blueprints to aid developers building smart AI teammates. Using the new blueprints, developers can enhance employee productivity through adaptive avatars that understand natural communication and have direct access to enterprise data.
    Blueprints for Engaging, Insightful AI Agents
    Enterprises can use NVIDIA’s latest AI Blueprints to create agents that align with their business objectives. Using the Tokkio NVIDIA AI Blueprint, developers can create interactive digital humans that can respond to emotional and contextual cues, while the AI-Q blueprint enables queries of many data sources to infuse AI agents with the company’s knowledge and gives them intelligent reasoning capabilities.
    Building these intelligent AI agents is a full-stack challenge. These blueprints are designed to run on NVIDIA’s accelerated computing infrastructure — including data centers built with the universal NVIDIA RTX PRO 6000 Server Edition GPU, which is part of NVIDIA’s vision for AI factories as complete systems for creating and putting AI to work.
    The Tokkio blueprint simplifies building interactive AI agent avatars for more natural and humanlike interactions.
    These AI agents are designed for intelligence. They integrate with foundational blueprints including the AI-Q NVIDIA Blueprint, part of the NVIDIA AI Data Platform, which uses retrieval-augmented generation and NVIDIA NeMo Retriever microservices to access enterprise data.

    AI Agents Boost People’s Productivity
    Customers around the world are already using these AI agent solutions.
    At the COACH Play store on Cat Street in Harajuku, Tokyo, imma provides an interactive in-store experience and gives personalized styling advice through natural, real-time conversation.
    Marking COACH’s debut in digital humans and AI-driven retail, the initiative merges cutting-edge technology with fashion to create an immersive and engaging customer journey. Developed by Aww Inc. and powered by NVIDIA ACE, the underlying technology that makes up the Tokkio blueprint, imma delivers lifelike interactions and tailored style suggestions.
    The experience allows for dynamic, unscripted conversations designed to connect with visitors on a personal level, highlighting COACH’s core values of courage and self-expression.
    “Through this groundbreaking innovation in the fashion retail space, customers can now engage in real-time, free-flowing conversations with our iconic virtual human, imma — an AI-powered stylist — right inside the store in the heart of Harajuku,” said Yumi An King, executive director of Aww Inc. “It’s been inspiring to see visitors enjoy personalized styling advice and build a sense of connection through natural conversation. We’re excited to bring this vision to life with NVIDIA and continue redefining what’s possible at the intersection of AI and fashion.”

    Watch how Aww Inc. is leveraging the latest Tokkio NVIDIA AI Blueprint in its AI-powered virtual human stylist, imma, to connect with shoppers through natural conversation and provide personalized styling advice. 
    Royal Bank of Canada developed Jessica, an AI agent avatar that assists employees in handling reports of fraud. With Jessica’s help, bank employees can access the most up-to-date information so they can handle fraud reports faster and more accurately, enhancing client service.
    Ubitus and the Mackay Memorial Hospital, located in Taipei, are teaming up to make hospital visits easier and friendlier with the help of AI-powered digital humans. These lifelike avatars are created using advanced 8K facial scanning and brought to life by Ubitus’ AI model integrated with NVIDIA ACE technologies, including NVIDIA Audio2Face 3D for expressions and NVIDIA Riva for speech.
    Deployed on interactive touchscreens, these digital humans offer hospital navigation, health education and registration support — reducing the burden on frontline staff. They also provide emotional support in pediatric care, aimed at reducing anxiety during wait times.

    Ubitus and the Mackay Memorial Hospital are making hospital visits easier and friendlier with the help of NVIDIA AI-powered digital humans.
    Cincinnati Children’s Hospital is exploring the potential of digital avatar technology to enhance the pediatric patient experience. As part of its ongoing innovation efforts, the hospital is evaluating platforms such as NVIDIA’s Digital Human Blueprint to inform the early design of “Care Companions” — interactive, friendly avatars that could help young patients better understand their healthcare journey.
    “Children can have a lot of questions about their experiences in the hospital, and often respond more to a friendly avatar, like stylized humanoids, animals or robots, that speaks at their level of understanding,” said Dr. Ryan Moore, chief of emerging technologies at Cincinnati Children’s Hospital. “Through our Care Companions built with NVIDIA AI, gamified learning, voice interaction and familiar digital experiences, Cincinnati Children’s Hospital aims to improve understanding, reduce anxiety and support lifelong health for young patients.”
    This early-stage exploration is part of the hospital’s broader initiative to evaluate new and emerging technologies that could one day enhance child-centered care.
    Software Platforms Support Agents on AI Factory Infrastructure 
    AI agents are one of the many workloads driving enterprises to reimagine their data centers as AI factories built for modern applications. Using the new NVIDIA Enterprise AI Factory validated design, enterprises can build data centers that provide universal acceleration for agentic AI, as well as design, engineering and business operations.
    The Enterprise AI Factory validated design features support for software tools and platforms from NVIDIA partners, making it easier to build and run generative and agent-based AI applications.
    Developers deploying AI agents on their AI factory infrastructure can tap into partner platforms such as Dataiku, DataRobot, Dynatrace and JFrog to build, orchestrate, operationalize and scale AI workflows. The validated design supports frameworks from CrewAI, as well as vector databases from DataStax and Elastic, to help agents store, search and retrieve data.
    With tools from partners including Arize AI, Galileo, SuperAnnotate, Unstructured and Weights & Biases, developers can conduct data labeling, synthetic data generation, model evaluation and experiment tracking. Orchestration and deployment partners including Canonical, Nutanix and Red Hat support seamless scaling and management of AI agent workloads across complex enterprise environments. Enterprises can secure their AI factories with software from safety and security partners including ActiveFence, CrowdStrike, Fiddler, Securiti and Trend Micro.
    The NVIDIA Enterprise AI Factory validated design and latest AI Blueprints empower businesses to build smart, adaptable AI agents that enhance productivity, foster collaboration and keep pace with the demands of the modern workplace.
    See notice regarding software product information.
    #talk #nvidia #partners #boost #people
    Talk to Me: NVIDIA and Partners Boost People Skills and Business Smarts for AI Agents
    Call it the ultimate proving ground. Collaborating with teammates in the modern workplace requires fast, fluid thinking. Providing insights quickly, while juggling webcams and office messaging channels, is a startlingly good test, and enterprise AI is about to pass it — just in time to provide assistance to busy knowledge workers. To support enterprises in boosting productivity with AI teammates, NVIDIA today introduced a new NVIDIA Enterprise AI Factory validated design at COMPUTEX. IT teams deploying and scaling AI agents can use the design to build accelerated infrastructure and easily integrate with platforms and tools from NVIDIA software partners. NVIDIA also unveiled new NVIDIA AI Blueprints to aid developers building smart AI teammates. Using the new blueprints, developers can enhance employee productivity through adaptive avatars that understand natural communication and have direct access to enterprise data. Blueprints for Engaging, Insightful AI Agents Enterprises can use NVIDIA’s latest AI Blueprints to create agents that align with their business objectives. Using the Tokkio NVIDIA AI Blueprint, developers can create interactive digital humans that can respond to emotional and contextual cues, while the AI-Q blueprint enables queries of many data sources to infuse AI agents with the company’s knowledge and gives them intelligent reasoning capabilities. Building these intelligent AI agents is a full-stack challenge. These blueprints are designed to run on NVIDIA’s accelerated computing infrastructure — including data centers built with the universal NVIDIA RTX PRO 6000 Server Edition GPU, which is part of NVIDIA’s vision for AI factories as complete systems for creating and putting AI to work. The Tokkio blueprint simplifies building interactive AI agent avatars for more natural and humanlike interactions. These AI agents are designed for intelligence. 
They integrate with foundational blueprints including the AI-Q NVIDIA Blueprint, part of the NVIDIA AI Data Platform, which uses retrieval-augmented generation and NVIDIA NeMo Retriever microservices to access enterprise data. AI Agents Boost People’s Productivity Customers around the world are already using these AI agent solutions. At the COACH Play store on Cat Street in Harajuku, Tokyo, imma provides an interactive in-store experience and gives personalized styling advice through natural, real-time conversation. Marking COACH’s debut in digital humans and AI-driven retail, the initiative merges cutting-edge technology with fashion to create an immersive and engaging customer journey. Developed by Aww Inc. and powered by NVIDIA ACE, the underlying technology that makes up the Tokkio blueprint, imma delivers lifelike interactions and tailored style suggestions. The experience allows for dynamic, unscripted conversations designed to connect with visitors on a personal level, highlighting COACH’s core values of courage and self-expression. “Through this groundbreaking innovation in the fashion retail space, customers can now engage in real-time, free-flowing conversations with our iconic virtual human, imma — an AI-powered stylist — right inside the store in the heart of Harajuku,” said Yumi An King, executive director of Aww Inc. “It’s been inspiring to see visitors enjoy personalized styling advice and build a sense of connection through natural conversation. We’re excited to bring this vision to life with NVIDIA and continue redefining what’s possible at the intersection of AI and fashion.” Watch how Aww Inc. is leveraging the latest Tokkio NVIDIA AI Blueprint in its AI-powered virtual human stylist, imma, to connect with shoppers through natural conversation and provide personalized styling advice.  Royal Bank of Canada developed Jessica, an AI agent avatar that assists employees in handling reports of fraud. 
With Jessica’s help, bank employees can access the most up-to-date information so they can handle fraud reports faster and more accurately, enhancing client service. Ubitus and the Mackay Memorial Hospital, located in Taipei, are teaming up to make hospital visits easier and friendlier with the help of AI-powered digital humans. These lifelike avatars are created using advanced 8K facial scanning and brought to life by Ubitus’ AI model integrated with NVIDIA ACE technologies, including NVIDIA Audio2Face 3D for expressions and NVIDIA Riva for speech. Deployed on interactive touchscreens, these digital humans offer hospital navigation, health education and registration support — reducing the burden on frontline staff. They also provide emotional support in pediatric care, aimed at reducing anxiety during wait times. Ubitus and the Mackay Memorial Hospital are making hospital visits easier and friendlier with the help of NVIDIA AI-powered digital humans. Cincinnati Children’s Hospital is exploring the potential of digital avatar technology to enhance the pediatric patient experience. As part of its ongoing innovation efforts, the hospital is evaluating platforms such as NVIDIA’s Digital Human Blueprint to inform the early design of “Care Companions” — interactive, friendly avatars that could help young patients better understand their healthcare journey. “Children can have a lot of questions about their experiences in the hospital, and often respond more to a friendly avatar, like stylized humanoids, animals or robots, that speaks at their level of understanding,” said Dr. Ryan Moore, chief of emerging technologies at Cincinnati Children’s Hospital. 
“Through our Care Companions built with NVIDIA AI, gamified learning, voice interaction and familiar digital experiences, Cincinnati Children’s Hospital aims to improve understanding, reduce anxiety and support lifelong health for young patients.” This early-stage exploration is part of the hospital’s broader initiative to evaluate new and emerging technologies that could one day enhance child-centered care. Software Platforms Support Agents on AI Factory Infrastructure  AI agents are one of the many workloads driving enterprises to reimagine their data centers as AI factories built for modern applications. Using the new NVIDIA Enterprise AI Factory validated design, enterprises can build data centers that provide universal acceleration for agentic AI, as well as design, engineering and business operations. The Enterprise AI Factory validated design features support for software tools and platforms from NVIDIA partners, making it easier to build and run generative and agent-based AI applications. Developers deploying AI agents on their AI factory infrastructure can tap into partner platforms such as Dataiku, DataRobot, Dynatrace and JFrog to build, orchestrate, operationalize and scale AI workflows. The validated design supports frameworks from CrewAI, as well as vector databases from DataStax and Elastic, to help agents store, search and retrieve data. With tools from partners including Arize AI, Galileo, SuperAnnotate, Unstructured and Weights & Biases, developers can conduct data labeling, synthetic data generation, model evaluation and experiment tracking. Orchestration and deployment partners including Canonical, Nutanix and Red Hat support seamless scaling and management of AI agent workloads across complex enterprise environments. Enterprises can secure their AI factories with software from safety and security partners including ActiveFence, CrowdStrike, Fiddler, Securiti and Trend Micro. 
The NVIDIA Enterprise AI Factory validated design and latest AI Blueprints empower businesses to build smart, adaptable AI agents that enhance productivity, foster collaboration and keep pace with the demands of the modern workplace. See notice regarding software product information. #talk #nvidia #partners #boost #people
    BLOGS.NVIDIA.COM
    Talk to Me: NVIDIA and Partners Boost People Skills and Business Smarts for AI Agents
    Call it the ultimate proving ground. Collaborating with teammates in the modern workplace requires fast, fluid thinking. Providing insights quickly, while juggling webcams and office messaging channels, is a startlingly good test, and enterprise AI is about to pass it — just in time to provide assistance to busy knowledge workers. To support enterprises in boosting productivity with AI teammates, NVIDIA today introduced a new NVIDIA Enterprise AI Factory validated design at COMPUTEX. IT teams deploying and scaling AI agents can use the design to build accelerated infrastructure and easily integrate with platforms and tools from NVIDIA software partners. NVIDIA also unveiled new NVIDIA AI Blueprints to aid developers building smart AI teammates. Using the new blueprints, developers can enhance employee productivity through adaptive avatars that understand natural communication and have direct access to enterprise data. Blueprints for Engaging, Insightful AI Agents Enterprises can use NVIDIA’s latest AI Blueprints to create agents that align with their business objectives. Using the Tokkio NVIDIA AI Blueprint, developers can create interactive digital humans that can respond to emotional and contextual cues, while the AI-Q blueprint enables queries of many data sources to infuse AI agents with the company’s knowledge and gives them intelligent reasoning capabilities. Building these intelligent AI agents is a full-stack challenge. These blueprints are designed to run on NVIDIA’s accelerated computing infrastructure — including data centers built with the universal NVIDIA RTX PRO 6000 Server Edition GPU, which is part of NVIDIA’s vision for AI factories as complete systems for creating and putting AI to work. The Tokkio blueprint simplifies building interactive AI agent avatars for more natural and humanlike interactions. These AI agents are designed for intelligence. 
They integrate with foundational blueprints including the AI-Q NVIDIA Blueprint, part of the NVIDIA AI Data Platform, which uses retrieval-augmented generation and NVIDIA NeMo Retriever microservices to access enterprise data. AI Agents Boost People’s Productivity Customers around the world are already using these AI agent solutions. At the COACH Play store on Cat Street in Harajuku, Tokyo, imma provides an interactive in-store experience and gives personalized styling advice through natural, real-time conversation. Marking COACH’s debut in digital humans and AI-driven retail, the initiative merges cutting-edge technology with fashion to create an immersive and engaging customer journey. Developed by Aww Inc. and powered by NVIDIA ACE, the underlying technology that makes up the Tokkio blueprint, imma delivers lifelike interactions and tailored style suggestions. The experience allows for dynamic, unscripted conversations designed to connect with visitors on a personal level, highlighting COACH’s core values of courage and self-expression. “Through this groundbreaking innovation in the fashion retail space, customers can now engage in real-time, free-flowing conversations with our iconic virtual human, imma — an AI-powered stylist — right inside the store in the heart of Harajuku,” said Yumi An King, executive director of Aww Inc. “It’s been inspiring to see visitors enjoy personalized styling advice and build a sense of connection through natural conversation. We’re excited to bring this vision to life with NVIDIA and continue redefining what’s possible at the intersection of AI and fashion.” Watch how Aww Inc. is leveraging the latest Tokkio NVIDIA AI Blueprint in its AI-powered virtual human stylist, imma, to connect with shoppers through natural conversation and provide personalized styling advice.  Royal Bank of Canada developed Jessica, an AI agent avatar that assists employees in handling reports of fraud. 
With Jessica’s help, bank employees can access the most up-to-date information so they can handle fraud reports faster and more accurately, enhancing client service. Ubitus and the Mackay Memorial Hospital, located in Taipei, are teaming up to make hospital visits easier and friendlier with the help of AI-powered digital humans. These lifelike avatars are created using advanced 8K facial scanning and brought to life by Ubitus’ AI model integrated with NVIDIA ACE technologies, including NVIDIA Audio2Face 3D for expressions and NVIDIA Riva for speech. Deployed on interactive touchscreens, these digital humans offer hospital navigation, health education and registration support — reducing the burden on frontline staff. They also provide emotional support in pediatric care, aimed at reducing anxiety during wait times. Ubitus and the Mackay Memorial Hospital are making hospital visits easier and friendlier with the help of NVIDIA AI-powered digital humans. Cincinnati Children’s Hospital is exploring the potential of digital avatar technology to enhance the pediatric patient experience. As part of its ongoing innovation efforts, the hospital is evaluating platforms such as NVIDIA’s Digital Human Blueprint to inform the early design of “Care Companions” — interactive, friendly avatars that could help young patients better understand their healthcare journey. “Children can have a lot of questions about their experiences in the hospital, and often respond more to a friendly avatar, like stylized humanoids, animals or robots, that speaks at their level of understanding,” said Dr. Ryan Moore, chief of emerging technologies at Cincinnati Children’s Hospital. 
“Through our Care Companions built with NVIDIA AI, gamified learning, voice interaction and familiar digital experiences, Cincinnati Children’s Hospital aims to improve understanding, reduce anxiety and support lifelong health for young patients.” This early-stage exploration is part of the hospital’s broader initiative to evaluate new and emerging technologies that could one day enhance child-centered care. Software Platforms Support Agents on AI Factory Infrastructure  AI agents are one of the many workloads driving enterprises to reimagine their data centers as AI factories built for modern applications. Using the new NVIDIA Enterprise AI Factory validated design, enterprises can build data centers that provide universal acceleration for agentic AI, as well as design, engineering and business operations. The Enterprise AI Factory validated design features support for software tools and platforms from NVIDIA partners, making it easier to build and run generative and agent-based AI applications. Developers deploying AI agents on their AI factory infrastructure can tap into partner platforms such as Dataiku, DataRobot, Dynatrace and JFrog to build, orchestrate, operationalize and scale AI workflows. The validated design supports frameworks from CrewAI, as well as vector databases from DataStax and Elastic, to help agents store, search and retrieve data. With tools from partners including Arize AI, Galileo, SuperAnnotate, Unstructured and Weights & Biases, developers can conduct data labeling, synthetic data generation, model evaluation and experiment tracking. Orchestration and deployment partners including Canonical, Nutanix and Red Hat support seamless scaling and management of AI agent workloads across complex enterprise environments. Enterprises can secure their AI factories with software from safety and security partners including ActiveFence, CrowdStrike, Fiddler, Securiti and Trend Micro. 
The NVIDIA Enterprise AI Factory validated design and latest AI Blueprints empower businesses to build smart, adaptable AI agents that enhance productivity, foster collaboration and keep pace with the demands of the modern workplace.
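As a rough illustration of the store/search/retrieve step that vector databases handle for AI agents, here is a minimal stdlib-only sketch. The data, function names and embeddings are all invented for clarity; a production deployment would use a real vector database such as those from DataStax or Elastic, not this toy:

```python
# Toy sketch of agent retrieval from a vector store (hypothetical names;
# real systems use dedicated vector databases and learned embeddings).
import math

def cosine(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query, store, k=1):
    """Return the k stored documents whose embeddings best match the query."""
    ranked = sorted(store.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# Two toy documents with made-up 2D embeddings.
store = {"fraud policy": [1.0, 0.0], "cafeteria menu": [0.0, 1.0]}
top_k([0.9, 0.1], store)   # → ["fraud policy"]
```

The agent embeds its question, ranks stored documents by similarity, and feeds the best matches back into its reasoning loop; the frameworks and databases named above industrialize exactly this flow.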
  • Google Introduces Beam, an AI-Driven Communication Platform That Turns 2D Video Into 3D Experiences

    Google is rebranding its Project Starline and turning it into a new 3D video communication platform, the company announced at its annual I/O developer conference on Tuesday. Dubbed Beam, the platform enables users to connect with each other in a more intuitive manner by turning 2D video streams into 3D experiences. It leverages the Google Cloud platform along with the company's AI prowess to deliver enterprise-grade reliability and compatibility with existing workflows. Google says Beam may receive support for real-time speech translation and will be available in the market starting with HP devices later this year.

    Google Beam Features

    Google detailed its new Beam platform in a blog post. Beam uses an array of webcams to capture the user from different angles; AI then merges the video streams and renders them on a 3D light field display. Google says the platform also has head tracking, claimed to be accurate down to the millimetre at 60 frames per second (fps).

    Google Beam takes advantage of an AI volumetric video model to turn standard 2D video streams into realistic experiences that appear in 3D from any perspective. Together with the light field display, this creates a sense of dimensionality and depth, enabling you to make eye contact and read subtle cues.

    As per the company, the new platform replaces Project Starline, which was initially announced at Google I/O in 2021 with the aim of providing a video communication platform capable of showing users in 3D at natural scale, with eye contact and spatially accurate audio. While that project did not completely materialise, it was repurposed to create what is now known as Google Beam.

    Photo Credit: Google

    For enhanced communication, Google is exploring plans to bring real-time speech translation to Beam. The capability will also be available in Google Meet starting today.
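    The capture, fuse and render stages described above can be sketched in miniature. This is a hypothetical illustration of the data flow only, not Google's implementation: the function names are invented, and the per-pixel averaging stands in for what is really a learned volumetric video model.

```python
# Toy sketch of a Beam-style pipeline (invented names; not Google's API).
# Several cameras capture the user from different angles; a model fuses
# the views into one representation; the display then renders that
# representation view-dependently, based on the tracked head position.

def fuse_views(frames):
    """Fuse per-camera frames (lists of pixel intensities) by averaging.
    A real volumetric model is far more sophisticated; this only shows
    the shape of the data flow: many 2D streams in, one fused field out."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

def render_for_head(fused, head_x):
    """Shift the fused field based on horizontal head position, mimicking
    the view-dependent rendering of a light field display."""
    offset = int(head_x) % len(fused)
    return fused[offset:] + fused[:offset]

frames = [[1.0, 2.0, 3.0], [3.0, 4.0, 5.0]]   # two toy camera views
fused = fuse_views(frames)                     # [2.0, 3.0, 4.0]
view = render_for_head(fused, head_x=1)        # perspective for this viewer
```

    In the real system this loop would run per frame at 60 fps, with millimetre-accurate head tracking driving the render step.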

    Google says it is working in collaboration with HP to introduce the first Google Beam devices in the market with select customers later this year. Further, the first Google Beam products from the original equipment manufacturer (OEM) will be made available via InfoComm 2025, which takes place in June.


    Shaurya Tomer

    Shaurya Tomer is a Sub Editor at Gadgets 360 with 2 years of experience across a diverse spectrum of topics. With a particular focus on smartphones, gadgets and the ever-evolving landscape of artificial intelligence, he often likes to explore the industry's intricacies and innovations – whether dissecting the latest smartphone release or exploring the ethical implications of AI advancements. In his free time, he often embarks on impromptu road trips to unwind and recharge.

  • The best Apple tech that’s gone forever

    Macworld

    Throughout its history, Apple has introduced many industry-changing technologies, some of which are still present in today’s devices, like MagSafe on MacBooks. However, some of the technologies introduced by Apple have been discontinued over time for different reasons.

    While most of these decisions are understandable, some of these technologies were ahead of their time, and I really miss them today. Let’s look back at some of Apple’s best discontinued ideas that deserve to come back.

    3D Touch

    Although 3D Touch was first introduced with the original Apple Watch as “Force Touch,” Apple later added it to the iPhone. I remember seeing the iPhone 6s introduction video in 2015 with Jony Ive explaining how 3D Touch worked, and it sounded impressive. Indeed, it was.

    Essentially, Apple put pressure sensors under the iPhone’s screen so that it could distinguish a soft press from a hard press with extreme precision. 3D Touch could be used in many ways, such as pressing harder on an icon to reveal more options or gently pressing a link on a webpage to see a preview of it. Game developers could also adopt 3D Touch for custom controls.
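    The soft-versus-hard press distinction can be illustrated with a toy handler. This is a hypothetical sketch, not Apple’s actual API or implementation; the threshold values and names are invented:

```python
# Hypothetical sketch of 3D Touch-style pressure handling (not Apple's API).
# A normalized pressure reading (0.0-1.0) is mapped to a gesture tier,
# roughly like the "peek" (preview) and "pop" (commit) interactions.

def classify_press(pressure, peek_threshold=0.3, pop_threshold=0.7):
    """Return the gesture tier for a pressure reading; thresholds invented."""
    if pressure >= pop_threshold:
        return "pop"    # hard press: commit, e.g. open the full preview
    if pressure >= peek_threshold:
        return "peek"   # medium press: show a lightweight preview
    return "tap"        # light touch: behave like a normal tap

classify_press(0.1)   # "tap"
classify_press(0.5)   # "peek"
classify_press(0.9)   # "pop"
```

    The hard part Apple solved was not this mapping but the hardware underneath it: sensing pressure continuously and precisely enough, across the whole screen, for tiers like these to feel reliable.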

    You can still perform the 3D Touch action on today’s iPhones, but the result is different. Filipe Esposito

    The technology blew my mind, and I loved how precise it was. Not only that, but 3D Touch worked very well with the iPhone’s haptic feedback. It was so satisfying. But in practice, only a few apps added support for 3D Touch, and it was a very complex and expensive technology to build.

    Apple ended up discontinuing 3D Touch, starting with the iPhone XR, and replaced some of the actions with a simple long press on the screen. I still dream of future iPhones having 3D Touch back. Luckily, the trackpads on MacBooks still have Force Touch, so that’s something.

    AirPort

    AirPort was Apple’s lineup of wireless routers. First introduced in 1999, AirPort devices always had a futuristic design and made it easy for users to plug in an internet cable and have a Wi-Fi connection at home.

    While the first version looked like a spaceship, one of the most interesting versions of the AirPort was the model that looked like a giant MacBook charger that had an ethernet port on it. There was also the sleek and modern AirPort Extreme, which was essentially a mini-tower made of polished white plastic. That thing was beautiful.

    The AirPort line once had a tower form factor and a design that was like a MacBook power adapter. Apple

    Apple’s AirPorts gained many cool features over time. For example, you could plug a USB stick, printer, or external hard drive into it, and all your Apple devices would have wireless access to it. It could also turn wired sound systems into AirPlay speakers.

    AirPort also resulted in another product called Time Capsule, which had an internal hard drive to wirelessly back up your Mac.

    Apple stopped selling AirPorts in 2018, and while rumors suggest that Apple has no plans to introduce another Wi-Fi router anytime soon, the company has reportedly been exploring the idea of making Apple TVs and HomePods work as Wi-Fi signal extenders. I really hope that’s true.

    iPod

    A lot has changed since Apple introduced the first iPod in 2001. iPod wasn’t just a good product, it also made anyone using it look cool. I remember upgrading from a Discman to an iPod, and I felt like I was in the future. “A thousand songs in your pocket,” as Apple’s slogan went.

    The third-generation iPod nano. Filipe Esposito

    There have been many different versions of the iPod over the years, including the iPod touch, but the most iconic ones will always be the iPods with a Click Wheel. There was a unique and satisfying feeling in turning your finger on that wheel to navigate the interface. It was unlike anything else at the time.

    And although the product was once a hit, it ended up losing the battle to smartphones. Apple knew this, which is why the company has always promoted the iPhone as “the best iPod ever.” Suddenly, our phones could also work as great iPods, so having a dedicated music player no longer made sense for most people.

    Apple stopped selling the iPod shuffle and iPod nano in 2017, while the iPod touch remained in the lineup until 2022. The company said at the time that the iPod still lives on in its other products, and that’s true. Thanks to the iPod, we now have the iPhone, AirPods, and the HomePod.

    While some people are nostalgic about the iPod and want it back, I really believe that it served its purpose and that there are better alternatives now, like using an Apple Watch with AirPods if you want to listen to music without having your phone nearby.

    Cover Flow

    How could I forget Cover Flow? The cool iPod interface that lets users explore their music library by its art covers? Interestingly, Cover Flow was created by a third-party developer, and the idea was so good that Apple acquired it to implement it in iTunes. The feature was eventually added to the iPod, iPhone, and even Finder on the Mac to browse files with large previews.

    I remember using the original iPod touch for the first time, rotating the screen with the Music app open, and seeing the Cover Flow interface. That was probably the moment I fell in love with that product.

    Cover Flow on an iPod touch. Filipe Esposito

    Cover Flow went away with iOS 7, when Apple completely redesigned the iPhone’s operating system in favor of a flat-design interface with less skeuomorphism. But Apple should bring Cover Flow back. It would be a good feature to reintroduce with iOS 19, as the company is rumored to be changing the user interface all over again.

    Front Row

    Similar to Cover Flow, Front Row was an interface that Apple created to use the Mac as a multimedia center. The app made it easier to navigate between the user’s music, videos, and photos. The experience of using Front Row was really cool, especially when the Mac was connected to a big monitor or TV.

    With Front Row, it was easy to imagine what a multimedia device (like a DVD player) from Apple would look like. Apple never launched a DVD player, but it did announce the first Apple TV in 2007–it had the same interface as Front Row, but now in a standalone device.

    Front Row was later removed from the Mac, probably because Apple wants you to buy an Apple TV instead. Still, I miss Front Row, especially now with Apple Music and Apple TV+.

    Front Row provided an easy-to-use interface for using your Mac as an entertainment center. Apple

    iSight camera

    Pretty much every Apple device today has a built-in front-facing camera, but there was a time when that wasn’t a thing, and Apple had to create its own webcam for the Mac.

    Called iSight, Apple’s webcam had a beautiful design for its time (which still looks modern). It was definitely something you’d look at and think, “that’s an Apple product.” The iSight had decent specs for a 2003 webcam, with a three-element lens with autofocus and 480p resolution at 30fps. It also had built-in microphones with noise cancellation and a cool way to cover the lens for better privacy.

    We now have 1080p and even 4K webcams built into our devices, but there’s still a market for external webcams with larger sensors and better microphones for those who videoconference every day. Although this would be very niche, I would like to see Apple reintroduce iSight with a large 4K sensor for vloggers. Unfortunately, this seems quite unlikely to happen since Apple itself has released a feature that lets users turn their iPhone into a webcam for their Mac or Apple TV.

    The iSight webcam design would look great in today’s Apple lineup. Christopher Phin

    Back to the future

    Apple is known for not reviving things from the past very often, even when it’s something its users really love. Still, we can dream of some of these things coming back in the future–just look at MagSafe, which was removed from MacBooks and then added back years later. And even though some of these technologies may not seem relevant these days, they certainly contributed to improving the Apple ecosystem at the time.
    #best #apple #tech #thats #gone
    The best Apple tech that’s gone forever
    Macworld Throughout its history, Apple has introduced many industry-changing technologies, some of which are still present in today’s devices, like MagSafe on MacBooks. However, some of the technologies introduced by Apple have been discontinued over time for different reasons. While most of these decisions are understandable, some technologies were ahead of their time, and I really miss them today. Let’s look back at some of Apple’s best ideas that were discontinued but ahead of their time and deserve to come back. 3D Touch Although 3D Touch was first introduced with the original Apple Watch as “Force Touch,” Apple later added it to the iPhone. I remember seeing the iPhone 6s introduction video in 2015 with Jony Ive explaining how 3D Touch worked, and it sounded impressive. Indeed, it was. Essentially, Apple put pressure sensors under the iPhone’s screen so that it could distinguish a soft press from a hard press with extreme precision. 3D Touch could be used in many ways, such as pressing harder on an icon to reveal more options or gently pressing a link on a webpage to see a preview of it. Game developers could also adopt 3D Touch for custom controls. You can still perform the 3D Touch action on today’s iPhones, but the result is different.Filipe Esposito The technology blew my mind, and I loved how precise it was. Not only that, but 3D Touch worked very well with the iPhone’s haptic feedback. It was so satisfying. But in practice, only a few apps added support for 3D Touch, and it was a very complex and expensive technology to build. Apple ended up discontinuing 3D Touch, starting with the iPhone XR, and replaced some of the actions with a simple long press on the screen. I still dream of future iPhones having 3D Touch back. Luckily, the trackpads on MacBooks still have Force Touch, so that’s something. AirPort AirPort was Apple’s lineup of wireless routers. 
First introduced in 1999, AirPort devices always had a futuristic design and made it easy for users to plug in an internet cable and have a Wi-Fi connection at home. While the first version looked like a spaceship, one of the most interesting versions of the AirPort was the model that looked like a giant MacBook charger that had an ethernet port on it. There was also the sleek and modern AirPort Extreme, which was essentially a mini-tower made of polished white plastic. That thing was beautiful. The AirPort line once had a tower form factorand a design that was like a MacBook power adapter.Apple Apple’s AirPorts gained many cool features over time. For example, you could plug a USB stick, printer, or external hard drive into it, and all your Apple devices would have wireless access to it. It could also turn wired sound systems into AirPlay speakers. AirPort also resulted in another product called Time Capsule, which had an internal hard drive to wirelessly back up your Mac. Apple stopped selling AirPorts in 2018, and while rumors suggest that Apple has no plans to introduce another Wi-Fi router anytime soon, the company has reportedly been exploring the idea of making Apple TVs and HomePods work as Wi-Fi signal extenders. I really hope that’s true. iPod A lot has changed since Apple introduced the first iPod in 2001. iPod wasn’t just a good product, it also made anyone using it look cool. I remember upgrading from a Discman to an iPod, and I felt like I was in the future. “A thousand songs in your pocket,” as Apple’s slogan went. The third-generation iPod nano. Filipe Esposito There have been many different versions of the iPod over the years, including the iPod touch, but the most iconic ones will always be the iPods with a Click Wheel. There was a unique and satisfying feeling in turning your finger on that wheel to navigate the interface. It was unlike anything else at the time. 
And although the product was once a hit, it ended up losing the battle to smartphones. Apple knew this, which is why the company has always promoted the iPhone as “the best iPod ever.” Suddenly, our phones could also work as great iPods, so having a dedicated music player no longer made sense for most people. Apple stopped selling the iPod shuffle and iPod nano in 2017, while the iPod touch remained in the lineup until 2022. The company said at the time that the iPod still lives on in its other products, and that’s true. Thanks to the iPod, we now have the iPhone, AirPods, and the HomePod. While some people are nostalgic about the iPod and want it back, I really believe that it served its purpose and that there are better alternatives now, like using an Apple Watch with AirPods if you want to listen to music without having your phone nearby. Cover Flow How could I forget Cover Flow? The cool iPod interface that lets users explore their music library by its art covers? Interestingly, Cover Flow was created by a third-party developer, and the idea was so good that Apple acquired it to implement it in iTunes. The feature was eventually added to the iPod, iPhone, and even Finder on the Mac to browse files with large previews. I remember using the original iPod touch for the first time, rotating the screen with the Music app open, and seeing the Cover Flow interface. That was probably the moment I fell in love with that product. Cover Flow on an iPod touch.Filipe Esposito Cover Flow went away with iOS 7, when Apple completely redesigned the iPhone’s operating system in favor of a flat-design interface with less skeuomorphism. But Apple should bring Cover Flow back. It would be a good feature to reintroduce with iOS 19, as the company is rumored to be changing the user interface all over again. Front Row Similar to Cover Flow, Front Row was an interface that Apple created to use the Mac as a multimedia center. 
The app made it easier to navigate between the user’s music, videos, and photos. The experience of using Front Row was really cool, especially when the Mac was connected to a big monitor or TV. With Front Row, it was easy to imagine what a multimedia devicefrom Apple would look like. Apple never launched a DVD player, but it did announce the first Apple TV in 2007–it had the same interface as Front Row, but now in a standalone device. Front Row was later removed from the Mac, probably because Apple wants you to buy an Apple TV instead. Still, I miss Front Row, especially now with Apple Music and Apple TV+. Front Row provided an easy-to-use interface for using your Mac as an entertainment center.Apple iSight camera Pretty much every Apple device today has a built-in front-facing camera, but there was a time when that wasn’t a thing, and Apple had to create its own webcam for the Mac. Called iSight, Apple’s webcam had a beautiful design for its time. It was definitely something you’d look at and think, “that’s an Apple product.” The iSight had decent specs for a 2003 webcam, with a three-element lens with autofocus and 480p resolution at 30fps. It also had built-in microphones with noise cancellation and a cool way to cover the lens for better privacy. We now have 1080p and even 4K webcams built into our devices, but there’s still a market for external webcams with larger sensors and better microphones for those who videoconference every day. Although this would be very niche, I would like to see Apple reintroduce iSight with a large 4K sensor for vloggers. Unfortunately, this seems quite unlikely to happen since Apple itself has released a feature that lets users turn their iPhone into a webcam for their Mac or Apple TV. The iSight webcam design would look great in today’s Apple lineup.Christopher Phin Back to the future Apple is known for not reviving things from the past very often, even when it’s something its users really love. 
Still, we can dream of some of these things coming back in the future–just look at MagSafe, which was removed from MacBooks and then added back years later. And even though some of these technologies may not seem relevant these days, they certainly contributed to improving the Apple ecosystem at the time. #best #apple #tech #thats #gone
    WWW.MACWORLD.COM
    The best Apple tech that’s gone forever
    Macworld Throughout its history, Apple has introduced many industry-changing technologies, some of which are still present in today’s devices, like MagSafe on MacBooks. However, some of the technologies introduced by Apple have been discontinued over time for different reasons. While most of these decisions are understandable, some technologies were ahead of their time, and I really miss them today. Let’s look back at some of Apple’s best ideas that were discontinued but ahead of their time and deserve to come back. 3D Touch Although 3D Touch was first introduced with the original Apple Watch as “Force Touch,” Apple later added it to the iPhone. I remember seeing the iPhone 6s introduction video in 2015 with Jony Ive explaining how 3D Touch worked, and it sounded impressive. Indeed, it was. Essentially, Apple put pressure sensors under the iPhone’s screen so that it could distinguish a soft press from a hard press with extreme precision. 3D Touch could be used in many ways, such as pressing harder on an icon to reveal more options or gently pressing a link on a webpage to see a preview of it. Game developers could also adopt 3D Touch for custom controls. You can still perform the 3D Touch action on today’s iPhones, but the result is different.Filipe Esposito The technology blew my mind, and I loved how precise it was. Not only that, but 3D Touch worked very well with the iPhone’s haptic feedback. It was so satisfying. But in practice, only a few apps added support for 3D Touch, and it was a very complex and expensive technology to build. Apple ended up discontinuing 3D Touch, starting with the iPhone XR, and replaced some of the actions with a simple long press on the screen. I still dream of future iPhones having 3D Touch back. Luckily, the trackpads on MacBooks still have Force Touch, so that’s something. AirPort AirPort was Apple’s lineup of wireless routers. 
First introduced in 1999, AirPort devices always had a futuristic design and made it easy for users to plug in an internet cable and have a Wi-Fi connection at home. While the first version looked like a spaceship, one of the most interesting versions of the AirPort was the model that looked like a giant MacBook charger that had an ethernet port on it. There was also the sleek and modern AirPort Extreme, which was essentially a mini-tower made of polished white plastic. That thing was beautiful. The AirPort line once had a tower form factor (left) and a design that was like a MacBook power adapter (left).Apple Apple’s AirPorts gained many cool features over time. For example, you could plug a USB stick, printer, or external hard drive into it, and all your Apple devices would have wireless access to it. It could also turn wired sound systems into AirPlay speakers. AirPort also resulted in another product called Time Capsule, which had an internal hard drive to wirelessly back up your Mac. Apple stopped selling AirPorts in 2018, and while rumors suggest that Apple has no plans to introduce another Wi-Fi router anytime soon, the company has reportedly been exploring the idea of making Apple TVs and HomePods work as Wi-Fi signal extenders. I really hope that’s true. iPod A lot has changed since Apple introduced the first iPod in 2001. iPod wasn’t just a good product, it also made anyone using it look cool. I remember upgrading from a Discman to an iPod, and I felt like I was in the future. “A thousand songs in your pocket,” as Apple’s slogan went. The third-generation iPod nano. Filipe Esposito There have been many different versions of the iPod over the years, including the iPod touch, but the most iconic ones will always be the iPods with a Click Wheel. There was a unique and satisfying feeling in turning your finger on that wheel to navigate the interface. It was unlike anything else at the time. 
And although the product was once a hit, it ended up losing the battle to smartphones. Apple knew this, which is why the company has always promoted the iPhone as “the best iPod ever.” Suddenly, our phones could also work as great iPods, so having a dedicated music player no longer made sense for most people. Apple stopped selling the iPod shuffle and iPod nano in 2017, while the iPod touch remained in the lineup until 2022. The company said at the time that the iPod still lives on in its other products, and that’s true. Thanks to the iPod, we now have the iPhone, AirPods, and the HomePod. While some people are nostalgic about the iPod and want it back, I really believe that it served its purpose and that there are better alternatives now, like using an Apple Watch with AirPods if you want to listen to music without having your phone nearby. Cover Flow How could I forget Cover Flow? The cool iPod interface that lets users explore their music library by its art covers? Interestingly, Cover Flow was created by a third-party developer, and the idea was so good that Apple acquired it to implement it in iTunes. The feature was eventually added to the iPod, iPhone, and even Finder on the Mac to browse files with large previews. I remember using the original iPod touch for the first time, rotating the screen with the Music app open, and seeing the Cover Flow interface. That was probably the moment I fell in love with that product. Cover Flow on an iPod touch.Filipe Esposito Cover Flow went away with iOS 7, when Apple completely redesigned the iPhone’s operating system in favor of a flat-design interface with less skeuomorphism. But Apple should bring Cover Flow back. It would be a good feature to reintroduce with iOS 19, as the company is rumored to be changing the user interface all over again. Front Row Similar to Cover Flow, Front Row was an interface that Apple created to use the Mac as a multimedia center. 
The app made it easy to navigate between the user's music, videos, and photos. The experience of using Front Row was really cool, especially when the Mac was connected to a big monitor or TV. With Front Row, it was easy to imagine what a multimedia device (like a DVD player) from Apple would look like. Apple never launched a DVD player, but it did announce the first Apple TV in 2007, which had the same interface as Front Row, now in a standalone device.

Front Row was later removed from the Mac, probably because Apple wants you to buy an Apple TV instead. Still, I miss Front Row, especially now with Apple Music and Apple TV+.

Front Row provided an easy-to-use interface for using your Mac as an entertainment center. Apple

iSight camera

Pretty much every Apple device today has a built-in front-facing camera, but there was a time when that wasn't the case, and Apple had to create its own webcam for the Mac. Called iSight, Apple's webcam had a beautiful design for its time (one that still looks modern today). It was definitely something you'd look at and think, "That's an Apple product."

The iSight had decent specs for a 2003 webcam: a three-element lens with autofocus and 480p resolution at 30fps. It also had built-in microphones with noise cancellation and a clever way to cover the lens for better privacy.

We now have 1080p and even 4K webcams built into our devices, but there's still a market for external webcams with larger sensors and better microphones for those who videoconference every day. Although it would be very niche, I would like to see Apple reintroduce the iSight with a large 4K sensor for vloggers. Unfortunately, this seems quite unlikely to happen, since Apple itself has released a feature that lets users turn their iPhone into a webcam for their Mac or Apple TV.
The iSight webcam design would look great in today's Apple lineup. Christopher Phin

Back to the future

Apple is known for not reviving things from the past very often, even when it's something its users really love. Still, we can dream of some of these things coming back in the future. Just look at MagSafe, which was removed from MacBooks and then added back years later. And even though some of these technologies may not seem relevant these days, they certainly contributed to improving the Apple ecosystem at the time.