• Startup Uses NVIDIA RTX-Powered Generative AI to Make Coolers, Cooler

    Mark Theriault founded the startup FITY envisioning a line of clever cooling products: cold drink holders that come with freezable pucks to keep beverages cold for longer without the mess of ice. The entrepreneur started with 3D prints of products in his basement, building one unit at a time, before eventually scaling to mass production.
    Founding a consumer product company from scratch was a tall order for a single person. Going from preliminary sketches to production-ready designs was a major challenge. To bring his creative vision to life, Theriault relied on AI and his NVIDIA GeForce RTX-equipped system. For him, AI isn’t just a tool — it’s an entire pipeline to help him accomplish his goals. Read more about his workflow below.
    Plus, GeForce RTX 5050 laptops start arriving today at retailers worldwide, from $999. GeForce RTX 5050 Laptop GPUs feature 2,560 NVIDIA Blackwell CUDA cores, fifth-generation AI Tensor Cores, fourth-generation RT Cores, a ninth-generation NVENC encoder and a sixth-generation NVDEC decoder.
    In addition, NVIDIA’s Plug and Play: Project G-Assist Plug-In Hackathon — running virtually through Wednesday, July 16 — invites developers to explore AI and build custom G-Assist plug-ins for a chance to win prizes. Save the date for the G-Assist Plug-In webinar on Wednesday, July 9, from 10-11 a.m. PT, to learn more about Project G-Assist capabilities and fundamentals, and to participate in a live Q&A session.
    From Concept to Completion
    To create his standout products, Theriault tinkers with potential FITY Flex cooler designs with traditional methods, from sketch to computer-aided design to rapid prototyping, until he finds the right vision. A unique aspect of the FITY Flex design is that it can be customized with fun, popular shoe charms.
    For packaging design inspiration, Theriault prototypes with his preferred text-to-image generative AI model, Stable Diffusion XL — which runs 60% faster with the NVIDIA TensorRT software development kit — through ComfyUI, a modular, node-based interface.
    ComfyUI gives users granular control over every step of the generation process — prompting, sampling, model loading, image conditioning and post-processing. It’s ideal for advanced users like Theriault who want to customize how images are generated.
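    For readers who want to see those stages outside a node graph, here is a minimal sketch of the same steps using the Hugging Face diffusers library in Python. The model ID, prompt and sampler settings are illustrative rather than taken from Theriault’s workflow, and the sketch skips the TensorRT acceleration mentioned above.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Model loading: pull the SDXL base checkpoint onto the GPU in half precision.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Prompting and sampling: the same knobs ComfyUI exposes as individual nodes.
prompt = "studio product photo of a drink cooler on a beach, soft light"  # illustrative
image = pipe(
    prompt,
    negative_prompt="blurry, low quality",
    num_inference_steps=30,  # sampler steps
    guidance_scale=7.0,      # prompt adherence vs. variety
).images[0]

# Post-processing: here simply saving the result to disk.
image.save("packaging_concept.png")
```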
    Theriault’s uses of AI result in a complete computer graphics-based ad campaign. Image courtesy of FITY.
    NVIDIA and GeForce RTX GPUs based on the NVIDIA Blackwell architecture include fifth-generation Tensor Cores designed to accelerate AI and deep learning workloads. These GPUs work with CUDA optimizations in PyTorch to seamlessly accelerate ComfyUI, reducing generation time on FLUX.1-dev, an image generation model from Black Forest Labs, from two minutes per image on the Mac M3 Ultra to about four seconds on the GeForce RTX 5090 desktop GPU.
    ComfyUI can also add ControlNets — AI models that help control image generation — that Theriault uses for tasks like guiding human poses, setting compositions via depth mapping and converting scribbles to images.
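    As a rough illustration of how a depth ControlNet steers composition, here is a hedged sketch with diffusers. The checkpoint choice and the source of the depth map are assumptions for the example, not details from the article.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

# Load a depth-conditioned ControlNet alongside the SDXL base model.
controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-depth-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# A precomputed depth map (e.g. exported from a 3D scene) fixes the composition.
depth_map = load_image("cooler_depth_map.png")  # hypothetical input file

image = pipe(
    "lifestyle shot of a drink cooler on a picnic table",
    image=depth_map,
    controlnet_conditioning_scale=0.6,  # how strongly the depth map constrains layout
).images[0]
image.save("depth_guided_concept.png")
```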
    Theriault even creates his own fine-tuned models to keep his style consistent. He used low-rank adaptation (LoRA) models — small, efficient adapters inserted into specific layers of the network — enabling hyper-customized generation with minimal compute cost.
    LoRA models allow Theriault to ideate on visuals quickly. Image courtesy of FITY.
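    Once such an adapter is trained, applying it is a one-line step in most pipelines. A minimal sketch with diffusers, assuming a hypothetical LoRA file trained on a product style; the file name and prompt are illustrative.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Attach the custom style adapter; only these small LoRA weights were trained,
# the base model stays frozen.
pipe.load_lora_weights("fity_style_lora.safetensors")  # hypothetical file

image = pipe(
    "product render of a flexible drink cooler, brand style"
).images[0]
image.save("brand_style_concept.png")
```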
    “Over the last few months, I’ve been shifting from AI-assisted computer graphics renders to fully AI-generated product imagery using a custom Flux LoRA I trained in house. My RTX 4080 SUPER GPU has been essential for getting the performance I need to train and iterate quickly.” – Mark Theriault, founder of FITY 

    Theriault also taps into generative AI to create marketing assets like FITY Flex product packaging. He uses FLUX.1, which excels at generating legible text within images, addressing a common challenge in text-to-image models.
    Though FLUX.1 models can typically consume over 23GB of VRAM, NVIDIA has collaborated with Black Forest Labs to help reduce the size of these models using quantization — a technique that reduces model size while maintaining quality. The models were then accelerated with TensorRT, which provides an up to 2x speedup over PyTorch.
    To simplify using these models in ComfyUI, NVIDIA created the FLUX.1 NIM microservice, a containerized version of FLUX.1 that can be loaded in ComfyUI and enables FP4 quantization and TensorRT support. Combined, the models come down to just over 11GB of VRAM, and performance improves by 2.5x.
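    As a rough sanity check on those VRAM numbers, here is a back-of-the-envelope estimate in Python. The roughly 12-billion-parameter size assumed for the FLUX.1-dev transformer and the overhead allowance for the text encoders and VAE are assumptions for illustration, not published figures.

```python
# Rough VRAM estimate for a ~12B-parameter diffusion transformer (assumed size
# for FLUX.1-dev; the overhead for text encoders/VAE is also an assumption).
params = 12e9
overhead_gb = 5.0

for fmt, bytes_per_param in {"FP16": 2.0, "FP8": 1.0, "FP4": 0.5}.items():
    weights_gb = params * bytes_per_param / 1e9
    print(f"{fmt}: ~{weights_gb:.0f} GB weights, ~{weights_gb + overhead_gb:.0f} GB with overhead")

# FP16 weights alone already exceed the "over 23GB" figure cited above, while
# FP4 weights plus overhead land near the ~11GB quoted for the NIM microservice.
```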
    Theriault uses Blender’s Cycles renderer to render out the final files. For 3D workflows, NVIDIA offers the AI Blueprint for 3D-guided generative AI to ease the positioning and composition of 3D images, so anyone interested in this method can quickly get started.
    Photorealistic renders. Image courtesy of FITY.
    Finally, Theriault uses large language models to generate marketing copy — tailored for search engine optimization, tone and storytelling — as well as to complete his patent and provisional applications, work that usually costs thousands of dollars in legal fees and considerable time.
    Generative AI helps Theriault create promotional materials like the above. Image courtesy of FITY.
    “As a one-man band with a ton of content to generate, having on-the-fly generation capabilities for my product designs really helps speed things up.” – Mark Theriault, founder of FITY

    Every texture, every word, every photo, every accessory was a micro-decision, Theriault said. AI helped him survive the “death by a thousand cuts” that can stall solo startup founders, he added.
    Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NVIDIA NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, digital humans, productivity apps and more on AI PCs and workstations. 
    Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter.
    Follow NVIDIA Workstation on LinkedIn and X. 
    See notice regarding software product information.
  • BougeRV water heater review: hot showers to go

    Hot water is like internet connectivity for most Verge readers: you just expect it to be there. But that’s unlikely to be the case this summer when tent camping at a music festival or road-tripping into the great unknown. That’s where BougeRV’s battery-powered shower comes in. The $310 “Portable Propane Outdoor Camping Water Heater” from BougeRV is not only optimized for search engine discovery, it also delivers a luxurious spray of hot steaming water to the unwashed, be they human, canine, or stubborn pots and pans. Charge up the battery, attach a propane canister, drop the pump into a jug of water, and you’re ready to get sudsing. It’s so useful and flexible that I’ve ditched my plans to install a permanent shower cabin and expensive hot water system inside my adventure van, even if I don’t completely trust it.
    Verge Score: 8/10 ($310 at BougeRV)
    The Good: Battery-powered portability; temperature control; adjustable flow to save water; lots of safety features
    The Bad: Lots of hoses and cables to snag; weak shower head holder; no bag to carry all the accessories; longevity concerns
    My current portable shower consists of an 11-liter water bag, a manual foot pump, and a spray nozzle. To make it hot, I have to heat water on the stove or hang the bag in the sun for several hours, yet it still costs over $150. For $310, the BougeRV heated shower seems like a bargain.
    The BougeRV system can produce a maximum heat output of 20,500 BTUs — about half of a typical residential gas water heater. It measures 15.75 x 6.7 x 14.57 inches (40 x 17 x 31cm) and weighs 13.2 pounds (6.21kg), making it compact and fairly lightweight with two big handles for easy carry. The hoses and cabling make it a little unwieldy — capable of chaos inside a small space unless handled with care.
    Assembly starts with screwing in an easy-to-find one-pound (454g) propane canister that attaches at the rear of the unit. That’s the size BougeRV recommends, but you wouldn’t be the first to instead run a hose from your RV’s existing propane tank to the pressure regulator on the water heater. Two quick-connect water hoses — labeled blue and red for idiot-proof attachment — route the water from your chosen receptacle, through that gas furnace, and out through the showerhead. The long 2.5m (8.2 feet) shower hose allows for flexible placement of the heater.
    The small water pump measures just 2.24 inches (5.7cm) across, so it easily fits through the opening of standard jerry cans. The pump is electrically powered by the BougeRV unit, which is powered by its rechargeable battery, an AC wall jack, or a 12V adapter that plugs into the cigarette jack of your vehicle or solar generator.
    My outdoor shower using a standard jerry can for water. Magnets hold the towel in place and I’d buy a magnetic shower head holder to complete the setup. Photo by Thomas Ricker / The Verge
    I can place the BougeRV system on my sliding tray for a gear cleaning station. A long press on the pump button bypasses the heater to save gas. Photo by Thomas Ricker / The Verge
    A makeshift outdoor sink. The included holder is too weak to hold the shower head in more extreme positions. Photo by Thomas Ricker / The Verge
    Hank hates getting hosed off with cold water but enjoyed this lush heated rinse. (He rolled in dirt immediately after.) Photo by Thomas Ricker / The Verge
    The 2500mAh / 12V (30Wh) integrated Lithium-ion battery takes about three hours to charge from the included charger.
    A full battery and one-pound (454g) canister of liquid propane gas can pump out about an hour’s worth of hot water before both run dry. The shower’s gas consumption rate is 20MJ/h. Alternatively, you can save gas with a long press on the pump button to put the shower into cold water mode — ideal for rinsing off your mountain bike, hiking shoes, or wet suit, for example.
    The dial on the front of the heater controls the size of the flame. I did a handful of tests, starting with water measuring between 13 and 16 degrees Celsius (55–61 degrees Fahrenheit) according to the display on the BougeRV water heater. With the dial turned all the way to the left, the water pouring from the shower head rose to 23–25C (73–77F) after just a few seconds. Turned all the way to the right, the temperature maxed out at a steamy 34–41C (93–105F) in about 30 seconds.
    Recycling the water can make it even hotter, if you dare. After two or three cycles on max, the heater boosted the temperature above 51C (124F) before the unit shut down with an error, by design. It’s not meant to exceed an average water temperature above 50C (122F). A simple on/off cycle reset the E6 error.
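    Those temperature figures line up with a quick energy-balance estimate. A rough sketch in Python, assuming all 20,500 BTU/h of burner output reaches the water; real-world combustion and heat-exchanger losses will knock the figure down somewhat.

```python
# Back-of-the-envelope temperature rise for the BougeRV heater.
btu_per_hour = 20_500
watts = btu_per_hour * 1055 / 3600          # ~6.0 kW of heat input (assumes no losses)

flow_l_per_min = 2.5                         # mid-range of the 2.2-3 L/min spec
mass_flow_kg_s = flow_l_per_min / 60         # 1 L of water is ~1 kg
specific_heat = 4186                         # J/(kg*K) for water

delta_t = watts / (mass_flow_kg_s * specific_heat)
print(f"Theoretical rise: {delta_t:.0f} C")  # ~34 C; the measured rise of roughly
                                             # 20-28 C fits once burner losses are included
```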
    Water flow is between 2.2 and 3 liters per minute — well below what you can expect from the 9 to 12 L/min flow of a modern home shower. That’s still acceptable, in my opinion, and far superior to nothing, which is the typical alternative when camping away from home. The shower head has a rocker switch to toggle between hardish, mixed, and soft water flow rates as well as an on/off limiter button to help conserve water between lathers.
    It’s surprisingly quiet even with the pump turned on. There’s some rapid clicking to ignite the gas (followed by a whoosh of flame) whenever the flow of water returns, and the pump produces a low-level hum that’s quickly drowned out by the sound of spraying water.
    The water heater is also protected from tilts, bumps, and an empty water source. When I leaned my review unit over about 30 degrees, the unit shut off. It also shut off automatically after two minutes of trying to pump from an empty bucket. A master override on/off switch on the bottom prevents the unit from turning on accidentally if the on/off button on the front is bumped during transport or storage.
    I’m impressed by BougeRV’s water heater, but I’m a little concerned about its durability over time. After using it on the beach on a windy day, I ran into trouble once I returned inside: the heater didn’t heat and the water was reduced to a trickle out of the showerhead. It’s possible that some sediment trapped in the lines reduced the flow rate below the 1.2L/min required for ignition. Nevertheless, the issue was resolved after a few minutes of fiddling with the hoses and filters, and turning the unit on and off again. BougeRV offers a two-year warranty and says the water heater is rated at IPX4. So while it’s resistant to splashing water, there’s no assurance offered against dust and blowing sand.
    I do have a few other gripes. Those hoses can be a tripping and snagging hazard, and the plastic clip meant to hold the showerhead to one of the lifting handles is too weak to keep it from rotating and spraying your surroundings. I also wish BougeRV bundled the heater with an accessory bag to carry all the power adapters and hoses. And when putting the device away, you have to tip it forward to drain all the collected water from the inlet and outlet — there’s no automatic expulsion mechanism. But really, these are trivial issues for what the unit does at this price.
    A cold water option is great for cleaning gear.
    Prior to this review, I had been in the late planning stages of having a shower cabin, water pump, gas heater, extra-large water tank, and all necessary plumbing installed in my Sprinter van. Total cost: about $4,000. I’m now convinced that a portable system like what BougeRV offers is a better option. Why pay so much for something so permanent that’s only used a few minutes each week, for maybe half the year?
    Instead, BougeRV’s $310 portable water heater can function as an outdoor shower during the summer months or be moved inside (with ventilation) when coupled with a portable shower curtain and basin, all for less than $600. That sounds like a better use of my money, and probably yours if you’re an aspiring vanlifer. And when the van is parked, I can bring those hot (or cold) jets of water anywhere my adventures might take me: to clean up after mountain biking in the muddy forest or kitesurfing in the salty sea, to wash the dog outside after rolling in shit again, or to take a refreshing shower during a sweaty four-day music festival.
    A near-identical water heater is sold under the Ranien and Camplux brands, but those have larger 4000mAh (48Wh) batteries and list for between $349 and $399. So it might pay to shop around.
    Photos by Thomas Ricker / The Verge
  • Folding the Future: Lenovo ThinkPad X1 Fold 2024 vs. Huawei MateBook Fold Ultimate Design

    Why revisit the Lenovo ThinkPad X1 Fold in 2025? The answer lies in the rapid evolution of foldable computing. When Lenovo introduced its second-generation foldable PC last year, it represented the pinnacle of what was possible in this emerging category. The device combined a versatile 16.3-inch OLED display with robust engineering and the familiar Windows ecosystem. It set benchmarks for build quality, display technology, and adaptability that competitors would need to surpass.
    Designer: Lenovo
    Designer: Huawei
    Fast forward to today, and the landscape has shifted dramatically. Huawei has unveiled its MateBook Fold Ultimate Design, a device that challenges our understanding of what foldable laptops can achieve. With an 18-inch display that folds to a 13-inch form factor, a chassis measuring just 7.3mm when open, and a proprietary operating system built specifically for foldable hardware, Huawei has raised the stakes considerably.
    This comparison arrives at a pivotal moment for foldable computing. The category has matured beyond proof-of-concept to deliver genuinely useful productivity tools. Now that we have seen what Lenovo accomplished with the X1 Fold 2024, let us examine how Huawei’s MateBook Fold Ultimate Design responds and potentially redefines the future of portable computing.

    Design Philosophy and Physical Presence
    The Lenovo ThinkPad X1 Fold 2024 embodies the ThinkPad ethos of reliability and purposeful design. Its magnesium alloy frame and recycled PET woven fabric cover create a device that feels substantial and durable. The fold-flat hinge eliminates gaps when closed, protecting the display while maintaining a clean profile. At 8.6mm when open and 17.4mm when closed, the X1 Fold is not the thinnest laptop available, but its construction inspires confidence. The device weighs approximately 2.9 pounds without accessories, increasing to 4.3 pounds with the keyboard and stand attached. This weight reflects Lenovo’s prioritization of durability over absolute portability.

    Huawei takes a dramatically different approach with the MateBook Fold Ultimate Design. The device measures an astonishing 7.3mm when open and 14.9mm when closed, making it significantly thinner than the X1 Fold. At just 1.16kg for the base unit and 1.45kg with the keyboard, the MateBook Fold is remarkably light for a device with an 18-inch display. This achievement comes from Huawei’s use of carbon fiber reinforcement and a zirconium-based liquid metal hinge. The 285mm “water-drop” hinge design provides smooth folding action and increased durability, with Huawei claiming a 400% improvement in hovering torque compared to conventional designs.
    The most significant physical difference between these devices becomes apparent in their approach to accessories. Lenovo requires a separate kickstand for desk use, adding bulk and complexity to the overall package. Huawei integrates a sturdy kickstand directly into the MateBook Fold, eliminating the need for additional accessories and streamlining the user experience. This built-in solution allows for more versatile positioning and reduces the number of components users need to manage.

    Both devices transform between multiple modes, but their physical dimensions create distinct experiences. When folded, the X1 Fold becomes a 12-inch laptop, which many users find cramped for serious multitasking. The MateBook Fold offers a more generous 13-inch workspace in laptop mode, providing additional screen real estate for productivity tasks. This difference may seem small on paper, but it significantly impacts the practical usability of these devices in their folded configurations.

    The materials chosen for each device reveal different priorities. Lenovo emphasizes sustainability with its recycled PET fabric cover and plastic-free packaging. This approach aligns with growing corporate environmental concerns and provides a tactile warmth that distinguishes the X1 Fold from typical metal-clad laptops. Huawei focuses on premium materials that enable extreme thinness, using advanced alloys and composites throughout the chassis. Both approaches result in distinctive aesthetics that will appeal to different user preferences.
    Display Technology and Visual Experience
    Display technology represents the heart of any foldable device, and both manufacturers have made significant investments in this critical component. The Lenovo ThinkPad X1 Fold features a 16.3-inch OLED panel with a resolution of 2560 x 2024 and a 4:3 aspect ratio. This display delivers 400 nits of brightness for standard content, increasing to 600 nits for HDR material. The panel supports DisplayHDR True Black 600 certification and Dolby Vision, covering 100% of the DCI-P3 color gamut. An anti-smudge coating helps maintain visual clarity during extended use.

    Huawei pushes display technology further with the MateBook Fold Ultimate Design. Its 18-inch LTPO OLED screen boasts a resolution of 3296 x 2472, maintaining the same 4:3 aspect ratio as the Lenovo. However, the MateBook Fold achieves a peak brightness of 1600 nits, more than double that of the X1 Fold. The dual-layer LTPO technology reduces power consumption by 30% compared to standard OLED panels while supporting adaptive refresh rates from 1Hz to 120Hz. This combination of size, brightness, and efficiency creates a visual experience that surpasses the X1 Fold in nearly every measurable aspect.
    Both displays exhibit a visible crease at the fold, though the severity varies. Lenovo’s hinge design minimizes the crease when the device is fully open, but it becomes more noticeable at certain viewing angles. Huawei claims its water-drop hinge reduces crease visibility, though independent verification is limited. In practical use, both creases become less distracting over time as users adapt to the form factor.
    Color accuracy and visual impact favor the MateBook Fold, with its higher brightness and contrast ratio of 2,000,000:1 creating more vibrant images and videos. The X1 Fold delivers excellent color reproduction but cannot match the visual punch of Huawei’s display. For creative professionals and media consumers, this difference could be decisive when choosing between these devices.

    The touch response and pen input capabilities of both displays deserve consideration. Lenovo’s display works seamlessly with the Precision Pen, offering pressure sensitivity that makes note-taking and sketching feel natural. The anti-smudge coating balances fingerprint resistance with smooth touch response. Huawei provides similar functionality, though detailed specifications about pressure sensitivity levels and palm rejection capabilities are not yet widely available. Both devices support multi-touch gestures for navigation and manipulation of on-screen elements.
    The 4:3 aspect ratio on both devices proves ideal for productivity applications, providing more vertical space than typical 16:9 laptop displays. This ratio works particularly well for document editing, web browsing, and coding. When watching widescreen video content, both devices display black bars at the top and bottom, but the overall screen size still delivers an immersive viewing experience, especially on the larger MateBook Fold.
    Performance and Hardware Capabilities
    The performance profiles of these devices reflect their different design philosophies. Lenovo equips the ThinkPad X1 Fold with 12th Generation Intel processors, ranging from the Core i5-1230U to the Core i7-1260U vPro. These 10-core, 12-thread chips provide adequate performance for productivity tasks but represent previous-generation technology in 2025. The X1 Fold supports up to 32GB of LPDDR5 RAM and 1TB of PCIe Gen 4 SSD storage. Intel Iris Xe integrated graphics handle visual processing, delivering sufficient power for office applications but struggling with demanding creative workloads.

    Huawei takes a different approach with its Kirin X90 ARM-based chipset. This custom silicon is specifically optimized for HarmonyOS and the foldable form factor. The MateBook Fold includes 32GB of RAM and offers storage options up to 2TB. While direct performance comparisons are difficult due to the different architectures, the Kirin X90 delivers responsive performance for HarmonyOS applications and benefits from tight hardware-software integration.
    Thermal management represents another point of divergence. Lenovo employs a fanless design in the X1 Fold, prioritizing silent operation over sustained performance. This approach leads to thermal throttling during extended workloads, limiting the device’s capabilities for processor-intensive tasks. Huawei incorporates a vapor chamber cooling system with diamond aluminum dual fans in the MateBook Fold, enabling 28W sustained performance without excessive heat or noise. This advanced cooling solution allows the MateBook Fold to maintain peak performance during demanding tasks, despite its thinner profile.

    Battery life reflects both hardware choices and software optimization. The X1 Fold includes a dual-battery design totaling 64Wh, delivering approximately 8 hours and 51 minutes in laptop mode and 7 hours and 27 minutes in tablet mode under real-world conditions. The MateBook Fold features a larger 74.69Wh battery, and its LTPO display technology reduces power consumption significantly. While independent verification of Huawei’s “all-day” battery claims is not yet available, the combination of a larger battery and more efficient display technology suggests the MateBook Fold should offer superior battery life in comparable usage scenarios.
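    To put those battery figures side by side, here is a rough back-of-the-envelope comparison in Python. The assumption that both devices draw similar average power in laptop mode is mine, made only for illustration, not a measured result.

```python
# Implied average power draw from the X1 Fold's measured runtimes.
x1_capacity_wh = 64.0
x1_runtime_laptop_h = 8 + 51 / 60      # 8h51m in laptop mode
x1_runtime_tablet_h = 7 + 27 / 60      # 7h27m in tablet mode

draw_laptop_w = x1_capacity_wh / x1_runtime_laptop_h   # ~7.2 W
draw_tablet_w = x1_capacity_wh / x1_runtime_tablet_h   # ~8.6 W

# Hypothetical MateBook Fold runtime at the same average draw, before counting
# any savings from its more efficient LTPO panel.
matebook_capacity_wh = 74.69
projected_laptop_h = matebook_capacity_wh / draw_laptop_w   # ~10.3 h

print(f"X1 Fold draw: {draw_laptop_w:.1f} W laptop / {draw_tablet_w:.1f} W tablet")
print(f"MateBook Fold projection: ~{projected_laptop_h:.1f} h in laptop mode")
```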
    The storage subsystems in both devices utilize high-speed solid-state technology, but with different implementations. Lenovo’s PCIe Gen 4 SSD delivers sequential read speeds up to 5,000MB/s, providing quick access to large files and rapid application loading. Huawei has not published detailed storage performance metrics, but contemporary flagship devices typically feature similar high-performance storage solutions. Both devices offer sufficient storage capacity for professional workloads, with options ranging from 256GB to 2TB depending on configuration.
    Memory configurations play a crucial role in multitasking performance. Both devices offer 32GB in their top configurations, which provides ample headroom for demanding productivity workflows. Neither device allows for user-upgradable memory, as both use soldered RAM to maintain their slim profiles. This limitation means buyers must carefully consider their memory needs at purchase, as future upgrades are not possible.
    Operating Systems and Software Experience
    The most fundamental difference between these devices lies in their operating systems. The Lenovo ThinkPad X1 Fold runs Windows 11 Pro, providing access to the vast Windows software ecosystem and familiar productivity tools. Windows offers broad compatibility with business applications and enterprise management systems, making the X1 Fold a natural choice for corporate environments. However, Windows 11 still struggles with optimization for foldable form factors. Mode switching can be inconsistent, and the operating system sometimes fails to properly scale applications when transitioning between configurations.

    Huawei’s MateBook Fold runs HarmonyOS 5, a proprietary operating system designed specifically for the company’s ecosystem of devices. HarmonyOS offers several advantages for foldable hardware, including faster boot times, more efficient resource management, and seamless integration with other Huawei products. The operating system includes AI-powered features like document summarization, real-time translation, and context-aware suggestions through the Xiaoyi assistant. HarmonyOS also enables advanced multi-device collaboration, allowing users to transfer running apps between Huawei phones, tablets, and the MateBook Fold without interruption.
    The software ecosystem represents a significant consideration for potential buyers. Windows provides access to millions of applications, including industry-standard productivity, creative, and development tools. HarmonyOS currently offers over 1,000 optimized applications, with projections for 2,000+ by the end of 2025. While this number is growing rapidly, it remains a fraction of what Windows provides. Additionally, HarmonyOS and its app ecosystem are primarily focused on the Chinese market, limiting its appeal for international users.

    Security features differ between the platforms as well. Lenovo includes its ThinkShield security suite, Windows Hello facial recognition, and optional Computer Vision human-presence detection for privacy and security. Huawei implements its StarShield architecture, which provides security at the kernel level and throughout the operating system stack. Both approaches offer robust protection, but organizations with established Windows security protocols may prefer Lenovo’s more familiar implementation.

    The multitasking capabilities of each operating system deserve special attention for foldable devices. Windows 11 includes Snap Layouts and multiple virtual desktops, which work well on the X1 Fold’s large unfolded display. However, the interface can become cluttered in laptop mode due to the reduced screen size. HarmonyOS 5 features a multitasking system specifically designed for foldable displays, with intuitive gestures for splitting the screen, floating windows, and quick app switching. This optimization creates a more cohesive experience when transitioning between different device configurations.
    Software updates and long-term support policies differ significantly between these platforms. Windows 11 receives regular security updates and feature enhancements from Microsoft, with a well-established support lifecycle. HarmonyOS is newer, with less predictable update patterns, though Huawei has committed to regular improvements. For business users planning multi-year deployments, Windows offers more certainty regarding future compatibility and security maintenance.
    Keyboard, Input, and Accessory Integration
    The keyboard experience significantly impacts productivity on foldable devices, and both manufacturers take different approaches to this challenge. Lenovo offers the ThinkPad Bluetooth TrackPoint Keyboard Folio as an optional accessory. This keyboard maintains the classic ThinkPad feel with good key travel and includes the iconic red TrackPoint nub. However, the keyboard feels cramped compared to standard ThinkPad models, and the haptic touchpad is smaller than ideal for extended use. The keyboard attaches magnetically to the lower half of the folded display but adds 1.38 pounds to the overall weight.

    Huawei includes a 5mm wireless aluminum keyboard with the MateBook Fold. This ultra-thin keyboard offers 1.5mm of key travel and a responsive touchpad. Weighing just 0.64 pounds, it adds minimal bulk to the package while providing a comfortable typing experience. The keyboard connects wirelessly and can be positioned flexibly, allowing users to create a more ergonomic workspace than the fixed position of Lenovo’s solution.
    Stylus support is available on both devices, with Lenovo offering the Precision Pen for note-taking and drawing. The X1 Fold’s pen attaches magnetically to the display, ensuring it remains available when needed. Huawei provides similar stylus functionality, though detailed specifications for its pen accessory are limited in current documentation.
    The most significant accessory difference is the kickstand implementation. Lenovo requires a separate adjustable-angle kickstand for desk use, adding another component to manage and transport. Huawei integrates the kickstand directly into the MateBook Fold, providing immediate stability without additional accessories. This integrated approach streamlines the user experience and reduces setup time when transitioning between usage modes.
    Virtual keyboard implementations provide another input option when physical keyboards are impractical. Both devices can display touch keyboards on the lower portion of the folded screen, creating a laptop-like experience without additional hardware. Lenovo’s implementation relies on Windows 11’s touch keyboard, which offers reasonable accuracy but lacks haptic feedback. Huawei’s virtual keyboard is deeply integrated with HarmonyOS, providing customizable layouts and adaptive suggestions based on user behavior. Neither virtual keyboard fully replaces a physical keyboard for extended typing sessions, but both provide convenient input options for quick tasks.
    The accessory ecosystem extends beyond keyboards and styluses. Lenovo leverages the ThinkPad’s business heritage with a range of compatible docks, cases, and adapters designed for professional use. Huawei focuses on cross-device accessories that work across its product line, creating a cohesive ecosystem for users invested in multiple Huawei products. This difference reflects the broader positioning of each brand, with Lenovo targeting enterprise customers and Huawei pursuing ecosystem-driven consumer experiences.
    Connectivity and Expansion Options
    Connectivity options reflect the different priorities of these manufacturers. The Lenovo ThinkPad X1 Fold includes two Thunderbolt 4 ports and one USB-C 3.2 Gen 2 port, providing versatile connectivity for peripherals and external displays. The device supports Wi-Fi 6E and Bluetooth 5.2, with optional LTE/5G connectivity for truly mobile productivity. This cellular option represents a significant advantage for professionals who need reliable internet access regardless of Wi-Fi availability.
    The Huawei MateBook Fold offers two USB-C ports, Wi-Fi 6, and Bluetooth 5.2. The device does not include cellular connectivity options, limiting its independence from Wi-Fi networks. The reduced port selection compared to the X1 Fold may require additional adapters for users with multiple peripherals or specialized equipment.

    Audio capabilities favor the MateBook Fold, which includes six speakers compared to the X1 Fold’s three. Both devices feature four-array microphones for clear voice capture during video conferences. Camera quality is superior on the MateBook Fold, with an 8MP sensor versus the 5MP camera on the X1 Fold. These differences impact the multimedia experience, particularly for users who frequently participate in video calls or consume media content.
    External display support varies between the devices. Lenovo’s Thunderbolt 4 ports enable connection to multiple high-resolution monitors, supporting sophisticated desktop setups when needed. Huawei’s USB-C ports provide display output capabilities, but with potentially fewer options for multi-monitor configurations. For professionals who regularly connect to external displays, projectors, or specialized peripherals, these connectivity differences could significantly impact workflow efficiency.
    Wireless connectivity standards influence performance in different environments. The X1 Fold’s Wi-Fi 6E support provides access to the less congested 6GHz band, potentially delivering faster and more reliable connections in crowded wireless environments. The MateBook Fold’s Wi-Fi 6 implementation is still capable but lacks access to these additional frequency bands. For users in dense office environments or congested urban areas, this difference could affect day-to-day connectivity performance.
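    The practical benefit of Wi-Fi 6E is extra spectrum. The small sketch below illustrates roughly how many wide channels that spectrum can hold; the figures are illustrative, since regulatory allocations vary by country and guard spacing trims the real channel count slightly.

        # Illustrative only: regulators allocate up to ~1,200 MHz in the 6 GHz band
        # for Wi-Fi 6E; exact allocation varies by region, and guard spacing
        # reduces the usable channel count slightly.
        added_spectrum_mhz = 1200
        for channel_width in (20, 80, 160):
            print(f"~{added_spectrum_mhz // channel_width} additional {channel_width} MHz channels")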
    Future expansion capabilities depend largely on the port selection and standards support. Thunderbolt 4 provides the X1 Fold with a forward-looking connectivity standard that supports a wide range of current and upcoming peripherals. The MateBook Fold’s standard USB-C implementation offers good compatibility but lacks some of the advanced features and bandwidth of Thunderbolt. This distinction may become more relevant as users add peripherals and accessories over the device’s lifespan.
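    The bandwidth gap is easiest to see as a transfer-time comparison. The sketch below pits the X1 Fold's 40Gbps Thunderbolt 4 link against an assumed 10Gbps USB 3.2 Gen 2 connection for the MateBook Fold (Huawei does not specify its USB-C speeds), at line rate with no protocol overhead.

        # Idealized line-rate comparison; real-world throughput is lower due to
        # protocol overhead and drive limits. The MateBook Fold's USB speed is assumed.
        links_gbps = {"Thunderbolt 4": 40, "USB 3.2 Gen 2 (assumed)": 10}
        file_gb = 20  # gigabytes to transfer
        for name, gbps in links_gbps.items():
            seconds = file_gb * 8 / gbps
            print(f"{name}: ~{seconds:.0f} s for a {file_gb} GB transfer")  # ~4 s vs ~16 s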
    Price, Availability, and Value Proposition
    The value equation for these devices involves balancing innovation, performance, and accessibility. The Lenovo ThinkPad X1 Fold starts at $2,499 for the base configuration with a Core i5 processor, 16GB of RAM, and 256GB of storage. Fully equipped models with Core i7 processors, 32GB of RAM, and 1TB of storage approach $3,900. These prices typically do not include the keyboard and kickstand accessories, which add approximately $250-300 to the total cost.

    The Huawei MateBook Fold Ultimate Design is priced between CNY 24,000 and 27,000 (approximately $3,300 to $3,700) depending on configuration. This pricing includes the wireless keyboard, making the total package cost comparable to a fully equipped X1 Fold with accessories. However, the MateBook Fold is currently available only in China, with no announced plans for international release. This limited availability significantly restricts its potential market impact outside of Asia.
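    The dollar figures above can be sanity-checked with a quick conversion. The sketch below assumes an exchange rate of roughly 7.2 CNY per US dollar; actual rates fluctuate, so treat the output as approximate.

        # Illustrative conversion only; exchange rates fluctuate.
        cny_per_usd = 7.2  # assumed rate
        low_cny, high_cny = 24_000, 27_000
        print(f"≈ ${low_cny / cny_per_usd:,.0f} to ${high_cny / cny_per_usd:,.0f}")  # ≈ $3,333 to $3,750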
    Global support and service represent another consideration. Lenovo maintains service centers worldwide, providing reliable support for business travelers and international organizations. Huawei’s support network is more limited outside of China, potentially creating challenges for users who experience hardware issues in regions without official service options.
    The target audience for each device influences its value proposition. The X1 Fold appeals to business professionals who prioritize Windows compatibility, global support, and integration with existing enterprise systems. Its ThinkPad branding carries significant weight in corporate environments, where reliability and security take precedence over cutting-edge specifications. The MateBook Fold targets technology enthusiasts and creative professionals who value display quality, design innovation, and ecosystem integration. Its limited availability and HarmonyOS platform make it less suitable for mainstream business adoption but potentially more appealing to users seeking the absolute latest in hardware engineering.
    Financing options and business leasing programs further differentiate these devices in the market. Lenovo offers established enterprise leasing programs that allow organizations to deploy the X1 Fold without significant upfront capital expenditure. These programs typically include service agreements and upgrade paths that align with corporate refresh cycles. Huawei’s business services are less developed outside of China, potentially limiting financing options for international customers interested in the MateBook Fold.
    Conclusion: The Future of Foldable Computing
    The Lenovo ThinkPad X1 Fold 2024 and Huawei MateBook Fold Ultimate Design represent two distinct visions for the future of foldable computing. Lenovo prioritizes durability, Windows compatibility, and global accessibility, creating a device that fits seamlessly into existing business environments. Huawei pushes the boundaries of hardware engineering, delivering a thinner, lighter device with a larger display and custom operating system optimized for the foldable form factor.

    For business users who require Windows compatibility and global support, the X1 Fold remains the more practical choice despite its thicker profile and aging processors. Its proven durability and enterprise-friendly features make it a safer investment for organizations deploying foldable technology. The device excels in versatility, allowing users to switch between tablet, laptop, and desktop modes with minimal compromise.
    Creative professionals and early adopters who prioritize display quality and cutting-edge design may find the MateBook Fold more appealing, provided they can access it in their region and adapt to HarmonyOS. The larger, brighter display and thinner profile create a more futuristic experience, though the limited software ecosystem and regional availability present significant barriers to widespread adoption.
    Looking forward, both devices point toward necessary improvements in the next generation of foldable computers. Future models should incorporate the latest processors with AI acceleration, reduce weight without sacrificing durability, integrate kickstands directly into the chassis, and provide larger, more comfortable keyboards. Display technology should continue to advance, with higher refresh rates, improved crease durability, and enhanced power efficiency. Software must evolve to better support the unique capabilities of foldable hardware, with more intuitive mode switching and optimized multitasking.

    The competition between Lenovo and Huawei benefits consumers by accelerating innovation and highlighting different approaches to solving the challenges of foldable computing. As these technologies mature and prices eventually decrease, foldable devices will transition from executive status symbols to practical tools for a broader range of users. The X1 Fold and MateBook Fold represent important steps in this evolution, each contributing valuable lessons that will shape the next generation of flexible computing devices.
    The ideal foldable device would combine Huawei’s hardware innovations with Lenovo’s software compatibility and global support. It would feature the thinness and display quality of the MateBook Fold, the enterprise security and connectivity options of the X1 Fold, and an operating system that seamlessly adapts to different usage modes. While neither current device achieves this perfect balance, both demonstrate remarkable engineering achievements that push the boundaries of what portable computers can be.

    As we look to the future, the success of foldable computing will depend not just on hardware specifications but on the development of software experiences that truly leverage the unique capabilities of these flexible displays. The device that ultimately dominates this category will be the one that most effectively bridges the gap between technical innovation and practical utility, creating experiences that simply aren’t possible on conventional laptops or tablets. Both Lenovo and Huawei have taken significant steps toward this goal, and their ongoing competition promises to accelerate progress toward truly transformative foldable computers.
    #folding #future #lenovo #thinkpad #fold
    Folding the Future: Lenovo ThinkPad X1 Fold 2024 vs. Huawei MateBook Fold Ultimate Design
    Why revisit the Lenovo ThinkPad X1 Fold in 2025? The answer lies in the rapid evolution of foldable computing. When Lenovo introduced its second-generation foldable PC last year, it represented the pinnacle of what was possible in this emerging category. The device combined a versatile 16.3-inch OLED display with robust engineering and the familiar Windows ecosystem. It set benchmarks for build quality, display technology, and adaptability that competitors would need to surpass. Designer: Lenovo Designer: Huawei Fast forward to today, and the landscape has shifted dramatically. Huawei has unveiled its MateBook Fold Ultimate Design, a device that challenges our understanding of what foldable laptops can achieve. With an 18-inch display that folds to a 13-inch form factor, a chassis measuring just 7.3mm when open, and a proprietary operating system built specifically for foldable hardware, Huawei has raised the stakes considerably. This comparison arrives at a pivotal moment for foldable computing. The category has matured beyond proof-of-concept to deliver genuinely useful productivity tools. Now that we have seen what Lenovo accomplished with the X1 Fold 2024, let us examine how Huawei’s MateBook Fold Ultimate Design responds and potentially redefines the future of portable computing. Design Philosophy and Physical Presence The Lenovo ThinkPad X1 Fold 2024 embodies the ThinkPad ethos of reliability and purposeful design. Its magnesium alloy frame and recycled PET woven fabric cover create a device that feels substantial and durable. The fold-flat hinge eliminates gaps when closed, protecting the display while maintaining a clean profile. At 8.6mm when open and 17.4mm when closed, the X1 Fold is not the thinnest laptop available, but its construction inspires confidence. The device weighs approximately 2.9 pounds without accessories, increasing to 4.3 pounds with the keyboard and stand attached. This weight reflects Lenovo’s prioritization of durability over absolute portability. Huawei takes a dramatically different approach with the MateBook Fold Ultimate Design. The device measures an astonishing 7.3mm when open and 14.9mm when closed, making it significantly thinner than the X1 Fold. At just 1.16kgfor the base unit and 1.45kg with the keyboard, the MateBook Fold is remarkably light for a device with an 18-inch display. This achievement comes from Huawei’s use of carbon fiber reinforcement and a zirconium-based liquid metal hinge. The 285mm “water-drop” hinge design provides smooth folding action and increased durability, with Huawei claiming a 400% improvement in hovering torque compared to conventional designs. The most significant physical difference between these devices becomes apparent in their approach to accessories. Lenovo requires a separate kickstand for desk use, adding bulk and complexity to the overall package. Huawei integrates a sturdy kickstand directly into the MateBook Fold, eliminating the need for additional accessories and streamlining the user experience. This built-in solution allows for more versatile positioning and reduces the number of components users need to manage. Both devices transform between multiple modes, but their physical dimensions create distinct experiences. When folded, the X1 Fold becomes a 12-inch laptop, which many users find cramped for serious multitasking. The MateBook Fold offers a more generous 13-inch workspace in laptop mode, providing additional screen real estate for productivity tasks. 
This difference may seem small on paper, but it significantly impacts the practical usability of these devices in their folded configurations. The materials chosen for each device reveal different priorities. Lenovo emphasizes sustainability with its recycled PET fabric cover and plastic-free packaging. This approach aligns with growing corporate environmental concerns and provides a tactile warmth that distinguishes the X1 Fold from typical metal-clad laptops. Huawei focuses on premium materials that enable extreme thinness, using advanced alloys and composites throughout the chassis. Both approaches result in distinctive aesthetics that will appeal to different user preferences. Display Technology and Visual Experience Display technology represents the heart of any foldable device, and both manufacturers have made significant investments in this critical component. The Lenovo ThinkPad X1 Fold features a 16.3-inch OLED panel with a resolution of 2560 x 2024 and a 4:3 aspect ratio. This display delivers 400 nits of brightness for standard content, increasing to 600 nits for HDR material. The panel supports DisplayHDR True Black 600 certification and Dolby Vision, covering 100% of the DCI-P3 color gamut. An anti-smudge coating helps maintain visual clarity during extended use. Huawei pushes display technology further with the MateBook Fold Ultimate Design. Its 18-inch LTPO OLED screen boasts a resolution of 3296 x 2472, maintaining the same 4:3 aspect ratio as the Lenovo. However, the MateBook Fold achieves a peak brightness of 1600 nits, more than double that of the X1 Fold. The dual-layer LTPO technology reduces power consumption by 30% compared to standard OLED panels while supporting adaptive refresh rates from 1Hz to 120Hz. This combination of size, brightness, and efficiency creates a visual experience that surpasses the X1 Fold in nearly every measurable aspect. Both displays exhibit a visible crease at the fold, though the severity varies. Lenovo’s hinge design minimizes the crease when the device is fully open, but it becomes more noticeable at certain viewing angles. Huawei claims its water-drop hinge reduces crease visibility, though independent verification is limited. In practical use, both creases become less distracting over time as users adapt to the form factor. Color accuracy and visual impact favor the MateBook Fold, with its higher brightness and contrast ratio of 2,000,000:1 creating more vibrant images and videos. The X1 Fold delivers excellent color reproduction but cannot match the visual punch of Huawei’s display. For creative professionals and media consumers, this difference could be decisive when choosing between these devices. The touch response and pen input capabilities of both displays deserve consideration. Lenovo’s display works seamlessly with the Precision Pen, offering pressure sensitivity that makes note-taking and sketching feel natural. The anti-smudge coating balances fingerprint resistance with smooth touch response. Huawei provides similar functionality, though detailed specifications about pressure sensitivity levels and palm rejection capabilities are not yet widely available. Both devices support multi-touch gestures for navigation and manipulation of on-screen elements. The 4:3 aspect ratio on both devices proves ideal for productivity applications, providing more vertical space than typical 16:9 laptop displays. This ratio works particularly well for document editing, web browsing, and coding. 
When watching widescreen video content, both devices display black bars at the top and bottom, but the overall screen size still delivers an immersive viewing experience, especially on the larger MateBook Fold. Performance and Hardware Capabilities The performance profiles of these devices reflect their different design philosophies. Lenovo equips the ThinkPad X1 Fold with 12th Generation Intel processors, ranging from the Core i5-1230U to the Core i7-1260U vPro. These 10-core, 12-thread chips provide adequate performance for productivity tasks but represent previous-generation technology in 2025. The X1 Fold supports up to 32GB of LPDDR5 RAM and 1TB of PCIe Gen 4 SSD storage. Intel Iris Xe integrated graphics handle visual processing, delivering sufficient power for office applications but struggling with demanding creative workloads. Huawei takes a different approach with its Kirin X90 ARM-based chipset. This custom silicon is specifically optimized for HarmonyOS and the foldable form factor. The MateBook Fold includes 32GB of RAM and offers storage options up to 2TB. While direct performance comparisons are difficult due to the different architectures, the Kirin X90 delivers responsive performance for HarmonyOS applications and benefits from tight hardware-software integration. Thermal management represents another point of divergence. Lenovo employs a fanless design in the X1 Fold, prioritizing silent operation over sustained performance. This approach leads to thermal throttling during extended workloads, limiting the device’s capabilities for processor-intensive tasks. Huawei incorporates a vapor chamber cooling system with diamond aluminum dual fans in the MateBook Fold, enabling 28W sustained performance without excessive heat or noise. This advanced cooling solution allows the MateBook Fold to maintain peak performance during demanding tasks, despite its thinner profile. Battery life reflects both hardware choices and software optimization. The X1 Fold includes a dual-battery design totaling 64Wh, delivering approximately 8 hours and 51 minutes in laptop mode and 7 hours and 27 minutes in tablet mode under real-world conditions. The MateBook Fold features a larger 74.69Wh battery, and its LTPO display technology reduces power consumption significantly. While independent verification of Huawei’s “all-day” battery claims is not yet available, the combination of a larger battery and more efficient display technology suggests the MateBook Fold should offer superior battery life in comparable usage scenarios. The storage subsystems in both devices utilize high-speed solid-state technology, but with different implementations. Lenovo’s PCIe Gen 4 SSD delivers sequential read speeds up to 5,000MB/s, providing quick access to large files and rapid application loading. Huawei has not published detailed storage performance metrics, but contemporary flagship devices typically feature similar high-performance storage solutions. Both devices offer sufficient storage capacity for professional workloads, with options ranging from 256GB to 2TB depending on configuration. Memory configurations play a crucial role in multitasking performance. Both devices offer 32GB in their top configurations, which provides ample headroom for demanding productivity workflows. Neither device allows for user-upgradable memory, as both use soldered RAM to maintain their slim profiles. This limitation means buyers must carefully consider their memory needs at purchase, as future upgrades are not possible. 
Operating Systems and Software Experience The most fundamental difference between these devices lies in their operating systems. The Lenovo ThinkPad X1 Fold runs Windows 11 Pro, providing access to the vast Windows software ecosystem and familiar productivity tools. Windows offers broad compatibility with business applications and enterprise management systems, making the X1 Fold a natural choice for corporate environments. However, Windows 11 still struggles with optimization for foldable form factors. Mode switching can be inconsistent, and the operating system sometimes fails to properly scale applications when transitioning between configurations. Huawei’s MateBook Fold runs HarmonyOS 5, a proprietary operating system designed specifically for the company’s ecosystem of devices. HarmonyOS offers several advantages for foldable hardware, including faster boot times, more efficient resource management, and seamless integration with other Huawei products. The operating system includes AI-powered features like document summarization, real-time translation, and context-aware suggestions through the Xiaoyi assistant. HarmonyOS also enables advanced multi-device collaboration, allowing users to transfer running apps between Huawei phones, tablets, and the MateBook Fold without interruption. The software ecosystem represents a significant consideration for potential buyers. Windows provides access to millions of applications, including industry-standard productivity, creative, and development tools. HarmonyOS currently offers over 1,000 optimized applications, with projections for 2,000+ by the end of 2025. While this number is growing rapidly, it remains a fraction of what Windows provides. Additionally, HarmonyOS and its app ecosystem are primarily focused on the Chinese market, limiting its appeal for international users. Security features differ between the platforms as well. Lenovo includes its ThinkShield security suite, Windows Hello facial recognition, and optional Computer Vision human-presence detection for privacy and security. Huawei implements its StarShield architecture, which provides security at the kernel level and throughout the operating system stack. Both approaches offer robust protection, but organizations with established Windows security protocols may prefer Lenovo’s more familiar implementation. The multitasking capabilities of each operating system deserve special attention for foldable devices. Windows 11 includes Snap Layouts and multiple virtual desktops, which work well on the X1 Fold’s large unfolded display. However, the interface can become cluttered in laptop mode due to the reduced screen size. HarmonyOS 5 features a multitasking system specifically designed for foldable displays, with intuitive gestures for splitting the screen, floating windows, and quick app switching. This optimization creates a more cohesive experience when transitioning between different device configurations. Software updates and long-term support policies differ significantly between these platforms. Windows 11 receives regular security updates and feature enhancements from Microsoft, with a well-established support lifecycle. HarmonyOS is newer, with less predictable update patterns, though Huawei has committed to regular improvements. For business users planning multi-year deployments, Windows offers more certainty regarding future compatibility and security maintenance. 
Keyboard, Input, and Accessory Integration The keyboard experience significantly impacts productivity on foldable devices, and both manufacturers take different approaches to this challenge. Lenovo offers the ThinkPad Bluetooth TrackPoint Keyboard Folio as an optional accessory. This keyboard maintains the classic ThinkPad feel with good key travel and includes the iconic red TrackPoint nub. However, the keyboard feels cramped compared to standard ThinkPad models, and the haptic touchpad is smaller than ideal for extended use. The keyboard attaches magnetically to the lower half of the folded display but adds 1.38 pounds to the overall weight. Huawei includes a 5mm wireless aluminum keyboard with the MateBook Fold. This ultra-thin keyboard offers 1.5mm of key travel and a responsive touchpad. Weighing just 0.64 pounds, it adds minimal bulk to the package while providing a comfortable typing experience. The keyboard connects wirelessly and can be positioned flexibly, allowing users to create a more ergonomic workspace than the fixed position of Lenovo’s solution. Stylus support is available on both devices, with Lenovo offering the Precision Pen for note-taking and drawing. The X1 Fold’s pen attaches magnetically to the display, ensuring it remains available when needed. Huawei provides similar stylus functionality, though detailed specifications for its pen accessory are limited in current documentation. The most significant accessory difference is the kickstand implementation. Lenovo requires a separate adjustable-angle kickstand for desk use, adding another component to manage and transport. Huawei integrates the kickstand directly into the MateBook Fold, providing immediate stability without additional accessories. This integrated approach streamlines the user experience and reduces setup time when transitioning between usage modes. Virtual keyboard implementations provide another input option when physical keyboards are impractical. Both devices can display touch keyboards on the lower portion of the folded screen, creating a laptop-like experience without additional hardware. Lenovo’s implementation relies on Windows 11’s touch keyboard, which offers reasonable accuracy but lacks haptic feedback. Huawei’s virtual keyboard is deeply integrated with HarmonyOS, providing customizable layouts and adaptive suggestions based on user behavior. Neither virtual keyboard fully replaces a physical keyboard for extended typing sessions, but both provide convenient input options for quick tasks. The accessory ecosystem extends beyond keyboards and styluses. Lenovo leverages the ThinkPad’s business heritage with a range of compatible docks, cases, and adapters designed for professional use. Huawei focuses on cross-device accessories that work across its product line, creating a cohesive ecosystem for users invested in multiple Huawei products. This difference reflects the broader positioning of each brand, with Lenovo targeting enterprise customers and Huawei pursuing ecosystem-driven consumer experiences. Connectivity and Expansion Options Connectivity options reflect the different priorities of these manufacturers. The Lenovo ThinkPad X1 Fold includes two Thunderbolt 4 ports and one USB-C 3.2 Gen 2 port, providing versatile connectivity for peripherals and external displays. The device supports Wi-Fi 6E and Bluetooth 5.2, with optional LTE/5G connectivity for truly mobile productivity. 
This cellular option represents a significant advantage for professionals who need reliable internet access regardless of Wi-Fi availability. The Huawei MateBook Fold offers two USB-C ports, Wi-Fi 6, and Bluetooth 5.2. The device does not include cellular connectivity options, limiting its independence from Wi-Fi networks. The reduced port selection compared to the X1 Fold may require additional adapters for users with multiple peripherals or specialized equipment. Audio capabilities favor the MateBook Fold, which includes six speakers compared to the X1 Fold’s three. Both devices feature four-array microphones for clear voice capture during video conferences. Camera quality is superior on the MateBook Fold, with an 8MP sensor versus the 5MP camera on the X1 Fold. These differences impact the multimedia experience, particularly for users who frequently participate in video calls or consume media content. External display support varies between the devices. Lenovo’s Thunderbolt 4 ports enable connection to multiple high-resolution monitors, supporting sophisticated desktop setups when needed. Huawei’s USB-C ports provide display output capabilities, but with potentially fewer options for multi-monitor configurations. For professionals who regularly connect to external displays, projectors, or specialized peripherals, these connectivity differences could significantly impact workflow efficiency. Wireless connectivity standards influence performance in different environments. The X1 Fold’s Wi-Fi 6E support provides access to the less congested 6GHz band, potentially delivering faster and more reliable connections in crowded wireless environments. The MateBook Fold’s Wi-Fi 6 implementation is still capable but lacks access to these additional frequency bands. For users in dense office environments or congested urban areas, this difference could affect day-to-day connectivity performance. Future expansion capabilities depend largely on the port selection and standards support. Thunderbolt 4 provides the X1 Fold with a forward-looking connectivity standard that supports a wide range of current and upcoming peripherals. The MateBook Fold’s standard USB-C implementation offers good compatibility but lacks some of the advanced features and bandwidth of Thunderbolt. This distinction may become more relevant as users add peripherals and accessories over the device’s lifespan. Price, Availability, and Value Proposition The value equation for these devices involves balancing innovation, performance, and accessibility. The Lenovo ThinkPad X1 Fold starts at for the base configuration with a Core i5 processor, 16GB of RAM, and 256GB of storage. Fully equipped models with Core i7 processors, 32GB of RAM, and 1TB of storage approach These prices typically do not include the keyboard and kickstand accessories, which add approximately -300 to the total cost. The Huawei MateBook Fold Ultimate Design is priced between CNY 24,000 and 27,000depending on configuration. This pricing includes the wireless keyboard, making the total package cost comparable to a fully equipped X1 Fold with accessories. However, the MateBook Fold is currently available only in China, with no announced plans for international release. This limited availability significantly restricts its potential market impact outside of Asia. Global support and service represent another consideration. Lenovo maintains service centers worldwide, providing reliable support for business travelers and international organizations. 
Huawei’s support network is more limited outside of China, potentially creating challenges for users who experience hardware issues in regions without official service options. The target audience for each device influences its value proposition. The X1 Fold appeals to business professionals who prioritize Windows compatibility, global support, and integration with existing enterprise systems. Its ThinkPad branding carries significant weight in corporate environments, where reliability and security take precedence over cutting-edge specifications. The MateBook Fold targets technology enthusiasts and creative professionals who value display quality, design innovation, and ecosystem integration. Its limited availability and HarmonyOS platform make it less suitable for mainstream business adoption but potentially more appealing to users seeking the absolute latest in hardware engineering. Financing options and business leasing programs further differentiate these devices in the market. Lenovo offers established enterprise leasing programs that allow organizations to deploy the X1 Fold without significant upfront capital expenditure. These programs typically include service agreements and upgrade paths that align with corporate refresh cycles. Huawei’s business services are less developed outside of China, potentially limiting financing options for international customers interested in the MateBook Fold. Conclusion: The Future of Foldable Computing The Lenovo ThinkPad X1 Fold 2024 and Huawei MateBook Fold Ultimate Design represent two distinct visions for the future of foldable computing. Lenovo prioritizes durability, Windows compatibility, and global accessibility, creating a device that fits seamlessly into existing business environments. Huawei pushes the boundaries of hardware engineering, delivering a thinner, lighter device with a larger display and custom operating system optimized for the foldable form factor. For business users who require Windows compatibility and global support, the X1 Fold remains the more practical choice despite its thicker profile and aging processors. Its proven durability and enterprise-friendly features make it a safer investment for organizations deploying foldable technology. The device excels in versatility, allowing users to switch between tablet, laptop, and desktop modes with minimal compromise. Creative professionals and early adopters who prioritize display quality and cutting-edge design may find the MateBook Fold more appealing, provided they can access it in their region and adapt to HarmonyOS. The larger, brighter display and thinner profile create a more futuristic experience, though the limited software ecosystem and regional availability present significant barriers to widespread adoption. Looking forward, both devices point toward necessary improvements in the next generation of foldable computers. Future models should incorporate the latest processors with AI acceleration, reduce weight without sacrificing durability, integrate kickstands directly into the chassis, and provide larger, more comfortable keyboards. Display technology should continue to advance, with higher refresh rates, improved crease durability, and enhanced power efficiency. Software must evolve to better support the unique capabilities of foldable hardware, with more intuitive mode switching and optimized multitasking. 
The competition between Lenovo and Huawei benefits consumers by accelerating innovation and highlighting different approaches to solving the challenges of foldable computing. As these technologies mature and prices eventually decrease, foldable devices will transition from executive status symbols to practical tools for a broader range of users. The X1 Fold and MateBook Fold represent important steps in this evolution, each contributing valuable lessons that will shape the next generation of flexible computing devices. The ideal foldable device would combine Huawei’s hardware innovations with Lenovo’s software compatibility and global support. It would feature the thinness and display quality of the MateBook Fold, the enterprise security and connectivity options of the X1 Fold, and an operating system that seamlessly adapts to different usage modes. While neither current device achieves this perfect balance, both demonstrate remarkable engineering achievements that push the boundaries of what portable computers can be. As we look to the future, the success of foldable computing will depend not just on hardware specifications but on the development of software experiences that truly leverage the unique capabilities of these flexible displays. The device that ultimately dominates this category will be the one that most effectively bridges the gap between technical innovation and practical utility, creating experiences that simply aren’t possible on conventional laptops or tablets. Both Lenovo and Huawei have taken significant steps toward this goal, and their ongoing competition promises to accelerate progress toward truly transformative foldable computers.The post Folding the Future: Lenovo ThinkPad X1 Fold 2024 vs. Huawei MateBook Fold Ultimate Design first appeared on Yanko Design. #folding #future #lenovo #thinkpad #fold
    WWW.YANKODESIGN.COM
    Folding the Future: Lenovo ThinkPad X1 Fold 2024 vs. Huawei MateBook Fold Ultimate Design
    Why revisit the Lenovo ThinkPad X1 Fold in 2025? The answer lies in the rapid evolution of foldable computing. When Lenovo introduced its second-generation foldable PC last year, it represented the pinnacle of what was possible in this emerging category. The device combined a versatile 16.3-inch OLED display with robust engineering and the familiar Windows ecosystem. It set benchmarks for build quality, display technology, and adaptability that competitors would need to surpass. Designer: Lenovo Designer: Huawei Fast forward to today, and the landscape has shifted dramatically. Huawei has unveiled its MateBook Fold Ultimate Design, a device that challenges our understanding of what foldable laptops can achieve. With an 18-inch display that folds to a 13-inch form factor, a chassis measuring just 7.3mm when open, and a proprietary operating system built specifically for foldable hardware, Huawei has raised the stakes considerably. This comparison arrives at a pivotal moment for foldable computing. The category has matured beyond proof-of-concept to deliver genuinely useful productivity tools. Now that we have seen what Lenovo accomplished with the X1 Fold 2024, let us examine how Huawei’s MateBook Fold Ultimate Design responds and potentially redefines the future of portable computing. Design Philosophy and Physical Presence The Lenovo ThinkPad X1 Fold 2024 embodies the ThinkPad ethos of reliability and purposeful design. Its magnesium alloy frame and recycled PET woven fabric cover create a device that feels substantial and durable. The fold-flat hinge eliminates gaps when closed, protecting the display while maintaining a clean profile. At 8.6mm when open and 17.4mm when closed, the X1 Fold is not the thinnest laptop available, but its construction inspires confidence. The device weighs approximately 2.9 pounds without accessories, increasing to 4.3 pounds with the keyboard and stand attached. This weight reflects Lenovo’s prioritization of durability over absolute portability. Huawei takes a dramatically different approach with the MateBook Fold Ultimate Design. The device measures an astonishing 7.3mm when open and 14.9mm when closed, making it significantly thinner than the X1 Fold. At just 1.16kg (2.56 pounds) for the base unit and 1.45kg with the keyboard, the MateBook Fold is remarkably light for a device with an 18-inch display. This achievement comes from Huawei’s use of carbon fiber reinforcement and a zirconium-based liquid metal hinge. The 285mm “water-drop” hinge design provides smooth folding action and increased durability, with Huawei claiming a 400% improvement in hovering torque compared to conventional designs. The most significant physical difference between these devices becomes apparent in their approach to accessories. Lenovo requires a separate kickstand for desk use, adding bulk and complexity to the overall package. Huawei integrates a sturdy kickstand directly into the MateBook Fold, eliminating the need for additional accessories and streamlining the user experience. This built-in solution allows for more versatile positioning and reduces the number of components users need to manage. Both devices transform between multiple modes, but their physical dimensions create distinct experiences. When folded, the X1 Fold becomes a 12-inch laptop, which many users find cramped for serious multitasking. The MateBook Fold offers a more generous 13-inch workspace in laptop mode, providing additional screen real estate for productivity tasks. 
This difference may seem small on paper, but it significantly impacts the practical usability of these devices in their folded configurations. The materials chosen for each device reveal different priorities. Lenovo emphasizes sustainability with its recycled PET fabric cover and plastic-free packaging. This approach aligns with growing corporate environmental concerns and provides a tactile warmth that distinguishes the X1 Fold from typical metal-clad laptops. Huawei focuses on premium materials that enable extreme thinness, using advanced alloys and composites throughout the chassis. Both approaches result in distinctive aesthetics that will appeal to different user preferences. Display Technology and Visual Experience Display technology represents the heart of any foldable device, and both manufacturers have made significant investments in this critical component. The Lenovo ThinkPad X1 Fold features a 16.3-inch OLED panel with a resolution of 2560 x 2024 and a 4:3 aspect ratio. This display delivers 400 nits of brightness for standard content, increasing to 600 nits for HDR material. The panel supports DisplayHDR True Black 600 certification and Dolby Vision, covering 100% of the DCI-P3 color gamut. An anti-smudge coating helps maintain visual clarity during extended use. Huawei pushes display technology further with the MateBook Fold Ultimate Design. Its 18-inch LTPO OLED screen boasts a resolution of 3296 x 2472, maintaining the same 4:3 aspect ratio as the Lenovo. However, the MateBook Fold achieves a peak brightness of 1600 nits, more than double that of the X1 Fold. The dual-layer LTPO technology reduces power consumption by 30% compared to standard OLED panels while supporting adaptive refresh rates from 1Hz to 120Hz. This combination of size, brightness, and efficiency creates a visual experience that surpasses the X1 Fold in nearly every measurable aspect. Both displays exhibit a visible crease at the fold, though the severity varies. Lenovo’s hinge design minimizes the crease when the device is fully open, but it becomes more noticeable at certain viewing angles. Huawei claims its water-drop hinge reduces crease visibility, though independent verification is limited. In practical use, both creases become less distracting over time as users adapt to the form factor. Color accuracy and visual impact favor the MateBook Fold, with its higher brightness and contrast ratio of 2,000,000:1 creating more vibrant images and videos. The X1 Fold delivers excellent color reproduction but cannot match the visual punch of Huawei’s display. For creative professionals and media consumers, this difference could be decisive when choosing between these devices. The touch response and pen input capabilities of both displays deserve consideration. Lenovo’s display works seamlessly with the Precision Pen, offering pressure sensitivity that makes note-taking and sketching feel natural. The anti-smudge coating balances fingerprint resistance with smooth touch response. Huawei provides similar functionality, though detailed specifications about pressure sensitivity levels and palm rejection capabilities are not yet widely available. Both devices support multi-touch gestures for navigation and manipulation of on-screen elements. The 4:3 aspect ratio on both devices proves ideal for productivity applications, providing more vertical space than typical 16:9 laptop displays. This ratio works particularly well for document editing, web browsing, and coding. 
When watching widescreen video content, both devices display black bars at the top and bottom, but the overall screen size still delivers an immersive viewing experience, especially on the larger MateBook Fold. Performance and Hardware Capabilities The performance profiles of these devices reflect their different design philosophies. Lenovo equips the ThinkPad X1 Fold with 12th Generation Intel processors, ranging from the Core i5-1230U to the Core i7-1260U vPro. These 10-core, 12-thread chips provide adequate performance for productivity tasks but represent previous-generation technology in 2025. The X1 Fold supports up to 32GB of LPDDR5 RAM and 1TB of PCIe Gen 4 SSD storage. Intel Iris Xe integrated graphics handle visual processing, delivering sufficient power for office applications but struggling with demanding creative workloads. Huawei takes a different approach with its Kirin X90 ARM-based chipset. This custom silicon is specifically optimized for HarmonyOS and the foldable form factor. The MateBook Fold includes 32GB of RAM and offers storage options up to 2TB. While direct performance comparisons are difficult due to the different architectures, the Kirin X90 delivers responsive performance for HarmonyOS applications and benefits from tight hardware-software integration. Thermal management represents another point of divergence. Lenovo employs a fanless design in the X1 Fold, prioritizing silent operation over sustained performance. This approach leads to thermal throttling during extended workloads, limiting the device’s capabilities for processor-intensive tasks. Huawei incorporates a vapor chamber cooling system with diamond aluminum dual fans in the MateBook Fold, enabling 28W sustained performance without excessive heat or noise. This advanced cooling solution allows the MateBook Fold to maintain peak performance during demanding tasks, despite its thinner profile. Battery life reflects both hardware choices and software optimization. The X1 Fold includes a dual-battery design totaling 64Wh, delivering approximately 8 hours and 51 minutes in laptop mode and 7 hours and 27 minutes in tablet mode under real-world conditions. The MateBook Fold features a larger 74.69Wh battery, and its LTPO display technology reduces power consumption significantly. While independent verification of Huawei’s “all-day” battery claims is not yet available, the combination of a larger battery and more efficient display technology suggests the MateBook Fold should offer superior battery life in comparable usage scenarios. The storage subsystems in both devices utilize high-speed solid-state technology, but with different implementations. Lenovo’s PCIe Gen 4 SSD delivers sequential read speeds up to 5,000MB/s, providing quick access to large files and rapid application loading. Huawei has not published detailed storage performance metrics, but contemporary flagship devices typically feature similar high-performance storage solutions. Both devices offer sufficient storage capacity for professional workloads, with options ranging from 256GB to 2TB depending on configuration. Memory configurations play a crucial role in multitasking performance. Both devices offer 32GB in their top configurations, which provides ample headroom for demanding productivity workflows. Neither device allows for user-upgradable memory, as both use soldered RAM to maintain their slim profiles. This limitation means buyers must carefully consider their memory needs at purchase, as future upgrades are not possible. 
Operating Systems and Software Experience The most fundamental difference between these devices lies in their operating systems. The Lenovo ThinkPad X1 Fold runs Windows 11 Pro, providing access to the vast Windows software ecosystem and familiar productivity tools. Windows offers broad compatibility with business applications and enterprise management systems, making the X1 Fold a natural choice for corporate environments. However, Windows 11 still struggles with optimization for foldable form factors. Mode switching can be inconsistent, and the operating system sometimes fails to properly scale applications when transitioning between configurations. Huawei’s MateBook Fold runs HarmonyOS 5, a proprietary operating system designed specifically for the company’s ecosystem of devices. HarmonyOS offers several advantages for foldable hardware, including faster boot times, more efficient resource management, and seamless integration with other Huawei products. The operating system includes AI-powered features like document summarization, real-time translation, and context-aware suggestions through the Xiaoyi assistant. HarmonyOS also enables advanced multi-device collaboration, allowing users to transfer running apps between Huawei phones, tablets, and the MateBook Fold without interruption. The software ecosystem represents a significant consideration for potential buyers. Windows provides access to millions of applications, including industry-standard productivity, creative, and development tools. HarmonyOS currently offers over 1,000 optimized applications, with projections for 2,000+ by the end of 2025. While this number is growing rapidly, it remains a fraction of what Windows provides. Additionally, HarmonyOS and its app ecosystem are primarily focused on the Chinese market, limiting its appeal for international users. Security features differ between the platforms as well. Lenovo includes its ThinkShield security suite, Windows Hello facial recognition, and optional Computer Vision human-presence detection for privacy and security. Huawei implements its StarShield architecture, which provides security at the kernel level and throughout the operating system stack. Both approaches offer robust protection, but organizations with established Windows security protocols may prefer Lenovo’s more familiar implementation. The multitasking capabilities of each operating system deserve special attention for foldable devices. Windows 11 includes Snap Layouts and multiple virtual desktops, which work well on the X1 Fold’s large unfolded display. However, the interface can become cluttered in laptop mode due to the reduced screen size. HarmonyOS 5 features a multitasking system specifically designed for foldable displays, with intuitive gestures for splitting the screen, floating windows, and quick app switching. This optimization creates a more cohesive experience when transitioning between different device configurations. Software updates and long-term support policies differ significantly between these platforms. Windows 11 receives regular security updates and feature enhancements from Microsoft, with a well-established support lifecycle. HarmonyOS is newer, with less predictable update patterns, though Huawei has committed to regular improvements. For business users planning multi-year deployments, Windows offers more certainty regarding future compatibility and security maintenance. 
Keyboard, Input, and Accessory Integration The keyboard experience significantly impacts productivity on foldable devices, and both manufacturers take different approaches to this challenge. Lenovo offers the ThinkPad Bluetooth TrackPoint Keyboard Folio as an optional accessory. This keyboard maintains the classic ThinkPad feel with good key travel and includes the iconic red TrackPoint nub. However, the keyboard feels cramped compared to standard ThinkPad models, and the haptic touchpad is smaller than ideal for extended use. The keyboard attaches magnetically to the lower half of the folded display but adds 1.38 pounds to the overall weight. Huawei includes a 5mm wireless aluminum keyboard with the MateBook Fold. This ultra-thin keyboard offers 1.5mm of key travel and a responsive touchpad. Weighing just 0.64 pounds, it adds minimal bulk to the package while providing a comfortable typing experience. The keyboard connects wirelessly and can be positioned flexibly, allowing users to create a more ergonomic workspace than the fixed position of Lenovo’s solution. Stylus support is available on both devices, with Lenovo offering the Precision Pen for note-taking and drawing. The X1 Fold’s pen attaches magnetically to the display, ensuring it remains available when needed. Huawei provides similar stylus functionality, though detailed specifications for its pen accessory are limited in current documentation. The most significant accessory difference is the kickstand implementation. Lenovo requires a separate adjustable-angle kickstand for desk use, adding another component to manage and transport. Huawei integrates the kickstand directly into the MateBook Fold, providing immediate stability without additional accessories. This integrated approach streamlines the user experience and reduces setup time when transitioning between usage modes. Virtual keyboard implementations provide another input option when physical keyboards are impractical. Both devices can display touch keyboards on the lower portion of the folded screen, creating a laptop-like experience without additional hardware. Lenovo’s implementation relies on Windows 11’s touch keyboard, which offers reasonable accuracy but lacks haptic feedback. Huawei’s virtual keyboard is deeply integrated with HarmonyOS, providing customizable layouts and adaptive suggestions based on user behavior. Neither virtual keyboard fully replaces a physical keyboard for extended typing sessions, but both provide convenient input options for quick tasks. The accessory ecosystem extends beyond keyboards and styluses. Lenovo leverages the ThinkPad’s business heritage with a range of compatible docks, cases, and adapters designed for professional use. Huawei focuses on cross-device accessories that work across its product line, creating a cohesive ecosystem for users invested in multiple Huawei products. This difference reflects the broader positioning of each brand, with Lenovo targeting enterprise customers and Huawei pursuing ecosystem-driven consumer experiences. Connectivity and Expansion Options Connectivity options reflect the different priorities of these manufacturers. The Lenovo ThinkPad X1 Fold includes two Thunderbolt 4 ports and one USB-C 3.2 Gen 2 port, providing versatile connectivity for peripherals and external displays. The device supports Wi-Fi 6E and Bluetooth 5.2, with optional LTE/5G connectivity for truly mobile productivity. 
This cellular option represents a significant advantage for professionals who need reliable internet access regardless of Wi-Fi availability. The Huawei MateBook Fold offers two USB-C ports, Wi-Fi 6, and Bluetooth 5.2. The device does not include cellular connectivity options, limiting its independence from Wi-Fi networks. The reduced port selection compared to the X1 Fold may require additional adapters for users with multiple peripherals or specialized equipment. Audio capabilities favor the MateBook Fold, which includes six speakers compared to the X1 Fold’s three. Both devices feature four-array microphones for clear voice capture during video conferences. Camera quality is superior on the MateBook Fold, with an 8MP sensor versus the 5MP camera on the X1 Fold. These differences impact the multimedia experience, particularly for users who frequently participate in video calls or consume media content. External display support varies between the devices. Lenovo’s Thunderbolt 4 ports enable connection to multiple high-resolution monitors, supporting sophisticated desktop setups when needed. Huawei’s USB-C ports provide display output capabilities, but with potentially fewer options for multi-monitor configurations. For professionals who regularly connect to external displays, projectors, or specialized peripherals, these connectivity differences could significantly impact workflow efficiency. Wireless connectivity standards influence performance in different environments. The X1 Fold’s Wi-Fi 6E support provides access to the less congested 6GHz band, potentially delivering faster and more reliable connections in crowded wireless environments. The MateBook Fold’s Wi-Fi 6 implementation is still capable but lacks access to these additional frequency bands. For users in dense office environments or congested urban areas, this difference could affect day-to-day connectivity performance. Future expansion capabilities depend largely on the port selection and standards support. Thunderbolt 4 provides the X1 Fold with a forward-looking connectivity standard that supports a wide range of current and upcoming peripherals. The MateBook Fold’s standard USB-C implementation offers good compatibility but lacks some of the advanced features and bandwidth of Thunderbolt. This distinction may become more relevant as users add peripherals and accessories over the device’s lifespan. Price, Availability, and Value Proposition The value equation for these devices involves balancing innovation, performance, and accessibility. The Lenovo ThinkPad X1 Fold starts at $2,499 for the base configuration with a Core i5 processor, 16GB of RAM, and 256GB of storage. Fully equipped models with Core i7 processors, 32GB of RAM, and 1TB of storage approach $3,900. These prices typically do not include the keyboard and kickstand accessories, which add approximately $250-300 to the total cost. The Huawei MateBook Fold Ultimate Design is priced between CNY 24,000 and 27,000 (approximately $3,300 to $3,700) depending on configuration. This pricing includes the wireless keyboard, making the total package cost comparable to a fully equipped X1 Fold with accessories. However, the MateBook Fold is currently available only in China, with no announced plans for international release. This limited availability significantly restricts its potential market impact outside of Asia. Global support and service represent another consideration. 
Global support and service represent another consideration. Lenovo maintains service centers worldwide, providing reliable support for business travelers and international organizations. Huawei’s support network is more limited outside of China, potentially creating challenges for users who experience hardware issues in regions without official service options.

The target audience for each device influences its value proposition. The X1 Fold appeals to business professionals who prioritize Windows compatibility, global support, and integration with existing enterprise systems. Its ThinkPad branding carries significant weight in corporate environments, where reliability and security take precedence over cutting-edge specifications. The MateBook Fold targets technology enthusiasts and creative professionals who value display quality, design innovation, and ecosystem integration. Its limited availability and HarmonyOS platform make it less suitable for mainstream business adoption but potentially more appealing to users seeking the absolute latest in hardware engineering.

Financing options and business leasing programs further differentiate these devices in the market. Lenovo offers established enterprise leasing programs that allow organizations to deploy the X1 Fold without significant upfront capital expenditure. These programs typically include service agreements and upgrade paths that align with corporate refresh cycles. Huawei’s business services are less developed outside of China, potentially limiting financing options for international customers interested in the MateBook Fold.

Conclusion: The Future of Foldable Computing

The Lenovo ThinkPad X1 Fold 2024 and Huawei MateBook Fold Ultimate Design represent two distinct visions for the future of foldable computing. Lenovo prioritizes durability, Windows compatibility, and global accessibility, creating a device that fits seamlessly into existing business environments. Huawei pushes the boundaries of hardware engineering, delivering a thinner, lighter device with a larger display and a custom operating system optimized for the foldable form factor.

For business users who require Windows compatibility and global support, the X1 Fold remains the more practical choice despite its thicker profile and aging processors. Its proven durability and enterprise-friendly features make it a safer investment for organizations deploying foldable technology. The device excels in versatility, allowing users to switch between tablet, laptop, and desktop modes with minimal compromise.

Creative professionals and early adopters who prioritize display quality and cutting-edge design may find the MateBook Fold more appealing, provided they can access it in their region and adapt to HarmonyOS. The larger, brighter display and thinner profile create a more futuristic experience, though the limited software ecosystem and regional availability present significant barriers to widespread adoption.

Looking forward, both devices point toward necessary improvements in the next generation of foldable computers. Future models should incorporate the latest processors with AI acceleration, reduce weight without sacrificing durability, integrate kickstands directly into the chassis, and provide larger, more comfortable keyboards. Display technology should continue to advance, with higher refresh rates, improved crease durability, and enhanced power efficiency. Software must evolve to better support the unique capabilities of foldable hardware, with more intuitive mode switching and optimized multitasking.
The competition between Lenovo and Huawei benefits consumers by accelerating innovation and highlighting different approaches to solving the challenges of foldable computing. As these technologies mature and prices eventually decrease, foldable devices will transition from executive status symbols to practical tools for a broader range of users. The X1 Fold and MateBook Fold represent important steps in this evolution, each contributing valuable lessons that will shape the next generation of flexible computing devices.

The ideal foldable device would combine Huawei’s hardware innovations with Lenovo’s software compatibility and global support. It would feature the thinness and display quality of the MateBook Fold, the enterprise security and connectivity options of the X1 Fold, and an operating system that seamlessly adapts to different usage modes. While neither current device achieves this perfect balance, both demonstrate remarkable engineering achievements that push the boundaries of what portable computers can be.

As we look to the future, the success of foldable computing will depend not just on hardware specifications but on the development of software experiences that truly leverage the unique capabilities of these flexible displays. The device that ultimately dominates this category will be the one that most effectively bridges the gap between technical innovation and practical utility, creating experiences that simply aren’t possible on conventional laptops or tablets. Both Lenovo and Huawei have taken significant steps toward this goal, and their ongoing competition promises to accelerate progress toward truly transformative foldable computers.

The post Folding the Future: Lenovo ThinkPad X1 Fold 2024 vs. Huawei MateBook Fold Ultimate Design first appeared on Yanko Design.
  • Nomad levels up its best-selling charger with new 100W slim adapter [Hands-on]

    One of my favorite product lines Nomad has made over the years is their slim power adapters. They started with their 35W slim charger, which brought fast iPhone charging in this ultra-slim form factor. Then they released their 65W version that added a secondary port and was tailored to customers needing more juice. Now, they have finally launched their 100W slim power adapter for those who need to charge their larger machines on the go. Here is what you need to know.

My experience

The first thing I noticed when I opened the packaging was how light this charger is. For comparison, a 14-inch MacBook Pro comes with a 96W power adapter that weighs 454g (about one pound). The Nomad charger is roughly half the weight, slimmer and more compact overall, has an additional USB-C port to charge a secondary device, and is cheaper.
There is a lot to like about this charger. First is the build quality, which feels well-engineered and soft to the touch with zero harsh corners or sides. Since it’s so thin, it’s designed to hug the power outlet it’s connected to, so you don’t have to worry about the charger falling out of the outlet, which happens with Apple chargers. Since it has dual USB-C, I like how they distinguished the fast charger with a blue port. To clarify, both ports can charge at the full 100W, but when two USB-C cables are connected, the blue port is the designated fast charger and the other one is the normal charger. This charger is perfect because I can fast-charge a MacBook Air and an iPad Pro simultaneously, so I never have to worry about slow charging speeds. And of course it’s more than fast enough to charge both the 14-inch and 16-inch MacBook Pro.
    From my experience, it has stayed cool to the touch throughout my testing, even when pushing the wattage. Overall, this is a great charger that has no real downsides. I wish a USB-C cable were included in the price, but in this product category, it’s rare that any brand does that. A man can dream. Regardless, this is now my main travel charger!

It’s incredible how much power they can fit in a charger that is roughly the size of a deck of cards. They were able to use GaN (gallium nitride) technology to make this possible, which allows the charger to reach its 100W maximum output without overheating or losing efficiency. You get:

    100W max output to be able to charge the 16-inch MacBook Pro sufficiently
Smart dual charging, with each port capable of up to 100W (roughly a 70/30 split when both are plugged in — see the sketch below)
Ultra slim, measuring just 19mm in thickness
Folding prongs
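Here is an illustrative sketch of how that dual-port behavior works out in practice; the 70/30 figure is the split Nomad lists for simultaneous charging, and the helper function below is purely hypothetical, not anything from Nomad’s firmware.

```python
# Illustrative sketch of the adapter's dual-port behavior (not Nomad firmware).
# One device gets the full 100W; with two devices connected, the blue "fast"
# port takes the larger share of the listed 70/30 split.

TOTAL_WATTS = 100

def port_allocation(devices_connected: int) -> dict[str, float]:
    """Approximate per-port wattage for one or two connected devices."""
    if devices_connected <= 1:
        return {"blue_port": float(TOTAL_WATTS), "second_port": 0.0}
    return {"blue_port": TOTAL_WATTS * 0.7, "second_port": TOTAL_WATTS * 0.3}

print(port_allocation(1))  # {'blue_port': 100.0, 'second_port': 0.0}
print(port_allocation(2))  # {'blue_port': 70.0, 'second_port': 30.0}
```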

This is officially my new travel charger, which I will bring with me everywhere. I can charge my iPad Pro and iPhone at fast charging speeds, no problem. I will also pair this with my Nomad universal cable to charge my Apple Watch while charging my iPad or iPhone. Bringing those two Nomad products gives me a super small, compact, and portable way to charge all three of my devices, and charge them fast. It’s amazing how far charging tech has come and where it’s going. I love that we can now charge a fully loaded MacBook Pro at PD speeds with a charger that can fit in your pocket.
    Pricing and availability

Nomad’s new 100W slim adapter is available today from their site for $75. As of now, it comes in just one color, carbide. They also have their 65W slim adapter for $65 if you do not need a full 100W, and they also have their 35W charger in the gorgeous white color for just $29. Let me know what you think. Is this a product you were looking for? Do you have devices that need this much power? Let’s discuss below.

  • Anker Solix F3800 Plus review: the popular power station gets some upgrades

    Anker Solix F3800 Plus

MSRP: $4,799.00

    “The Anker Solix F3800 Plus beats the competition in usability, versatility and value.”

    Pros

    Very portable

    Good looking

    Very strong warranty, lifetime customer service

    High-quality battery

    Expandable to 53.8kWh

    Cons

    Comes with a very short power cord

    EV charging is not practical

    Expensive for casual users

With the Anker Solix F3800 Plus, the company promises a more feature-rich, evolved power station that addresses many of the shortfalls of the first-generation F3800. I got a chance to put the old and new units side by side to see just how well Anker has listened to its fan base.


Last February, I was able to spend some time with the original Anker Solix F3800, which was then brand new to the market and, despite having some minor bugs, is still one of the most impressive power stations out there. Fortunately, in the year that I have had it, Anker has released a plethora of firmware updates to make sure things run smoothly without complications. You can read about my experience with the Solix F3800 over on our other site, The Manual. Since its introduction, the Solix F3800 has garnered quite a lot of fans, many of whom have asked Anker for new features and fixes. This is where the Solix F3800 Plus comes in. Not only has Anker listened to its customers by addressing some of the complaints, but it has added several new features which are sure to make new customers happy.
    How much does the Anker Solix F3800 Plus cost?
The Anker Solix F3800 Plus is priced at $4,799 MSRP, but you should be able to find it listed for less than that if you shop around. At the time of writing, for example, you can pick up the Solix F3800 Plus for about $3,499 directly from Anker, which is $1,300 off the regular MSRP.
    I was also able to find the F3800 Plus available at a number of online retailers, including Amazon – prices varied depending on what is packaged with it. When I reached out to Anker to ask about the discounts available, I was told that there are always discounts and specials running.
    What’s in the box of the Anker Solix F3800 Plus:
    Only a few things come packed with the F3800 Plus – the instruction and warranty pamphlets, the AC Charging cable for the unit itself, and two solar charging cables so you can connect some solar panels to the unit.
    I was bummed to see that the AC charging cable is considerably shorter than the one that came with the original F3800 unit. So, make sure that you have an AC outlet nearby, or be prepared to purchase a longer cord, separately.
    The original Anker Solix F3800 next to the F3800 Plus Ian Bell / Digital Trends
    Features and Design of the F3800 Plus
The F3800 Plus doesn’t look any different than the original F3800 at first glance. Their shapes are pretty much identical and they weigh about the same, but when you take a closer look at the connections on the side of the F3800 Plus, that’s where things change.
    Here is a list of key differences between the F3800 and the F3800 Plus I was able to keep track of:

The original F3800 accepts up to 2400W of solar input, whereas the F3800 Plus accepts 3200W of solar input. This means you can charge the batteries much quicker (see the sketch after this list).
You will need an adapter for the original F3800 if you want to charge your EV; the F3800 Plus has a port on the side where you can plug your EV in directly
The F3800 Plus is compatible with 240V gas generators
The F3800 Plus supports charging via generator or solar while simultaneously powering connected devices
    The original F3800 was not able to output AC power while charging with AC at the same time – this has been fixed with the Plus version
    Anker has a good comparison video on YouTube highlighting the key differences.
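To put that higher solar ceiling in perspective, here is a back-of-the-envelope calculation using the 3,840Wh base capacity from the spec sheet below; it ignores charge-controller losses and assumes the panels deliver their full rated input the whole time, so treat the results as best-case figures.

```python
# Best-case solar charging times for the 3,840Wh base unit; real-world times
# will be longer since this ignores charge losses and assumes the panels
# deliver their full rated input the entire time.

CAPACITY_WH = 3840

for label, solar_watts in [("F3800 (2,400W max solar input)", 2400),
                           ("F3800 Plus (3,200W max solar input)", 3200)]:
    print(f"{label}: ~{CAPACITY_WH / solar_watts:.1f} h from empty to full")

# F3800:      ~1.6 h
# F3800 Plus: ~1.2 h
```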

Anker Solix F3800 Plus specifications

Capacity: 51.2Vdc, 75Ah / 3,840Wh
Cell chemistry: LiFePO4
AC output (120V): 20A max, 60Hz, 2,400W max
AC output (120V/240V, NEMA L14-30R): 25A max, 60Hz, 6,000W max
USB-A output: 5V – 2.4A (12W max per port)
USB-C output: 5V – 3A / 9V – 3A / 15V – 3A / 20V – 3A / 20V – 5A (100W max per port)
EPS/UPS switchover: 20ms
Solar input: 11-165V, 17A max (1,600W max each, 2 solar inputs)
AC input: 120V~ 15A max (short bursts) / 12A max (continuous), 60Hz, L+N+PE
AC input power (charging): 1,800W max
AC input power (bypass mode): 1,440W max
Discharging temperature: -4°F to 104°F / -20°C to 40°C
Charging temperature: 32°F to 104°F / 0°C to 40°C
Connectivity: Wi-Fi, Bluetooth
Dimensions: 27.6 x 15.3 x 15.6 in / 70.2 x 38.8 x 39.5 cm
Weight: 136.7 lb
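A couple of the spec-table numbers follow directly from the listed current limits; this small check multiplies volts by amps to show where the 1,800W and 1,440W AC input figures come from, assuming a standard 120V circuit as listed above.

```python
# Where the AC input wattage figures come from: watts = volts x amps,
# assuming a standard 120V circuit as listed in the spec table.

VOLTS = 120
peak_amps = 15        # short-burst limit per the spec sheet
continuous_amps = 12  # continuous limit

print(f"Peak AC input (charging):     {VOLTS * peak_amps} W")        # 1800 W
print(f"Continuous AC input (bypass): {VOLTS * continuous_amps} W")  # 1440 W
```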

Anker Solix F3800 Plus Vs. Competition

Feature | Anker F3800 Plus | EcoFlow Delta Pro Ultra | Bluetti AC500 + B300S | Goal Zero Yeti 6000X | Jackery Explorer 3000 Pro
Battery capacity | 3.84 kWh (expandable to 26.9 kWh) | 3.6 kWh (expandable to 25 kWh) | 3.072 kWh per module (expandable to 18.4 kWh) | 6.071 kWh | 3.024 kWh
AC output (continuous) | 6,000W | 7,200W | 5,000W | 2,000W | 3,000W
AC output (surge) | 9,000W | 10,800W | 10,000W | 3,500W | 6,000W
Solar input capacity | 3,200W | 5,600W | 3,000W | 600W | 1,200W
Portability | Moderate (wheeled) | Low (heavier) | Moderate | Low (heavier) | High (wheeled)
Expandability | High | High | High | Limited | Limited

    Generator Charging
I noticed that there are a few channels on YouTube covering 240V generator recharging with the F3800 Plus. Now, while I do not have a gas generator, nor do I plan on getting one now that I have the F3800 Plus here at home, I do understand that if you have both a gas generator and the F3800 Plus, you will want to use one to charge the other. Make sure that you purchase the Anker Solix Generator Input Adapter so you can connect it to your gas generator first. Once connected to the 240V generator, you should be able to charge your unit at 3300W according to the manual, and at 6000W with an expansion battery attached. John from the YouTube channel Backyard Maine has a great video where he shares his experience connecting the F3800 Plus to his gas generator; I recommend checking it out if this is of interest to you.
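For a sense of scale, here is a rough calculation of generator recharge times using the manual’s quoted rates; it assumes the generator can sustain those rates, ignores charging losses, and treats each expansion battery as another 3,840Wh block (consistent with the expansion figures later in this review).

```python
# Rough generator recharge times from the rates quoted in the manual.
# Ignores charging losses; assumes each expansion battery adds another
# 3,840Wh (consistent with the expansion figures later in this review).

BASE_WH = 3840
EXPANSION_WH = 3840

print(f"Base unit at 3,300W:            ~{BASE_WH / 3300:.1f} h")
print(f"Base + one expansion at 6,000W: ~{(BASE_WH + EXPANSION_WH) / 6000:.1f} h")
# ~1.2 h and ~1.3 h respectively, before losses
```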
    Solar Charging the F3800 Plus
Anker did not send me any solar panels to test the F3800 Plus with, but once I get some, I will update this review to include my experience. The good news is that there are a lot of folks on YouTube who have connected solar panels; personally, I am a fan of Tommy Callaway’s Anker video.
For home use, the F3800 Plus supports 410W permanent solar panels, which you can purchase from Anker directly or from an aftermarket brand should you choose to. If you plan on taking the unit to the park or simply do not want to install the permanent panels, you can purchase some portable panels from Anker as well. The F3800 Plus supports a maximum 3200W charging input regardless of the panels’ portability.
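To see roughly how many of those 410W panels it takes to approach the 3,200W ceiling, a quick calculation follows; this is simple arithmetic on the rated figures, not an Anker wiring recommendation.

```python
# How many of the 410W panels it takes to approach the 3,200W input ceiling.
MAX_SOLAR_INPUT_W = 3200
PANEL_RATED_W = 410

panels = MAX_SOLAR_INPUT_W // PANEL_RATED_W
print(f"{panels} panels = {panels * PANEL_RATED_W} W of rated solar input")
# 7 panels = 2,870W of rated input; an 8th panel (3,280W rated) would exceed
# the 3,200W cap, which the charge controller would simply limit in practice.
```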
    Accessories
Anker offers a range of solar panels, from portable panels to more permanent fixed panels. There are also several adapters to choose from that allow you to connect the F3800 Plus to an EV, RV, or solar panels. The system truly is expandable and designed to meet a number of needs.
    Expansion Batteries

The Anker Solix F3800 Plus can increase its storage capacity up to 26,880Wh by connecting 6 Anker expansion batteries, or 12 if you are connecting a second F3800. The expansion batteries are about $2K each when on sale.

Anker F3800 and F3800 Plus connections compared. Ian Bell / Digital Trends
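The expandability figures quoted here and in the pros list line up if the base unit and each expansion battery are treated as 3,840Wh blocks; a quick check:

```python
# Checking the quoted expansion figures against a 3,840Wh per-unit capacity.
UNIT_WH = 3840

one_unit_six_packs = UNIT_WH * (1 + 6)        # one F3800 Plus + 6 expansion batteries
two_units_twelve_packs = UNIT_WH * (2 + 12)   # two units + 12 expansion batteries

print(f"Base + 6 expansions:       {one_unit_six_packs:,} Wh")       # 26,880 Wh
print(f"Two units + 12 expansions: {two_units_twelve_packs:,} Wh")   # 53,760 Wh (~53.8 kWh)
```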
    “Real World” Testing the Anker Solix F3800 Plus
I always find it amusing to watch videos or read articles where people are testing these power stations with power equipment in the garage, food blenders, or charging their EVs. It’s as if we all expect life to go on as normal during a power outage; at least, that’s how these companies want you to imagine things.
But like a lot of people, I am not expecting a power outage to last days or weeks. I simply want my food to stay fresh, the home to stay comfortable depending on the weather, and my phone and lights to stay charged. And that’s for home use.

If I am taking the F3800 Plus to the park or camping, then sure, I will use it to power items to cook with, keep the lights on, or play a sound system. That’s why I like the F3800 Plus compared to a lot of its competitors: it’s portable, with wheels and a built-in handle. The handle extends a little over a foot and is about the same length as you would get from carry-on luggage; it feels sturdy and didn’t make me worry about it breaking with prolonged use. For lifting the F3800, there is a smaller handle that flips out on the bottom so that you can lift the unit with two people. The unit itself weighs just over 132 lbs. and is too wide, in my opinion, for a single person to lift, so make sure you have some help with you. I transported the F3800 throughout the house, up some steps, and over some grass in the yard. The wheels worked well, and at no time did I feel like the unit was going to break – there were no clunking sounds or anything feeling loose.

How does the app work?
    The Anker app is simple to use. Once installed, you can connect to the F3800 through Bluetooth, and then you will want to connect the F3800 to your Wi-Fi system so you can monitor the unit remotely. I was able to spend some considerable time with the Bluetti AC500 and can tell you that the Anker F3800’s app is considerably easier to navigate and use. You can also add multiple Anker devices through their app which is nice for when and if you decide to expand on this system.
    The first thing to notice is that the display is very easy to read and the instruction manual does a great job explaining what the icons mean on the display. Buttons on the front of the F3800 have a nice tactile feel to them – they are not mushy or sticky when pressed. Controls are intuitive to use for the most part. I would recommend keeping the manual handy so you can quickly find out how to put the F3800 into EV charging mode, and which side of the AC outlets you should use if you want to use the UPS function. These two areas are not easy to find unless you have the manual.
To test the F3800 Plus at my home, I had it power a large freezer outside in the garage, charge some electric bikes, power a regular refrigerator, and charge some laptops. Here are the results for those items.

Danby 10 cu. ft. chest freezer in my garage: started May 16th at 7:45 PM and fully drained the Anker F3800 by May 18th at 1:58 PM – about 43 hours total. A little under 2 days of runtime with a 47W draw from the F3800 Plus
Frigidaire refrigerator: 20 hours until the unit completely drained at a 150W draw. You could maybe get a couple more hours if no one opened the fridge.
Standard laptop charges: between 30 and 60 charges if you average 50-100Wh per laptop; roughly triple that for phones
Charging an EV: with my Rivian R1S, I was able to get about 5 miles of charge before the F3800 Plus was fully drained. Online, some Tesla owners report getting about 11 miles of charge

If you were confident the power would come back on in a hurry, you would be able to keep a fridge or freezer running, charge a few laptops or phones, and maybe charge up an e-bike, and the F3800 Plus base unit would last about 17 hours before needing a recharge. With both the freezer and my fridge plugged in at the same time, I was able to get 7.34 hours before the unit was completely drained. Recharging the F3800 through a regular 120V outlet took 3.17 hours at 1645W.
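Those runtime and recharge figures are easy to sanity-check against the 3,840Wh rated capacity; the sketch below does that with the numbers reported above, and the gap between the ideal figures and the measured ones is only a rough proxy for inverter overhead, duty cycles, and how the average draw was estimated.

```python
# Sanity-checking the measured runtimes against the 3,840Wh rated capacity.
# The gaps between "ideal" and measured figures are a rough proxy for inverter
# overhead and how the average draw was estimated, not exact loss numbers.

CAPACITY_WH = 3840

freezer_ideal_h = CAPACITY_WH / 47    # ~82 h if every rated watt-hour reached a steady 47W load
fridge_ideal_h = CAPACITY_WH / 150    # ~26 h ideal for the 150W fridge

wall_energy_wh = 3.17 * 1645          # ~5,215 Wh drawn during the 120V recharge
implied_efficiency = CAPACITY_WH / wall_energy_wh  # ~0.74, if that recharge was from empty

print(f"Freezer: ~{freezer_ideal_h:.0f} h ideal vs ~43 h measured")
print(f"Fridge:  ~{fridge_ideal_h:.0f} h ideal vs ~20 h measured")
print(f"Recharge drew ~{wall_energy_wh:.0f} Wh for a {CAPACITY_WH} Wh pack "
      f"(~{implied_efficiency:.0%} implied wall-to-pack efficiency)")
```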
    Charging an EV with the F3800 is gimmicky in my opinion, but I just know that if Anker did not include this capability, some people will make a stink about it. Unless you are stuck in a zombie apocalypse and need that extra 5 miles of range to get somewhere else to charge your car, this feature just doesn’t make sense to me.
If power outages are common in your area, my recommendation is to buy the solar panels so you can charge the F3800 Plus if you are expecting outages that last multiple days. You will likely want to add a battery pack (or two) as well.

Anker Solix F3800 Plus app screenshots

    Can you connect the older, original F3800 to the F3800 Plus?
Yes, you can use the Anker Solix Double Power Hub ($299). This increases your maximum output to 12,000W – impressive! As mentioned above, if you have two units connected, you can expand with up to 12 expansion batteries total, giving you days of backup for your home.
The Anker Solix F3800 Plus can power your e-bike, EV and other devices. Ian Bell / Digital Trends
    How long will the Anker Solix F3800 Plus Last?
The Anker Solix F3800 Plus has an expected lifespan of 10 years or more because it uses EV-grade LiFePO4 batteries (lithium-ion cells that use lithium iron phosphate as the cathode), which are considered safer than other lithium-ion chemistries. Like a lot of companies, Anker releases regular firmware updates to address problems.

Product – Warranty Coverage

Anker Solix F3800 Plus – 5-year warranty with an expected lifespan of 10+ years using EV-grade LiFePO4 batteries, with lifetime customer support
EcoFlow Delta Pro Ultra – 5-year warranty covering both the inverter and battery components
Bluetti AC500 + B300S – 4-year warranty for both the AC500 power station and B300S battery module
Goal Zero Yeti 6000X – 2-year warranty standard for lithium-based Yeti products
Jackery Explorer 3000 Pro – 3-year standard warranty, extendable to 5 years upon product registration

    Should you buy it?
If your needs match mine, then I would recommend buying it. I like the design, portability, and the warranty. Would I power my entire home with this? I probably would not, but according to other reviews on the web and community forums, it certainly seems capable of doing the job if you go with the extra batteries and available accessories. For me, it’s a great base system that you can build from. I see it as an either/or choice: if I purchased the F3800 Plus and the necessary equipment with the intention of powering my entire home, I am not going to go through all the trouble of disconnecting it and taking it somewhere, so the portability aspect would be wasted on me.
    Because the F3800 Plus comes with wheels, an extension handle and a rear handle for portability, that’s what I would use it for. The Anker Solix F3800 Plus improves on the original F3800 in a lot of ways, and I like how Anker fixed a lot of the quirks I encountered from the original F3800. I spent a lot of time in the community forums learning a lot from F3800 owners, and it appears that the community as a whole has been happy with Anker’s support and firmware fixes. I plan on tracking my use as I get more hours with the F3800 and will see if Anker has plans to address any quirks that come up.
If you currently have an existing Solix F3800 and none of the new features make sense to you, then there is no reason to upgrade; Anker has consistently been providing firmware updates to improve the unit and fix any bugs associated with it. The good news is that you can add the F3800 Plus to your existing unit and build from there too.
If you are buying from scratch and want a versatile system that looks good, has a solid user interface, is a reliable performer, and offers plenty of accessories and options for building upon, then this is the power station for you. The Anker Solix F3800 Plus beats the competition in usability, versatility, and value. We have a pretty extensive list of power station reviews worth checking out if you think the Solix F3800 Plus isn’t right for you.
    #anker #solix #f3800 #plus #review
    Anker Solix F3800 Plus review: the popular power station gets some upgrades
    Anker Solix F3800 Plus MSRP Score Details “The Anker Solix F3800 Plus beats the competition in usability, versatility and value.” Pros Very portable Good looking Very strong warranty, lifetime customer service High-quality battery Expandable to 53.8kWh Cons Comes with a very short power cord EV charging is not practical Expensive for casual users With the Anker Solix F3800 Plus, the company promises people a more feature-rich, evolved power station that addresses many of the shortfalls from the 1st generation F3800. I got a chance to put the old and the new unit’s side-by side to see just how well Anker has listened to its fanbase. Recommended Videos Last February, I was able to spend some time on the original Anker Solix F3800, which was then brand new to the market and despite having some minor bugs, is still one of the most impressive power stations out there. Fortunately, in the year that I have had it, Anker has released a plethora of firmware updates to make sure things ran smoothly without complications. You can read about my experience with the Solix F3800 over on our other site, The Manual. Since it’s introduction, the Solix F3800 has garnered quite a lot of fans, a lot of which have asked Anker for some new features and fixes. This is where the Solix F3800 Plus comes in. Not only has Anker listened to its customers by addressing some of the complaints, but it has added several new features which are sure to make new customers happy. How much does the Anker Solix F3800 Plus cost? The Anker Solix F3800 Plus price is MSRP, but you should be able to find it listed for less than that if you shop around. At the time of writing for example, you can pick up the Solix F3800 Plus for about directly from Anker, which is off of their regular MSRP. I was also able to find the F3800 Plus available at a number of online retailers, including Amazon – prices varied depending on what is packaged with it. When I reached out to Anker to ask about the discounts available, I was told that there are always discounts and specials running. What’s in the box of the Anker Solix F3800 Plus: Only a few things come packed with the F3800 Plus – the instruction and warranty pamphlets, the AC Charging cable for the unit itself, and two solar charging cables so you can connect some solar panels to the unit. I was bummed to see that the AC charging cable is considerably shorter than the one that came with the original F3800 unit. So, make sure that you have an AC outlet nearby, or be prepared to purchase a longer cord, separately. The original Anker Solix F3800 next to the F3800 Plus Ian Bell / Digital Trends Features and Design of the F3800 Plus The F3800 Plus doesn’t look any different than the original F3800 at first glance. Their shapes are pretty much identical, they weigh about the same etc., but when you take a closer look at the connections on the side of the F3800 Plus, that’s where things change. Here is a list of key differences between the F3800 and the F3800 Plus I was able to keep track of: The original F3800 accepts up to 2400W of solar input whereas the F3800 Plus accepts 3200W of solar input. 
This means you can charge the batteries much quicker You will need an adapter for the original F3800 if you want to charge your EV, the F3800 Plus has a port on the side where you can plug your EV in directly The F3800 Plus is compatible with 240V gas generatorsThe F3800 Plus supports charging via generator or solar while simultaneously powering connected devices The original F3800 was not able to output AC power while charging with AC at the same time – this has been fixed with the Plus version Anker has a good comparison video on YouTube highlighting the key differences. Anker Solix F3800 Plus specifications Capacity 51.2Vdc 75Ah/3840Wh AC output AC Output 2 120V~ 20A Max, 60Hz, 2400W Max AC Output120V/240V~ 25A Max, 60Hz, 6000W Max USB-A Output 5V – 2.4AUSB-C Output 5V – 3A / 9V – 3A / 15V – 3A / 20V – 3A / 20V – 5ACell chemistry LiFePO4 Cell EPS/UPS UPS: 20ms Solar input 11-165V – 17A MaxSolar Inputs Environmental Operation Discharging Temperature -4°F-104°F / -20°C-40°C Charging Temperature 32°F-104°F / 0°C-40°C AC input AC Input 120V~ 15A Max/ 12A Max, 60Hz, L+N+PE AC Input Power1800W Max AC Input Power1440W Max Connectivity Wi-Fi, Bluetooth Dimensions 27.6×15.3×15.6 in / 70.2×38.8×39.5 cm Weight: 136.7lb Anker Solix F3800 Plus Vs. Competition Feature Anker F3800 Plus EcoFlow Delta Pro Ultra Bluetti AC500 + B300S Goal Zero Yeti 6000X Jackery Explorer 3000 Pro Battery Capacity 3.84 kWh3.6 kWh3.072 kWh per module6.071 kWh 3.024 kWh AC Output6,000W 7,200W 5,000W 2,000W 3,000W AC Output9,000W 10,800W 10,000W 3,500W 6,000W Solar Input Capacity 3,200W 5,600W 3,000W 600W 1,200W Portability ModerateLowModerate LowHighExpandability High High High Limited Limited Generator Charging I noticed that there are a few channels on YouTube covering 240V generator recharging with the F3800 Plus. Now, while I do not have a gas generator, nor do I plan on getting one now that I have the F3800 Plus here at home, I do understand that if you have both a gas generator and the F3800 Plus, you will want to use one to charge the other. Make sure that you purchase the Anker Solix Generator input Adapter so you can connect it to your gas generator first. Once connected to the 240V generator you should be able to charge your unit at 3300W according to the manual, and 6000W with an expansion battery attached. John from the YouTube channel Backyard Maine has a great video where he shares his experience connecting the F3800 Plus to his gas generator, I recommend checking it out if this is of interest to you. Solar Charging the F3800 Plus Anker did not send me any solar panels to test the F3800 Plus with, but once I get some, I will update and include my experience in the review here. The good news is that there are a lot of folks on YouTube that have connected solar panels, personally, I am a fan of Tommy Callaway’s Anker video. For home use, the F3800 Plus supports 410W permanent solar panels which you can purchase from Anker directly, or an aftermarket brand should you choose to. If you plan on taking the unit to the park or simply do not want to install the permanent panels, you can purchase some portable panels from Anker as well. The F38000 Plus supports a maximum 3200W charging input regardless of the panel’s portability. Accessories solar panels, ranging from portable panels to more permanent fixed panels. There are also several adapters to choose from that either allow you to connect the F3800 Plus to an EV, RV or solar panels. The system truly is expandable and designed to meet a number of needs. 
Expansion Batteries The Anker Solix F3800 Plus can increase it’s storage capacity up to 26,880Wh by connecting 6 Anker expansion batteries, or 12 if you are connecting a second F3800. The expansion batteries are about K each when on sale.Anker F3800 and F3800 Plus connections compared Ian Bell / Digital Trends “Real World” Testing the Anker Solix F3800 Plus I always find it amusing to watch videos or read articles where people are testing these power stations with power equipment in the garage, food blenders, or charging their EV’s. It’s as if we all expect life to go on as normal during a power outage, that’s how these companies want you to imagine things. But for a lot of people, including myself, I am not expecting a power outage to last days or weeks, I simply want my food to stay fresh, for the home to be comfortable depending on the weather, or my phone and lights to stay charged. And that’s for home use. If I am taking the F3800 plus to the park or camping , then sure, I will use it to power items to cook with, keep the lights on or play a sound system – and that’s why I like the F3800 Plus, when compared to a lot of its competitors – it’s portable with wheels and a built-in handle. The handle extends a little over a foot and is about the same length as you would get from carry-on luggage; it feels sturdy and didn’t make me worry about it breaking with prolonged use. For lifting the F3800, there is a smaller handle that flips out on the bottom so that you can lift the unit with two people. The unit itself weighs just over 132 lbs. and is too wide in my opinion for a single person to lift, so make sure you have some help with you. I transported the F3800 throughout the house, up some steps and over some grass in the yard. The wheels worked well and at no time did I feel like the unit was going to break – there were no clunking sounds or anything feeling loose.How does the App work? The Anker app is simple to use. Once installed, you can connect to the F3800 through Bluetooth, and then you will want to connect the F3800 to your Wi-Fi system so you can monitor the unit remotely. I was able to spend some considerable time with the Bluetti AC500 and can tell you that the Anker F3800’s app is considerably easier to navigate and use. You can also add multiple Anker devices through their app which is nice for when and if you decide to expand on this system. The first thing to notice is that the display is very easy to read and the instruction manual does a great job explaining what the icons mean on the display. Buttons on the front of the F3800 have a nice tactile feel to them – they are not mushy or sticky when pressed. Controls are intuitive to use for the most part. I would recommend keeping the manual handy so you can quickly find out how to put the F3800 into EV charging mode, and which side of the AC outlets you should use if you want to use the UPS function. These two areas are not easy to find unless you have the manual. To test the F3800 Plus at my home, I had it power a large freezer outside in the garage, charge some electric bikes, power a regular refrigerator and charge some laptops. Here is what the results are for those items. Danby 10 cu ft. chest freezer that I have in my garage: Started May 16th at 7:45PM and fully drained the Anker F3800 by May 18th 1:58PM – about 43 hours total. A little under 2 days of charge with a 47w draw from the F3800 Plus Frigidaire refrigerator – 20 hours until completely drains at a 150W draw. 
You could maybe get a couple more hours if no one opened the fridge. Standard laptop charges: between 30 and 60 charges if you avg, 50-100 Wh per laptop for example, triple that phone phones Charging an EV: With my Rivian R1S, I was able to get about 5 miles of charge out of the F3800 Plus – fully drained. Online, some Tesla owners were able to get about 11 miles of charge If you were confident the power would come back on in a hurry, you would be able to keep a fridge or freezer running, charge a few laptops or phones, maybe charge up an e-Bike and the F3800 Plus base unit would last about 17 hours before needing a recharge . With both the freezer and my fridge plugged in at the same time, I was about to get 7.34 hours before the unit was completely drained. Recharging the F3800 through a regular 120v outlet took 3.17 hours at 1645W. Charging an EV with the F3800 is gimmicky in my opinion, but I just know that if Anker did not include this capability, some people will make a stink about it. Unless you are stuck in a zombie apocalypse and need that extra 5 miles of range to get somewhere else to charge your car, this feature just doesn’t make sense to me. If power outages are common in your area, ymy recommendation is that you ‘’ll want to buy the solar panels so you can charge the F3800 Plus if you are expecting outages for multiple days. And you will likely want to add a battery packas well. 1. Anker Solix F3800 Plus App 2. Anker Solix F3800 Plus App 3. Anker Solix F3800 Plus App Can you connect the older, original F3800 to the F3800 Plus? Yes, you can use the Anker Solix Double Power Hub. This increases your maximum output to 12,000W – impressive! As mentioned above, if you have two units connected, you can expand your batteries by 12 total, giving you days of backup for your home. The Anker Solix F3800 Plus can power your e-bike, EV and other devices Ian Bell / Digital Trends How long will the Anker Solix F3800 Plus Last? The Anker Solix F3800 Plus has an expected lifespan of 10 years or more due to the fact that they areit’s using EV grade batterieswhich are considered safer than other lithium-ion batteries. Like a lot of companies, Anker releases regular firmware updates to address problems. Product Warranty Coverage Anker SOLIX F3800 Plus 5-year warranty with an expected lifespan of 10+ years using EV-grade LiFePO₄ batteries . With lifetime customer support EcoFlow DELTA Pro Ultra 5-year warranty covering both the inverter and battery components . BLUETTI AC500 + B300S 4-year warranty for both the AC500 power station and B300S battery module . Goal Zero Yeti 6000X 2-year warranty standard for lithium-based Yeti products . Jackery Explorer 3000 Pro 3-year standard warranty, extendable to 5 years upon product registration . Should you buy it? If your needs match mine, then I would recommend buying it. I like the design, portability and the warranty. Would I power my entire home with this? I probably would not, but according to other reviews on the web, and community forums, it certainly seems capable of doing the job if you go with the extra batteries and available accessories – for me, it’s a great base system that you can build from. I am “either/or” in this camp. If I purchased the F3800 Plus and the necessary equipment with the intention of powering my entire home, I am not going to go through all the trouble to disconnect it and take it somewhere – the portability aspect is useless to me. 
Because the F3800 Plus comes with wheels, an extension handle and a rear handle for portability, that’s what I would use it for. The Anker Solix F3800 Plus improves on the original F3800 in a lot of ways, and I like how Anker fixed a lot of the quirks I encountered from the original F3800. I spent a lot of time in the community forums learning a lot from F3800 owners, and it appears that the community as a whole has been happy with Anker’s support and firmware fixes. I plan on tracking my use as I get more hours with the F3800 and will see if Anker has plans to address any quirks that come up. If you currently have an existing Solix F3800 and none of the new features make sense to you, then there are no reasons to upgrade; Anker has consistently been providing firmware updates to improve the unit and any bugs associated with it. The good news is that you can add the F3800 plus to your existing unit and build from there too. If you are buying from scratch and want a versatile system that looks good, has a solid user interface, is a reliable performer and plenty of accessories and options for building upon, then this is the power station for you. The Anker Solix F3800 Plus beats the competition in usability, versatility and value. We have a pretty extensive list of Power Station reviews worth checking out if you think the Solix F3800 Plus isn’t right for you. #anker #solix #f3800 #plus #review
    WWW.DIGITALTRENDS.COM
    Anker Solix F3800 Plus review: the popular power station gets some upgrades
    Anker Solix F3800 Plus MSRP $4,799.00 Score Details “The Anker Solix F3800 Plus beats the competition in usability, versatility and value.” Pros Very portable Good looking Very strong warranty, lifetime customer service High-quality battery Expandable to 53.8kWh Cons Comes with a very short power cord EV charging is not practical Expensive for casual users With the Anker Solix F3800 Plus, the company promises people a more feature-rich, evolved power station that addresses many of the shortfalls from the 1st generation F3800. I got a chance to put the old and the new unit’s side-by side to see just how well Anker has listened to its fanbase. Recommended Videos Last February, I was able to spend some time on the original Anker Solix F3800, which was then brand new to the market and despite having some minor bugs, is still one of the most impressive power stations out there. Fortunately, in the year that I have had it, Anker has released a plethora of firmware updates to make sure things ran smoothly without complications. You can read about my experience with the Solix F3800 over on our other site, The Manual. Since it’s introduction, the Solix F3800 has garnered quite a lot of fans, a lot of which have asked Anker for some new features and fixes. This is where the Solix F3800 Plus comes in. Not only has Anker listened to its customers by addressing some of the complaints, but it has added several new features which are sure to make new customers happy. How much does the Anker Solix F3800 Plus cost? The Anker Solix F3800 Plus price is $4,799 MSRP, but you should be able to find it listed for less than that if you shop around. At the time of writing for example, you can pick up the Solix F3800 Plus for about $3,499 directly from Anker, which is $21,300 off of their regular MSRP. I was also able to find the F3800 Plus available at a number of online retailers, including Amazon – prices varied depending on what is packaged with it. When I reached out to Anker to ask about the discounts available, I was told that there are always discounts and specials running. What’s in the box of the Anker Solix F3800 Plus: Only a few things come packed with the F3800 Plus – the instruction and warranty pamphlets, the AC Charging cable for the unit itself, and two solar charging cables so you can connect some solar panels to the unit. I was bummed to see that the AC charging cable is considerably shorter than the one that came with the original F3800 unit. So, make sure that you have an AC outlet nearby, or be prepared to purchase a longer cord, separately. The original Anker Solix F3800 next to the F3800 Plus Ian Bell / Digital Trends Features and Design of the F3800 Plus The F3800 Plus doesn’t look any different than the original F3800 at first glance. Their shapes are pretty much identical, they weigh about the same etc., but when you take a closer look at the connections on the side of the F3800 Plus, that’s where things change. Here is a list of key differences between the F3800 and the F3800 Plus I was able to keep track of: The original F3800 accepts up to 2400W of solar input whereas the F3800 Plus accepts 3200W of solar input. 
This means you can charge the batteries much quicker You will need an adapter for the original F3800 if you want to charge your EV, the F3800 Plus has a port on the side where you can plug your EV in directly The F3800 Plus is compatible with 240V gas generators (up to 6,000 bypass) The F3800 Plus supports charging via generator or solar while simultaneously powering connected devices The original F3800 was not able to output AC power while charging with AC at the same time – this has been fixed with the Plus version Anker has a good comparison video on YouTube highlighting the key differences. Anker Solix F3800 Plus specifications Capacity 51.2Vdc 75Ah/3840Wh AC output AC Output 2 120V~ 20A Max, 60Hz, 2400W Max AC Output (NEMA L14-30R) 120V/240V~ 25A Max, 60Hz, 6000W Max USB-A Output 5V – 2.4A (12W Max Per Port) USB-C Output 5V – 3A / 9V – 3A / 15V – 3A / 20V – 3A / 20V – 5A (100W Max Per Port) Cell chemistry LiFePO4 Cell EPS/UPS UPS: 20ms Solar input 11-165V – 17A Max (1600W Max Each) (2) Solar Inputs Environmental Operation Discharging Temperature -4°F-104°F / -20°C-40°C Charging Temperature 32°F-104°F / 0°C-40°C AC input AC Input 120V~ 15A Max (< 3hrs) / 12A Max (continuous), 60Hz, L+N+PE AC Input Power (Charging) 1800W Max AC Input Power (Bypass Mode) 1440W Max Connectivity Wi-Fi, Bluetooth Dimensions 27.6×15.3×15.6 in / 70.2×38.8×39.5 cm Weight: 136.7lb Anker Solix F3800 Plus Vs. Competition Feature Anker F3800 Plus EcoFlow Delta Pro Ultra Bluetti AC500 + B300S Goal Zero Yeti 6000X Jackery Explorer 3000 Pro Battery Capacity 3.84 kWh (expandable to 26.9 kWh) 3.6 kWh (expandable to 25 kWh) 3.072 kWh per module (expandable to 18.4 kWh) 6.071 kWh 3.024 kWh AC Output (Continuous) 6,000W 7,200W 5,000W 2,000W 3,000W AC Output (Surge) 9,000W 10,800W 10,000W 3,500W 6,000W Solar Input Capacity 3,200W 5,600W 3,000W 600W 1,200W Portability Moderate (wheeled) Low (heavier) Moderate Low (heavier) High (wheeled) Expandability High High High Limited Limited Generator Charging I noticed that there are a few channels on YouTube covering 240V generator recharging with the F3800 Plus. Now, while I do not have a gas generator, nor do I plan on getting one now that I have the F3800 Plus here at home, I do understand that if you have both a gas generator and the F3800 Plus, you will want to use one to charge the other. Make sure that you purchase the Anker Solix Generator input Adapter so you can connect it to your gas generator first. Once connected to the 240V generator you should be able to charge your unit at 3300W according to the manual, and 6000W with an expansion battery attached. John from the YouTube channel Backyard Maine has a great video where he shares his experience connecting the F3800 Plus to his gas generator, I recommend checking it out if this is of interest to you. Solar Charging the F3800 Plus Anker did not send me any solar panels to test the F3800 Plus with, but once I get some, I will update and include my experience in the review here. The good news is that there are a lot of folks on YouTube that have connected solar panels, personally, I am a fan of Tommy Callaway’s Anker video. For home use, the F3800 Plus supports 410W permanent solar panels which you can purchase from Anker directly, or an aftermarket brand should you choose to. If you plan on taking the unit to the park or simply do not want to install the permanent panels, you can purchase some portable panels from Anker as well. The F38000 Plus supports a maximum 3200W charging input regardless of the panel’s portability. 
Accessories solar panels, ranging from portable panels to more permanent fixed panels. There are also several adapters to choose from that either allow you to connect the F3800 Plus to an EV, RV or solar panels. The system truly is expandable and designed to meet a number of needs. Expansion Batteries The Anker Solix F3800 Plus can increase it’s storage capacity up to 26,880Wh by connecting 6 Anker expansion batteries, or 12 if you are connecting a second F3800. The expansion batteries are about $2K each when on sale.Anker F3800 and F3800 Plus connections compared Ian Bell / Digital Trends “Real World” Testing the Anker Solix F3800 Plus I always find it amusing to watch videos or read articles where people are testing these power stations with power equipment in the garage, food blenders, or charging their EV’s. It’s as if we all expect life to go on as normal during a power outage, that’s how these companies want you to imagine things. But for a lot of people, including myself, I am not expecting a power outage to last days or weeks, I simply want my food to stay fresh, for the home to be comfortable depending on the weather, or my phone and lights to stay charged. And that’s for home use. If I am taking the F3800 plus to the park or camping , then sure, I will use it to power items to cook with, keep the lights on or play a sound system – and that’s why I like the F3800 Plus, when compared to a lot of its competitors – it’s portable with wheels and a built-in handle. The handle extends a little over a foot and is about the same length as you would get from carry-on luggage; it feels sturdy and didn’t make me worry about it breaking with prolonged use. For lifting the F3800, there is a smaller handle that flips out on the bottom so that you can lift the unit with two people. The unit itself weighs just over 132 lbs. and is too wide in my opinion for a single person to lift, so make sure you have some help with you. I transported the F3800 throughout the house, up some steps and over some grass in the yard. The wheels worked well and at no time did I feel like the unit was going to break – there were no clunking sounds or anything feeling loose.How does the App work? The Anker app is simple to use. Once installed, you can connect to the F3800 through Bluetooth, and then you will want to connect the F3800 to your Wi-Fi system so you can monitor the unit remotely. I was able to spend some considerable time with the Bluetti AC500 and can tell you that the Anker F3800’s app is considerably easier to navigate and use. You can also add multiple Anker devices through their app which is nice for when and if you decide to expand on this system. The first thing to notice is that the display is very easy to read and the instruction manual does a great job explaining what the icons mean on the display. Buttons on the front of the F3800 have a nice tactile feel to them – they are not mushy or sticky when pressed. Controls are intuitive to use for the most part. I would recommend keeping the manual handy so you can quickly find out how to put the F3800 into EV charging mode, and which side of the AC outlets you should use if you want to use the UPS function. These two areas are not easy to find unless you have the manual. To test the F3800 Plus at my home, I had it power a large freezer outside in the garage, charge some electric bikes, power a regular refrigerator and charge some laptops. Here is what the results are for those items. Danby 10 cu ft. 
chest freezer that I have in my garage: started May 16th at 7:45 PM and fully drained the Anker F3800 by May 18th at 1:58 PM – about 43 hours total. That is a little under two days of runtime at a 47W draw from the F3800 Plus.
Frigidaire refrigerator: 20 hours until completely drained at a 150W draw. You could maybe get a couple more hours if no one opened the fridge.
Standard laptop charges: between 30 and 60 charges if you average 50-100Wh per laptop; for phones, roughly triple that.
Charging an EV: with my Rivian R1S, I was able to get about 5 miles of charge out of the F3800 Plus before it was fully drained. Online, some Tesla owners were able to get about 11 miles of charge.
If you were confident the power would come back on in a hurry, you would be able to keep a fridge or freezer running, charge a few laptops or phones, and maybe charge up an e-bike, and the F3800 Plus base unit would last about 17 hours before needing a recharge. With both the freezer and my fridge plugged in at the same time, I was able to get 7.34 hours before the unit was completely drained. Recharging the F3800 through a regular 120V outlet took 3.17 hours at 1,645W (screenshot attached). Charging an EV with the F3800 is gimmicky in my opinion, but I know that if Anker did not include this capability, some people would make a stink about it. Unless you are stuck in a zombie apocalypse and need that extra 5 miles of range to get somewhere else to charge your car, this feature just doesn’t make sense to me. If power outages are common in your area, my recommendation is to buy the solar panels so you can charge the F3800 Plus when you are expecting outages lasting multiple days. And you will likely want to add a battery pack (or two) as well.
Anker Solix F3800 Plus app (screenshots)
Can you connect the older, original F3800 to the F3800 Plus? Yes, using the Anker Solix Double Power Hub ($299). This increases your maximum output to 12,000W – impressive! As mentioned above, if you have two units connected, you can expand with up to 12 batteries total, giving you days of backup for your home.
The Anker Solix F3800 Plus can power your e-bike, EV and other devices. Ian Bell / Digital Trends
How long will the Anker Solix F3800 Plus last?
The Anker Solix F3800 Plus has an expected lifespan of 10 years or more because it uses EV-grade LiFePO4 batteries (lithium-ion batteries that use lithium iron phosphate as the cathode), which are considered safer than other lithium-ion chemistries. Like a lot of companies, Anker releases regular firmware updates to address problems.
Warranty coverage by product:
Anker Solix F3800 Plus: 5-year warranty, with an expected lifespan of 10+ years using EV-grade LiFePO4 batteries and lifetime customer support.
EcoFlow DELTA Pro Ultra: 5-year warranty covering both the inverter and battery components.
BLUETTI AC500 + B300S: 4-year warranty for both the AC500 power station and B300S battery module.
Goal Zero Yeti 6000X: 2-year warranty, standard for lithium-based Yeti products.
Jackery Explorer 3000 Pro: 3-year standard warranty, extendable to 5 years upon product registration.
Should you buy it?
If your needs match mine, then I would recommend buying it. I like the design, portability and the warranty. Would I power my entire home with this?
I probably would not, but according to other reviews on the web and community forums, it certainly seems capable of doing the job if you go with the extra batteries and available accessories – for me, it’s a great base system that you can build from. I am an “either/or” person in this regard: if I purchased the F3800 Plus and the necessary equipment with the intention of powering my entire home, I am not going to go through all the trouble of disconnecting it and taking it somewhere – the portability aspect would be useless to me. Because the F3800 Plus comes with wheels, an extension handle and a rear handle for portability, that’s what I would use it for. The Anker Solix F3800 Plus improves on the original F3800 in a lot of ways, and I like how Anker fixed a lot of the quirks I encountered with the original F3800. I spent a lot of time in the community forums learning from F3800 owners, and it appears that the community as a whole has been happy with Anker’s support and firmware fixes. I plan on tracking my use as I get more hours with the F3800 and will see whether Anker addresses any quirks that come up. If you currently have an existing Solix F3800 and none of the new features make sense to you, then there is no reason to upgrade; Anker has consistently been providing firmware updates to improve the unit and fix any bugs associated with it. The good news is that you can add the F3800 Plus to your existing unit and build from there too. If you are buying from scratch and want a versatile system that looks good, has a solid user interface, is a reliable performer, and offers plenty of accessories and options for building upon, then this is the power station for you. The Anker Solix F3800 Plus beats the competition in usability, versatility and value. We have a pretty extensive list of power station reviews worth checking out if you think the Solix F3800 Plus isn’t right for you.
  • BLUETTI Apex 300 Review: The All-in-One Solar, Gas, and Battery Solution for Blackouts and Beyond

    The BLUETTI Apex 300 isn’t meant to sit idle between emergencies. It fits into daily routines, powering everyday essentials without rewiring or installing. This review focuses on how it performs with real products in familiar settings. That includes household appliances during outages, coolers and fans during weekend camping, and portable gear on long tournament days. There are no solar arrays or panel integrations. Just plug and use.
    PROS:
    Exceptional 6,000+ charge cycle lifespan offers 17 years of reliable operation, doubling industry standards.
    Impressive 3,840W output and 120/240V dual voltages for handling multiple high-demand appliances simultaneously without faltering.
    Efficient 20W AC idle drain extends runtime significantly during extended outages.
    Modular design with B300K expansion battery allows customized scaling without replacing initial investment.
    Compatible with 120/240V gas generators (11,520W in parallel connection) for extended power outages.
    Massive 6,400W solar input capacity enables rapid renewable charging with potential two-year payback and over 30kW of solar input for whole-home backup.
    Low upfront cost at just $0.36/Wh for those who need serious power.
    CONS:
    2.7kWh capacity may limit portability, making it less suitable for those with lower power needs.
    Lacks dedicated DC ports (the optional Hub D1 accessory adds 700W of DC output), but this trade-off helps keep the price more affordable.

    RATINGS:
    AESTHETICS, ERGONOMICS, PERFORMANCE, SUSTAINABILITY / REPAIRABILITY, VALUE FOR MONEY
    EDITOR'S QUOTE: The Apex 300 transforms uncertainty into confidence, delivering power when everything else fails. Peace of mind has never been so tangible.
    Designer: BLUETTI
    Click Here to Buy Now: $1199 (regularly $2399; $1200 off). Hurry, deal ends soon!

    With 2,764.8Wh of capacity and 3,840W of output, the Apex 300 handles a refrigerator in the kitchen, a portable AC near the tent, or a Typhur air fryer at the courts. It doesn’t need a permanent location. You can roll it into the laundry room to run a washer or dryer in an emergency, or drop it under a canopy to keep drinks cold and phones charged.

    While the unit supports advanced configurations through expansion hubs and bypass systems, those features are outside the scope of this review. The goal here is practical performance with common products, powered directly from the main unit or its optional DC hub.
    From prolonged blackout prep to match-day support, the Apex 300 demonstrates the potential of a high-capacity portable power station, especially when paired with a fuel generator, all without leaving the average user behind.
    Design & Ergonomics
    The Apex 300 has a compact, squared chassis with reinforced edges and no cosmetic finishes. It weighs just under 84 pounds. While the mass is noticeable, it’s not difficult to move. A recessed top handle sits flush and centered for balance. Two side handles are molded into the body, one on each side. This lets you lift using proper form without needing to twist or overcompensate. The handle spacing and weight distribution make it possible to load in and out of a trunk or reposition in tight spaces without tipping. The casing is matte composite. No gloss, no soft-touch. It’s built to resist fire and impact, with corner protection and stiff panels that don’t flex. There’s no padding, no shiny accents. This is a working product built for harsh environments and heavy-duty use, not something designed for display.

    The front panel consolidates all standard AC outputs. What stands out most on the front panel is the 120/240V voltage selector—a rare feature in this category. With a simple toggle, the Apex 300 can switch between standard 120V and powerful 240V split-phase output, all from a single unit. There’s no need for dual machines, external inverters, or bulky adapters. Just press the 240V button, and the side port activates 240V output while the front-facing 120V outlets remain fully functional. Even better, it supports simultaneous charging and discharging in both voltage modes, making it one of the most flexible power solutions out there. There are four 120V/20A outlets arranged in a horizontal line. Above the sockets, the integrated digital display shows live system status. Remaining battery is presented both numerically and visually via a segmented arc. Directly below, the estimated charge or runtime is shown in hours and minutes. Along the sides of the screen, AC and DC power input and output are broken down in watts. System icons flank the upper corners, indicating ECO mode, connectivity status, and fan operation. Alerts appear in the lower corners with a flashing indicator. The display is not touch-sensitive, and there are no layered menus. Everything is presented in one view. Visibility holds up in bright conditions without overwhelming in low light.

    The left side houses dual cooling vents and serves as a passive intake for airflow. The 120/240V 50A AC input/output port and high-capacity outputs, including the 120V/30A TT-30R and 120V/240V 50A NEMA L14-50R outlets, are well located. The 50A AC input also supports charging from a 120/240V gas generator, making it ideal for extended power outages. These ports are clearly labeled. Rubberized flaps protect these areas. A grounding screw is located near the input ports. Vents positioned near these ports help manage thermal output. During charging or peak load, the integrated fans remain active but quiet, operating at around 40 to 50 dB under standard use.

    The right side is used for expansion. This is where the Apex 300 connects to the B300K battery via a shorter, more manageable cable. Compared to the previous longer cable version, this design saves space and improves efficiency with a more compact setup. That link locks securely and routes downward. A sealed accessory port sits next to the connector. The upper portion includes additional ventilation similar to the left side. There’s no interference between ports, and stacking doesn’t block airflow.

    The B300K adds 2764.8Wh to the total system capacity. At nearly 79 pounds, it’s only slightly lighter than the main unit. Each side of the B300K includes a top-mounted handle for lifting. When docked, the battery aligns flush with the Apex 300 and maintains overall balance. Up to four B300K modules can be stacked, but extra securing is recommended when exceeding two levels.

    Cooling is managed through a dual fan system located behind the side grills. These stay active during higher loads or rapid charging. Fan noise remains even, with no distracting pitch or rattle. This makes the Apex 300 usable near sleeping areas or indoor workspaces without disturbance.

    DC output is delivered through the optional Hub D1. This hub adds USB-C, USB-A, DC5521, a 12V auto socket, and a 50A Anderson connector, which stands out as a high-power DC port designed for safety and stability. It attaches vertically and doesn’t expand the unit’s footprint. If you rely on DC or USB-based devices, the hub becomes essential.

    The Bluetti app mirrors much of what’s shown on the Apex 300’s physical display. Once paired via Wi-Fi or Bluetooth, it displays a central battery status ring with remaining percentage, real-time breakdowns of AC and DC input/output wattage, and estimated time until full charge or depletion. Users can toggle AC and DC outputs, track solar contribution, and review historical usage. The interface uses strong visual cues with all major controls accessible directly from the home screen. Charging modes, notifications, and system alerts are accessed without diving through submenus. The layout prioritizes quick access and clarity over aesthetics.

    Everything about the Apex 300 centers on performance. It’s a modular, high-output power system designed for actual use, not showroom aesthetics. Whether keeping food cold during blackouts or running appliances off-grid, it stays focused on delivering energy where it’s needed most.
    Performance
    This review centers on standalone use without any home integration. When the power goes out, whether from weather, an accident, or a grid failure, you plug in what you need and the Apex 300 just runs. No rewiring. No fuss. All testing here used the onboard AC ports directly.

    In one overnight “staged” outage, the unit powered a full-size refrigerator, router, lights, and a breathing machine. Output stayed steady, and the digital panel clearly showed remaining time and load. The app mirrored this from another room. Power usage was easy to track, and the fridge didn’t cycle off.
    On a long weekend of stay-at-home glamping, the Apex 300 handled a Typhur air fryer, a drip coffee machine, and a portable AC without blinking. The 3,840W output had no problem handling the startup surge. The fans kicked on but didn’t become a distraction. Nothing tripped, nothing overheated.

    On another occasion, it powered backyard lighting, a portable fridge, and charged phones during an overnight glamping setup. Later, during a neighborhood blackout caused by a downed transformer, the Apex 300 powered a microwave, a drip coffee maker, and several LED lanterns while also recharging phones and two-way radios. It helped keep things calm without dragging out a gas generator. During another outage, it kept two fans and a portable AC unit running through the night in a hot upstairs office. While I don’t rely on a CPAP device, anyone who does can rest assured knowing the Apex 300 can power one continuously without issue. The ports are spaced well enough to plug in multiple devices without overlap or cord clutter.

    If your fridge runs on AC power, as most home units do, you don’t need anything extra. Just plug it into one of the four 120V outlets or the larger NEMA sockets, and it works. The Apex 300 delivers clean, reliable AC power for standard appliances. However, if you have a 12V DC fridge like those used in vans or campsites, limitations appear. The Apex 300 doesn’t have native DC output for those loads without an accessory.
    Everything here was tested without tying into a breaker panel or generator loop. This is power where you need it, when the wall socket doesn’t exist. The Apex 300 isn’t just spec sheets—it held up during real blackouts, heatwaves, and extended unplugged days. It powered what mattered, and didn’t get in the way.
    Emergency Runtime Scenarios
    In a blackout with no charging, the Apex 300 offers 2,764.8Wh. Adding the B300K doubles that to 5,529.6Wh. A basic emergency load including a fridge, laptop, router, phone, lights, and a CPAP draws about 1,950 to 2,200Wh daily.

    The Apex 300 alone powers this for roughly one day. Stretch it to 1.5 days by cutting nonessential loads. With the B300K, expect 2 to 2.5 days. Focus on the fridge and communication gear to reach 3 days.
    Cycle loads instead of running everything at once. Run the fridge during the day. Charge devices one at a time. Use lights only when needed.
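    As a sanity check on those runtime estimates, here is a minimal sketch of the arithmetic. The capacity figures and the 1,950-2,200Wh daily load come from this review; the 90% usable-energy factor and the helper name are illustrative assumptions only.

```python
# Rough days-of-backup estimate from pack capacity and a fixed daily load.
# Capacity and load figures are from the review; efficiency is an assumption.

APEX_300_WH = 2764.8   # base unit capacity
B300K_WH = 2764.8      # one expansion battery

def days_of_backup(capacity_wh: float, daily_load_wh: float,
                   efficiency: float = 0.9) -> float:
    """Estimated days a pack lasts at a constant daily energy draw."""
    return capacity_wh * efficiency / daily_load_wh

for load in (1950, 2200):
    base = days_of_backup(APEX_300_WH, load)
    expanded = days_of_backup(APEX_300_WH + B300K_WH, load)
    print(f"{load} Wh/day -> base: {base:.1f} days, with B300K: {expanded:.1f} days")
# Prints roughly 1.1-1.3 days on the base unit and 2.3-2.6 days with one
# expansion battery, which lines up with the estimates above.
```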
    Sustainability
    While I haven’t personally tested the Apex 300 with solar panels, the sustainability potential here deserves serious attention. The system’s solar integration capabilities transform it from the category of home battery backup to a genuine renewable energy solution with remarkable long-term value.

    The Apex 300’s most impressive feature is its exceptional solar input capacity. When paired with BLUETTI’s SolarX 4K Solar Charge Controller, a single unit can process up to 6,400W of solar input. This represents a quantum leap beyond typical portable power stations that max out around 1,000-2,000W. For perspective, this means you could potentially recharge the entire system in just a few hours of good sunlight rather than waiting all day or longer.
    Most foldable solar panels might have inherent limitations in efficiency and are dependent on weather conditions, which is why a high input capacity for energy storage is so crucial. The Apex 300 maximizes every minute of sunshine, capturing significantly more energy during peak daylight hours. This efficiency accelerates the system’s potential payback period to approximately two years according to BLUETTI’s calculations. Few renewable energy investments offer such a rapid return.
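    BLUETTI’s two-year payback estimate depends heavily on array size, local sun and electricity prices. The sketch below shows how such a payback figure is typically computed; the array wattage, sun hours, derating, electricity rate and system cost used here are illustrative assumptions, not BLUETTI’s inputs.

```python
# Simple solar payback estimate: years until avoided grid purchases equal
# the upfront cost. Every input below is an illustrative assumption.

def payback_years(system_cost_usd: float, array_w: float,
                  sun_hours_per_day: float = 5.0, derate: float = 0.75,
                  usd_per_kwh: float = 0.17) -> float:
    daily_kwh = array_w * derate * sun_hours_per_day / 1000.0
    annual_savings = daily_kwh * 365 * usd_per_kwh
    return system_cost_usd / annual_savings

# Example: a $2,400 station-plus-panels budget feeding a 4,000W array.
print(round(payback_years(2400, 4000), 1))   # ~2.6 years under these assumptions
```

    Halve the array size or the daily sun hours and the payback period roughly doubles, which is why the Apex 300’s high solar input ceiling matters for this math.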
    The Apex 300 avoids the usual tradeoff between portability and long-term value. At its core are BLUETTI’s automotive-grade LFP batteries, rated for over 6,000 charge cycles. That translates to around 17 years of daily use, nearly doubling the lifespan of many competing systems that typically last 3,000 to 4,000 cycles. This added durability cuts down on the frequency of replacements, which in turn reduces electronic waste and long-term costs. BLUETTI reinforces this commitment to longevity with rigorous validation. The larger Elite 200 V2 Solar Generator has passed 33 CNAS-certified automotive-grade tests, underscoring the brand’s approach to building quality and environmental responsibility across its ecosystem.
    This solar integration capability creates genuine resilience for regions prone to extreme weather, such as Texas and Florida. The system’s dual MPPT controllers enable remarkably fast charging, reaching 80% capacity in just 40 minutes under optimal conditions. When fully expanded, the Apex 300 system can scale to deliver over 11kW of output with 58kWh of storage capacity, providing enough power to maintain essential home systems for a week without grid access.
    The AT1 Smart Distribution Box completes the sustainability equation by intelligently managing power flow between solar panels and the grid. This allows homeowners to create a customized, automated whole-home backup system that prioritizes renewable energy usage while maintaining grid connectivity when needed. The entire ecosystem works together through BLUETTI’s smartphone app, making sustainable energy management accessible even to those without technical expertise.
    Value
    The Apex 300 represents a significant investment. What truly matters isn’t only the initial cost but the long-term value proposition. This portable power station delivers exceptional returns through its versatility, durability, and advanced capabilities that go far beyond emergency backup. The system’s true value emerges when you consider how it integrates into everyday life and critical situations without compromise.

    The system’s exceptional efficiency further enhances its value proposition. With remarkably low 20W AC idle drain, the Apex 300 preserves power when not actively running devices. This translates to 24 additional hours of refrigerator runtime, 2.5 times longer AC standby, and 2.5 more days of CPAP operation compared to competing systems with higher idle consumption. During extended outages, this efficiency becomes invaluable, potentially meaning the difference between maintaining power for essential devices and running out at critical moments. The 0ms UPS switching ensures absolutely seamless power transitions, protecting sensitive electronics and providing peace of mind for those relying on medical equipment.
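    To see why a 20W idle figure matters, here is a minimal sketch of runtime with the inverter’s own draw included. The 60W average refrigerator draw and the 50W competitor idle figure are assumptions for illustration; only the 20W idle drain and the 2,764.8Wh capacity come from this review.

```python
# Effect of inverter idle drain on runtime for a duty-cycled load.
# The fridge average and the competitor idle figure are assumptions.

CAPACITY_WH = 2764.8

def runtime_hours(avg_load_w: float, idle_w: float,
                  capacity_wh: float = CAPACITY_WH) -> float:
    """Hours until the pack is drained by the load plus the inverter overhead."""
    return capacity_wh / (avg_load_w + idle_w)

FRIDGE_AVG_W = 60  # assumed duty-cycled average for a full-size refrigerator
print(round(runtime_hours(FRIDGE_AVG_W, 20), 1))   # ~34.6 h at a 20W idle drain
print(round(runtime_hours(FRIDGE_AVG_W, 50), 1))   # ~25.1 h at a 50W idle drain
```

    The exact gain depends on the fridge’s duty cycle and the competing unit’s idle draw, but the point stands: idle losses compound over a multi-day outage.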

    Perhaps most impressive is how the Apex 300 scales with your needs without forcing unnecessary complexity. The base unit delivers substantial capability on its own, while the modular expansion system allows growth without replacing your initial investment. The optional Hub D1 adds comprehensive DC output options, the B300K batteries multiply capacity, and solar integration unlocks renewable energy potential. This flexibility means the system grows with your needs rather than becoming obsolete when requirements change. Few products in any category offer this combination of immediate utility, long-term durability, exceptional efficiency, and adaptable design. For anyone serious about energy independence, weather resilience, or sustainable power solutions, the Apex 300 delivers value that extends far beyond its price tag.
    The Bottom Line
    This review set out to evaluate the Apex 300 as a practical power solution for real-world scenarios, from blackouts to outdoor adventures. The results speak for themselves after extended testing with everyday appliances and devices. The Apex 300 delivers on its promises with exceptional performance, remarkable durability, and thoughtful design choices that prioritize user experience. Its 17-year lifespan (nearly double the industry standard), ultra-efficient 20W idle drain, and seamless expandability create a system that grows with your needs rather than becoming obsolete. While we didn’t test solar integration, the potential 6,400W solar input capacity through the SolarX 4K could transform this from merely a backup solution into a comprehensive renewable energy system with a potential two-year payback period.

    Whether you’re preparing for power outages or planning off-grid adventures, the Apex 300 offers a flexible solution with support for battery, solar, and even gas input. It’s designed to handle real-world energy needs with surprising ease.
    Among the available options, the one we’re reviewing, the Apex 300 + B300K expansion battery bundle, stands out because it costs just $0.36 per watt-hour, with tax and shipping already included. The offer is limited by both time and availability, with installment payments now available for added flexibility.
    There are other bundles designed for different needs, so it’s worth checking which one fits your setup. The Apex 300 campaign is now live on Indiegogo until July 19.
    Click Here to Buy Now: $1199 (regularly $2399; $1200 off). Hurry, deal ends soon!
    The post BLUETTI Apex 300 Review: The All-in-One Solar, Gas, and Battery Solution for Blackouts and Beyond first appeared on Yanko Design.
    #bluetti #apex #review #allinone #solar
    BLUETTI Apex 300 Review: The All-in-One Solar, Gas, and Battery Solution for Blackouts and Beyond
    The BLUETTI Apex 300 isn’t meant to sit idle between emergencies. It fits into daily routines, powering everyday essentials without rewiring or installing. This review focuses on how it performs with real products in familiar settings. That includes household appliances during outages, coolers and fans during weekend camping, and portable gear on long tournament days. There are no solar arrays or panel integrations. Just plug and use. PROS: Exceptional 6,000+ charge cycle lifespan offers 17 years of reliable operation, doubling industry standards. Impressive 3,840W output and 120/240V dual voltages for handling multiple high-demand appliances simultaneously without faltering. Efficient 20W AC idle drain extends runtime significantly during extended outages. Modular design with B300K expansion battery allows customized scaling without replacing initial investment. Compatible with 120/240V gas generatorfor extended power outage. Massive 6,400W solar input capacity enables rapid renewable charging with potential two-year payback and over 30kW of solar input for whole-home backup. Low upfront cost at just /Wh for those who need serious power. CONS: 2.7kW capacity may limit portability, making it less suitable for those with lower power needs. Lacks dedicated DC ports, but this trade-off helps keep the price more affordable. RATINGS: AESTHETICSERGONOMICSPERFORMANCESUSTAINABILITY / REPAIRABILITYVALUE FOR MONEYEDITOR'S QUOTE:The Apex 300 transforms uncertainty into confidence, delivering power when everything else fails. Peace of mind has never been so tangible. Designer: BLUETTI Click Here to Buy Now:. Hurry, deal ends soon! With 2,764.8Wh of capacity and 3,840W of output, the Apex 300 handles a refrigerator in the kitchen, a portable AC near the tent, or a Typhur air fryer at the courts. It doesn’t need a permanent location. You can roll it into the laundry room to run a washer or dryer in an emergency, or drop it under a canopy to keep drinks cold and phones charged. While the unit supports advanced configurations through expansion hubs and bypass systems, those features are outside the scope of this review. The goal here is practical performance with common products, powered directly from the main unit or its optional DC hub. From prolonged blackout prep to match-day support, the Apex 300 demonstrates the potential of a high-capacity portable power station, especially when paired with a fuel generator, all without leaving the average user behind. Design & Ergonomics The Apex 300 has a compact, squared chassis with reinforced edges and no cosmetic finishes. It weighs just under 84 pounds. While the mass is noticeable, it’s not difficult to move. A recessed top handle sits flush and centered for balance. Two side handles are molded into the body, one on each side. This lets you lift using proper form without needing to twist or overcompensate. The handle spacing and weight distribution make it possible to load in and out of a trunk or reposition in tight spaces without tipping. The casing is matte composite. No gloss, no soft-touch. It’s built to resist fire and impact, with corner protection and stiff panels that don’t flex. There’s no padding, no shiny accents. This is a working product for flinching in harsh environments or heavy-duty use, not something designed for display. The front panel consolidates all standard AC outputs. What stands out most on the front panel is the 120/240V voltage selector—a rare feature in this category. 
With a simple toggle, the Apex 300 can switch between standard 120V and powerful 240V split-phase output, all from a single unit. There’s no need for dual machines, external inverters, or bulky adapters. Just press the 240V button, and the side port activates 240V output while the front-facing 120V outlets remain fully functional. Even better, it supports simultaneous charging and discharging in both voltage modes, making it one of the most flexible power solutions out there. There are four 120V/20A outlets arranged in a horizontal line. Above the sockets, the integrated digital display shows live system status. Remaining battery is presented both numerically and visually via a segmented arc. Directly below, the estimated charge or runtime is shown in hours and minutes. Along the sides of the screen, AC and DC power input and output are broken down in watts. System icons flank the upper corners, indicating ECO mode, connectivity status, and fan operation. Alerts appear in the lower corners with a flashing indicator. The display is not touch-sensitive, and there are no layered menus. Everything is presented in one view. Visibility holds up in bright conditions without overwhelming in low light. The left side houses dual cooling vents and serves as a passive intake for airflow. The 120/240V 50A AC input/output port and high-capacity outputs, including the 120V/30A TT-30R and 120V/240V 50A NEMA L14-50R outlets, are well located. The 50A AC input also supports charging from a 120/240V gas generator, making it ideal for extended power outages. These ports are clearly labeled. Rubberized flaps protect these areas. A grounding screw is located near the input ports. Vents positioned near these ports help manage thermal output. During charging or peak load, the integrated fans remain active but quiet, operating at around 40 to 50 dB under standard use. The right side is used for expansion. This is where the Apex 300 connects to the B300K battery via a shorter, more manageable cable. Compared to the previous longer cable version, this design saves space and improves efficiency with a more compact setup. That link locks securely and routes downward. A sealed accessory port sits next to the connector. The upper portion includes additional ventilation similar to the left side. There’s no interference between ports, and stacking doesn’t block airflow. The B300K adds 2764.8Wh to the total system capacity. At nearly 79 pounds, it’s only slightly lighter than the main unit. Each side of the B300K includes a top-mounted handle for lifting. When docked, the battery aligns flush with the Apex 300 and maintains overall balance. Up to four B300K modules can be stacked, but extra securing is recommended when exceeding two levels. Cooling is managed through a dual fan system located behind the side grills. These stay active during higher loads or rapid charging. Fan noise remains even, with no distracting pitch or rattle. This makes the Apex 300 usable near sleeping areas or indoor workspaces without disturbance. DC output is delivered through the optional Hub D1. This hub adds USB-C, USB-A, DC5521, a 12V auto socket, and a 50A Anderson connector standing out as a high-power DC port designed for safety and stability. It attaches vertically and doesn’t expand the unit’s footprint. If you rely on DC or USB-based devices, the hub becomes essential. The Bluetti app mirrors much of what’s shown on the Apex 300’s physical display. 
Once paired via Wi-Fi or Bluetooth, it displays a central battery status ring with remaining percentage, real-time breakdowns of AC and DC input/output wattage, and estimated time until full charge or depletion. Users can toggle AC and DC outputs, track solar contribution, and review historical usage. The interface uses strong visual cues with all major controls accessible directly from the home screen. Charging modes, notifications, and system alerts are accessed without diving through submenus. The layout prioritizes quick access and clarity over aesthetics. Everything about the Apex 300 centers on performance. It’s a modular, high-output power system designed for actual use, not showroom aesthetics. Whether keeping food cold during blackouts or running appliances off-grid, it stays focused on delivering energy where it’s needed most. Performance This review centers on standalone use without any home integration. When the power goes out, whether from weather, an accident, or a grid failure, you plug in what you need and the Apex 300 just runs. No rewiring. No fuss. All testing here used the onboard AC ports directly. In one overnight “staged” outage, the unit powered a full-size refrigerator, router, lights, and a breathing machine. Output stayed steady, and the digital panel clearly showed remaining time and load. The app mirrored this from another room. Power usage was easy to track, and the fridge didn’t cycle off. On a long weekend of stay-at-home glamping, the Apex 300 handled a Typhur air fryer, a drip coffee machine, and a portable AC without blinking. The 3,840W output had no problem handling the startup surge. The fans kicked on but didn’t become a distraction. Nothing tripped, nothing overheated. On another occasion, it powered backyard lighting, a portable fridge, and charged phones during an overnight glamping setup. Later, during a neighborhood blackout caused by a downed transformer, the Apex 300 powered a microwave, a drip coffee maker, and several LED lanterns while also recharging phones and two-way radios. It helped keep things calm without dragging out a gas generator. During another outage, it kept two fans and a portable AC unit running through the night in a hot upstairs office. While I don’t rely on a CPAP device, anyone who does can rest assured knowing the Apex 300 can power one continuously without issue. The ports are spaced well enough to plug in multiple devices without overlap or cord clutter. If your fridge runs on AC power, as most home units do, you don’t need anything extra. Just plug it into one of the four 120V outlets or the larger NEMA sockets, and it works. The Apex 300 delivers clean, reliable AC power for standard appliances. However, if you have a 12V DC fridge like those used in vans or campsites, limitations appear. The Apex 300 doesn’t have native DC output for those loads without an accessory. Everything here was tested without tying into a breaker panel or generator loop. This is power where you need it, when the wall socket doesn’t exist. The Apex 300 isn’t just spec sheets—it held up during real blackouts, heatwaves, and extended unplugged days. It powered what mattered, and didn’t get in the way. Emergency Runtime Scenarios In a blackout with no charging, the Apex 300 offers 2,764.8Wh. Adding the B300K doubles that to 5,529.6Wh. A basic emergency load including a fridge, laptop, router, phone, lights, and a CPAP draws about 1,950 to 2,200Wh daily. The Apex 300 alone powers this for roughly one day. 
Stretch it to 1.5 days by cutting nonessential loads. With the B300K, expect 2 to 2.5 days. Focus on the fridge and communication gear to reach 3 days. Cycle loads instead of running everything at once. Run the fridge during the day. Charge devices one at a time. Use lights only when needed. Sustainability While I haven’t personally tested the Apex 300 with solar panels, the sustainability potential here deserves serious attention. The system’s solar integration capabilities transform it from the category of home battery backup to a genuine renewable energy solution with remarkable long-term value. The Apex 300’s most impressive feature is its exceptional solar input capacity. When paired with BLUETTI’s SolarX 4K Solar Charge Controller, a single unit can process up to 6,400W of solar input. This represents a quantum leap beyond typical portable power stations that max out around 1,000-2,000W. For perspective, this means you could potentially recharge the entire system in just a few hours of good sunlight rather than waiting all day or longer. Most foldable solar panels might have inherent limitations in efficiency and are dependent on weather conditions, which is why a high input capacity for energy storage is so crucial. The Apex 300 maximizes every minute of sunshine, capturing significantly more energy during peak daylight hours. This efficiency accelerates the system’s potential payback period to approximately two years according to BLUETTI’s calculations. Few renewable energy investments offer such a rapid return. The Apex 300 avoids the usual tradeoff between portability and long-term value. At its core are BLUETTI’s automotive-grade LFP batteries, rated for over 6,000 charge cycles. That translates to around 17 years of daily use, nearly doubling the lifespan of many competing systems that typically last 3,000 to 4,000 cycles. This added durability cuts down on the frequency of replacements, which in turn reduces electronic waste and long-term costs. BLUETTI reinforces this commitment to longevity with rigorous validation. The larger Elite 200 V2 Solar Generator has passed 33 CNAS-certified automotive-grade tests, underscoring the brand’s approach to building quality and environmental responsibility across its ecosystem. This solar integration capability creates genuine resilience for regions prone to extreme weather events like Texas and Florida. The system’s dual MPPT controllers enable remarkably fast charging, reaching 80% capacity in just 40 minutes under optimal conditions. When fully expanded, the Apex 300 system can scale to deliver over 11kW of output with 58kWh of storage capacity, providing enough power to maintain essential home systems for a week without grid access. The AT1 Smart Distribution Box completes the sustainability equation by intelligently managing power flow between solar panels and the grid. This allows homeowners to create a customized, automated whole-home backup system that prioritizes renewable energy usage while maintaining grid connectivity when needed. The entire ecosystem works together through BLUETTI’s smartphone app, making sustainable energy management accessible even to those without technical expertise. Value The Apex 300 represents a significant investment. What truly matters isn’t only the initial cost but the long-term value proposition. This portable power station delivers exceptional returns through its versatility, durability, and advanced capabilities that go far beyond emergency backup. 
The system’s true value emerges when you consider how it integrates into everyday life and critical situations without compromise. The system’s exceptional efficiency further enhances its value proposition. With remarkably low 20W AC idle drain, the Apex 300 preserves power when not actively running devices. This translates to 24 additional hours of refrigerator runtime, 2.5 times longer AC standby, and 2.5 more days of CPAP operation compared to competing systems with higher idle consumption. During extended outages, this efficiency becomes invaluable, potentially meaning the difference between maintaining power for essential devices and running out at critical moments. The 0ms UPS switching ensures absolutely seamless power transitions, protecting sensitive electronics and providing peace of mind for those relying on medical equipment. Perhaps most impressive is how the Apex 300 scales with your needs without forcing unnecessary complexity. The base unit delivers substantial capability on its own, while the modular expansion system allows growth without replacing your initial investment. The optional Hub D1 adds comprehensive DC output options, the B300K batteries multiply capacity, and solar integration unlocks renewable energy potential. This flexibility means the system grows with your needs rather than becoming obsolete when requirements change. Few products in any category offer this combination of immediate utility, long-term durability, exceptional efficiency, and adaptable design. For anyone serious about energy independence, weather resilience, or sustainable power solutions, the Apex 300 delivers value that extends far beyond its price tag. The Bottom Line This review set out to evaluate the Apex 300 as a practical power solution for real-world scenarios, from blackouts to outdoor adventures. The results speak for themselves after extended testing with everyday appliances and devices. The Apex 300 delivers on its promises with exceptional performance, remarkable durability, and thoughtful design choices that prioritize user experience. Its 17 years lifespan, ultra-efficient 20W idle drain, and seamless expandability create a system that grows with your needs rather than becoming obsolete. While we didn’t test solar integration, the potential 6,400W solar input capacity through the SolarX 4K could transform this from merely a backup solution into a comprehensive renewable energy system with a potential two-year payback period. Whether you’re preparing for power outages or planning off-grid adventures, the Apex 300 offers a flexible solution with support for battery, solar, and even gas input. It’s designed to handle real-world energy needs with surprising ease. Among the available options, the one we’re reviewing, the Apex 300 + B300K expansion battery bundle, stands out because it costs just per watt-hour, with tax and shipping already included. The offer is limited by both time and availability, with installment payments now available for added flexibility. There are other bundles designed for different needs, so it’s worth checking which one fits your setup. The Apex 300 campaign is now live on Indiegogo until July 19. Click Here to Buy Now:. Hurry, deal ends soon!The post BLUETTI Apex 300 Review: The All-in-One Solar, Gas, and Battery Solution for Blackouts and Beyond first appeared on Yanko Design. #bluetti #apex #review #allinone #solar
    WWW.YANKODESIGN.COM
    BLUETTI Apex 300 Review: The All-in-One Solar, Gas, and Battery Solution for Blackouts and Beyond
    The BLUETTI Apex 300 isn’t meant to sit idle between emergencies. It fits into daily routines, powering everyday essentials without rewiring or installing. This review focuses on how it performs with real products in familiar settings. That includes household appliances during outages, coolers and fans during weekend camping, and portable gear on long tournament days. There are no solar arrays or panel integrations. Just plug and use. PROS: Exceptional 6,000+ charge cycle lifespan offers 17 years of reliable operation, doubling industry standards. Impressive 3,840W output and 120/240V dual voltages for handling multiple high-demand appliances simultaneously without faltering. Efficient 20W AC idle drain extends runtime significantly during extended outages. Modular design with B300K expansion battery allows customized scaling without replacing initial investment. Compatible with 120/240V gas generator (11,520W in parallel connection) for extended power outage. Massive 6,400W solar input capacity enables rapid renewable charging with potential two-year payback and over 30kW of solar input for whole-home backup. Low upfront cost at just $0.36/Wh for those who need serious power. CONS: 2.7kW capacity may limit portability, making it less suitable for those with lower power needs. Lacks dedicated DC ports (requires the optional Hub D1 accessory, which offers 700W DC output), but this trade-off helps keep the price more affordable. RATINGS: AESTHETICSERGONOMICSPERFORMANCESUSTAINABILITY / REPAIRABILITYVALUE FOR MONEYEDITOR'S QUOTE:The Apex 300 transforms uncertainty into confidence, delivering power when everything else fails. Peace of mind has never been so tangible. Designer: BLUETTI Click Here to Buy Now: $1199 $2399 ($1200 off). Hurry, deal ends soon! With 2,764.8Wh of capacity and 3,840W of output, the Apex 300 handles a refrigerator in the kitchen, a portable AC near the tent, or a Typhur air fryer at the courts. It doesn’t need a permanent location. You can roll it into the laundry room to run a washer or dryer in an emergency, or drop it under a canopy to keep drinks cold and phones charged. While the unit supports advanced configurations through expansion hubs and bypass systems, those features are outside the scope of this review. The goal here is practical performance with common products, powered directly from the main unit or its optional DC hub. From prolonged blackout prep to match-day support, the Apex 300 demonstrates the potential of a high-capacity portable power station, especially when paired with a fuel generator, all without leaving the average user behind. Design & Ergonomics The Apex 300 has a compact, squared chassis with reinforced edges and no cosmetic finishes. It weighs just under 84 pounds. While the mass is noticeable, it’s not difficult to move. A recessed top handle sits flush and centered for balance. Two side handles are molded into the body, one on each side. This lets you lift using proper form without needing to twist or overcompensate. The handle spacing and weight distribution make it possible to load in and out of a trunk or reposition in tight spaces without tipping. The casing is matte composite. No gloss, no soft-touch. It’s built to resist fire and impact, with corner protection and stiff panels that don’t flex. There’s no padding, no shiny accents. This is a working product for flinching in harsh environments or heavy-duty use, not something designed for display. The front panel consolidates all standard AC outputs. 
What stands out most on the front panel is the 120/240V voltage selector—a rare feature in this category. With a simple toggle, the Apex 300 can switch between standard 120V and powerful 240V split-phase output, all from a single unit. There’s no need for dual machines, external inverters, or bulky adapters. Just press the 240V button, and the side port activates 240V output while the front-facing 120V outlets remain fully functional. Even better, it supports simultaneous charging and discharging in both voltage modes, making it one of the most flexible power solutions out there. There are four 120V/20A outlets arranged in a horizontal line. Above the sockets, the integrated digital display shows live system status. Remaining battery is presented both numerically and visually via a segmented arc. Directly below, the estimated charge or runtime is shown in hours and minutes. Along the sides of the screen, AC and DC power input and output are broken down in watts. System icons flank the upper corners, indicating ECO mode, connectivity status, and fan operation. Alerts appear in the lower corners with a flashing indicator. The display is not touch-sensitive, and there are no layered menus. Everything is presented in one view. Visibility holds up in bright conditions without overwhelming in low light. The left side houses dual cooling vents and serves as a passive intake for airflow. The 120/240V 50A AC input/output port and high-capacity outputs, including the 120V/30A TT-30R and 120V/240V 50A NEMA L14-50R outlets, are well located. The 50A AC input also supports charging from a 120/240V gas generator, making it ideal for extended power outages. These ports are clearly labeled. Rubberized flaps protect these areas. A grounding screw is located near the input ports. Vents positioned near these ports help manage thermal output. During charging or peak load, the integrated fans remain active but quiet, operating at around 40 to 50 dB under standard use. The right side is used for expansion. This is where the Apex 300 connects to the B300K battery via a shorter, more manageable cable. Compared to the previous longer cable version, this design saves space and improves efficiency with a more compact setup. That link locks securely and routes downward. A sealed accessory port sits next to the connector. The upper portion includes additional ventilation similar to the left side. There’s no interference between ports, and stacking doesn’t block airflow. The B300K adds 2764.8Wh to the total system capacity. At nearly 79 pounds, it’s only slightly lighter than the main unit. Each side of the B300K includes a top-mounted handle for lifting. When docked, the battery aligns flush with the Apex 300 and maintains overall balance. Up to four B300K modules can be stacked, but extra securing is recommended when exceeding two levels. Cooling is managed through a dual fan system located behind the side grills. These stay active during higher loads or rapid charging. Fan noise remains even, with no distracting pitch or rattle. This makes the Apex 300 usable near sleeping areas or indoor workspaces without disturbance. DC output is delivered through the optional Hub D1. This hub adds USB-C, USB-A, DC5521, a 12V auto socket, and a 50A Anderson connector standing out as a high-power DC port designed for safety and stability. It attaches vertically and doesn’t expand the unit’s footprint. If you rely on DC or USB-based devices, the hub becomes essential. 
The Bluetti app mirrors much of what’s shown on the Apex 300’s physical display. Once paired via Wi-Fi or Bluetooth, it displays a central battery status ring with remaining percentage, real-time breakdowns of AC and DC input/output wattage, and estimated time until full charge or depletion. Users can toggle AC and DC outputs, track solar contribution, and review historical usage. The interface uses strong visual cues with all major controls accessible directly from the home screen. Charging modes, notifications, and system alerts are accessed without diving through submenus. The layout prioritizes quick access and clarity over aesthetics. Everything about the Apex 300 centers on performance. It’s a modular, high-output power system designed for actual use, not showroom aesthetics. Whether keeping food cold during blackouts or running appliances off-grid, it stays focused on delivering energy where it’s needed most. Performance This review centers on standalone use without any home integration. When the power goes out, whether from weather, an accident, or a grid failure, you plug in what you need and the Apex 300 just runs. No rewiring. No fuss. All testing here used the onboard AC ports directly. In one overnight “staged” outage, the unit powered a full-size refrigerator, router, lights, and a breathing machine. Output stayed steady, and the digital panel clearly showed remaining time and load. The app mirrored this from another room. Power usage was easy to track, and the fridge didn’t cycle off. On a long weekend of stay-at-home glamping, the Apex 300 handled a Typhur air fryer, a drip coffee machine, and a portable AC without blinking. The 3,840W output had no problem handling the startup surge. The fans kicked on but didn’t become a distraction. Nothing tripped, nothing overheated. On another occasion, it powered backyard lighting, a portable fridge, and charged phones during an overnight glamping setup. Later, during a neighborhood blackout caused by a downed transformer, the Apex 300 powered a microwave, a drip coffee maker, and several LED lanterns while also recharging phones and two-way radios. It helped keep things calm without dragging out a gas generator. During another outage, it kept two fans and a portable AC unit running through the night in a hot upstairs office. While I don’t rely on a CPAP device, anyone who does can rest assured knowing the Apex 300 can power one continuously without issue. The ports are spaced well enough to plug in multiple devices without overlap or cord clutter. If your fridge runs on AC power, as most home units do, you don’t need anything extra. Just plug it into one of the four 120V outlets or the larger NEMA sockets, and it works. The Apex 300 delivers clean, reliable AC power for standard appliances. However, if you have a 12V DC fridge like those used in vans or campsites, limitations appear. The Apex 300 doesn’t have native DC output for those loads without an accessory. Everything here was tested without tying into a breaker panel or generator loop. This is power where you need it, when the wall socket doesn’t exist. The Apex 300 isn’t just spec sheets—it held up during real blackouts, heatwaves, and extended unplugged days. It powered what mattered, and didn’t get in the way. Emergency Runtime Scenarios In a blackout with no charging, the Apex 300 offers 2,764.8Wh. Adding the B300K doubles that to 5,529.6Wh. A basic emergency load including a fridge, laptop, router, phone, lights, and a CPAP draws about 1,950 to 2,200Wh daily. 
The Apex 300 alone powers this for roughly one day. Stretch it to 1.5 days by cutting nonessential loads. With the B300K, expect 2 to 2.5 days. Focus on the fridge and communication gear to reach 3 days. Cycle loads instead of running everything at once. Run the fridge during the day. Charge devices one at a time. Use lights only when needed. Sustainability While I haven’t personally tested the Apex 300 with solar panels, the sustainability potential here deserves serious attention. The system’s solar integration capabilities transform it from the category of home battery backup to a genuine renewable energy solution with remarkable long-term value. The Apex 300’s most impressive feature is its exceptional solar input capacity. When paired with BLUETTI’s SolarX 4K Solar Charge Controller, a single unit can process up to 6,400W of solar input. This represents a quantum leap beyond typical portable power stations that max out around 1,000-2,000W. For perspective, this means you could potentially recharge the entire system in just a few hours of good sunlight rather than waiting all day or longer. Most foldable solar panels might have inherent limitations in efficiency and are dependent on weather conditions, which is why a high input capacity for energy storage is so crucial. The Apex 300 maximizes every minute of sunshine, capturing significantly more energy during peak daylight hours. This efficiency accelerates the system’s potential payback period to approximately two years according to BLUETTI’s calculations. Few renewable energy investments offer such a rapid return. The Apex 300 avoids the usual tradeoff between portability and long-term value. At its core are BLUETTI’s automotive-grade LFP batteries, rated for over 6,000 charge cycles. That translates to around 17 years of daily use, nearly doubling the lifespan of many competing systems that typically last 3,000 to 4,000 cycles. This added durability cuts down on the frequency of replacements, which in turn reduces electronic waste and long-term costs. BLUETTI reinforces this commitment to longevity with rigorous validation. The larger Elite 200 V2 Solar Generator has passed 33 CNAS-certified automotive-grade tests, underscoring the brand’s approach to building quality and environmental responsibility across its ecosystem. This solar integration capability creates genuine resilience for regions prone to extreme weather events like Texas and Florida. The system’s dual MPPT controllers enable remarkably fast charging, reaching 80% capacity in just 40 minutes under optimal conditions. When fully expanded, the Apex 300 system can scale to deliver over 11kW of output with 58kWh of storage capacity, providing enough power to maintain essential home systems for a week without grid access. The AT1 Smart Distribution Box completes the sustainability equation by intelligently managing power flow between solar panels and the grid. This allows homeowners to create a customized, automated whole-home backup system that prioritizes renewable energy usage while maintaining grid connectivity when needed. The entire ecosystem works together through BLUETTI’s smartphone app, making sustainable energy management accessible even to those without technical expertise. Value The Apex 300 represents a significant investment. What truly matters isn’t only the initial cost but the long-term value proposition. 
This portable power station delivers exceptional returns through its versatility, durability, and advanced capabilities that go far beyond emergency backup. The system’s true value emerges when you consider how it integrates into everyday life and critical situations without compromise. The system’s exceptional efficiency further enhances its value proposition. With remarkably low 20W AC idle drain, the Apex 300 preserves power when not actively running devices. This translates to 24 additional hours of refrigerator runtime, 2.5 times longer AC standby, and 2.5 more days of CPAP operation compared to competing systems with higher idle consumption. During extended outages, this efficiency becomes invaluable, potentially meaning the difference between maintaining power for essential devices and running out at critical moments. The 0ms UPS switching ensures absolutely seamless power transitions, protecting sensitive electronics and providing peace of mind for those relying on medical equipment. Perhaps most impressive is how the Apex 300 scales with your needs without forcing unnecessary complexity. The base unit delivers substantial capability on its own, while the modular expansion system allows growth without replacing your initial investment. The optional Hub D1 adds comprehensive DC output options, the B300K batteries multiply capacity, and solar integration unlocks renewable energy potential. This flexibility means the system grows with your needs rather than becoming obsolete when requirements change. Few products in any category offer this combination of immediate utility, long-term durability, exceptional efficiency, and adaptable design. For anyone serious about energy independence, weather resilience, or sustainable power solutions, the Apex 300 delivers value that extends far beyond its price tag. The Bottom Line This review set out to evaluate the Apex 300 as a practical power solution for real-world scenarios, from blackouts to outdoor adventures. The results speak for themselves after extended testing with everyday appliances and devices. The Apex 300 delivers on its promises with exceptional performance, remarkable durability, and thoughtful design choices that prioritize user experience. Its 17 years lifespan (nearly double the industry standard), ultra-efficient 20W idle drain, and seamless expandability create a system that grows with your needs rather than becoming obsolete. While we didn’t test solar integration, the potential 6,400W solar input capacity through the SolarX 4K could transform this from merely a backup solution into a comprehensive renewable energy system with a potential two-year payback period. Whether you’re preparing for power outages or planning off-grid adventures, the Apex 300 offers a flexible solution with support for battery, solar, and even gas input. It’s designed to handle real-world energy needs with surprising ease. Among the available options, the one we’re reviewing, the Apex 300 + B300K expansion battery bundle, stands out because it costs just $0.36 per watt-hour, with tax and shipping already included. The offer is limited by both time and availability, with installment payments now available for added flexibility. There are other bundles designed for different needs, so it’s worth checking which one fits your setup. The Apex 300 campaign is now live on Indiegogo until July 19. Click Here to Buy Now: $1199 $2399 ($1200 off). 
Hurry, deal ends soon!
The post BLUETTI Apex 300 Review: The All-in-One Solar, Gas, and Battery Solution for Blackouts and Beyond first appeared on Yanko Design.
  • Collaborators: Healthcare Innovation to Impact

    JONATHAN CARLSON: From the beginning, healthcare stood out to us as an important opportunity for general reasoners to improve the lives and experiences of patients and providers. Indeed, in the past two years, there’s been an explosion of scientific papers looking at the application first of text reasoners in medicine, then multi-modal reasoners that can interpret medical images, and now, most recently, healthcare agents that can reason with each other. But even more impressive than the pace of research has been the surprisingly rapid diffusion of this technology into real-world clinical workflows. 
    LUNGREN: So today, we’ll talk about how our cross-company collaboration has shortened that gap and delivered advanced AI capabilities and solutions into the hands of developers and clinicians around the world, empowering everyone in health and life sciences to achieve more. I’m Doctor Matt Lungren, chief scientific officer for Microsoft Health and Life Sciences. 
    CARLSON: And I’m Jonathan Carlson, vice president and managing director of Microsoft Health Futures. 
    LUNGREN: And together we brought some key players leading in the space of AI and healthcare.
    CARLSON: We’ve asked these brilliant folks to join us because each of them represents a mission-critical group of cutting-edge stakeholders, scaling breakthroughs into purpose-built solutions and capabilities for healthcare.
    LUNGREN: We’ll hear today how generative AI capabilities can unlock reasoning across every data type in medicine: text, images, waveforms, genomics. And further, how multi-agent frameworks in healthcare can accelerate complex workflows, in some cases acting as a specialist team member, safely secured inside the Microsoft 365 tools used by hundreds of millions of healthcare enterprise users across the world. The opportunity to save time today and lives tomorrow with AI has never been larger. 
    MATTHEW LUNGREN: Jonathan. You know, it’s been really interesting kind of observing Microsoft Research over the decades. I’ve, you know, been watching you guys in my prior academic career. You are always on the front of innovation, particularly in healthcare.
     JONATHAN CARLSON: I mean, it’s some of what’s in our DNA, I mean, we’ve been publishing in health and life sciences for two decades here. But when we launched Health Futures as a mission-focused lab about 7 or 8 years ago, we really started with the premise that the way to have impact was to really close the loop between, not just good ideas that get published, but good ideas that can actually be grounded in real problems that clinicians and scientists care about, that then allow us to actually go from that first proof of concept into an incubation, into getting real world feedback that allows us to close that loop. And now with, you know, the HLS organization here as a product group, we have the opportunity to work really closely with you all to not just prove what’s possible in the clinic or in the lab, but actually start scaling that into the broader community. 
    CAMERON RUNDE: And one thing I’ll add here is that the problems that we’re trying to tackle in healthcare are really complex.
    CARLSON: So, Matt, back to you. What are you guys doing in the product group? How do you guys see these models getting into the clinic?
    LUNGREN: You know, I think a lot of people, you know, think about AI as just, you know, maybe just even a few years old because of GPT and how that really captured the public’s consciousness. Right?
    And so, you think about the speech-to-text technology of being able to dictate something, for a clinic note or for a visit, that was typically based on Nuance technology. And so there’s a lot of product understanding of the market, how to deliver something that clinicians will use, understanding the pain points and workflows and really that Health IT space, which is sometimes the third rail, I feel like with a lot of innovation in healthcare. 
    But beyond that, I mean, I think now that we have this really powerful engine of Microsoft and the platform capabilities, we’re seeing innovations on the healthcare side for data storage and data interoperability with different types of medical data. You have new applications coming online, and the ability, of course, to see generative AI now infused into the speech-to-text and becoming Dragon Copilot, which is something that has been, you know, tremendously well received by the community. 
    Physicians are able to now just have a conversation with a patient. They turn to their computer and the note is ready for them. There’s no more this, we call it keyboard liberation. I don’t know if you heard that before. And that’s just been tremendous. And there’s so much more coming from that side. And then there’s other parts of the workflow that we also get engaged in — the diagnostic workflow.
    So medical imaging, sharing images across different hospital systems, the list goes on. And so now when you move into AI, we feel like there’s a huge opportunity to deliver capabilities into the clinical workflow via the products and solutions we already have. But, I mean, now that we’ve kind of expanded our team to involve Azure and platform, we’re really able to focus on the developers.
    WILL GUYMAN: Yeah. And you’re always telling me as a doctor how frustrating it is to be spending time at the computer instead of with your patients. I think you told me, you know, 4,000 clicks a day for the typical doctor, which is tremendous. And something like Dragon Copilot can save that five minutes per patient. But it can also now take actions after the patient encounter so it can draft the after-visit summary. 
    It can order labs and medications for the referral. And that’s incredible. And we want to keep building on that. There’s so many other use cases across the ecosystem. And so that’s why in Azure AI Foundry, we have translated a lot of the research from Microsoft Research and made that available to developers to build and customize for their own applications. 
    SMITHA SALIGRAMA: Yeah. And as you were saying, in our transformation of moving from solutions to platforms and scaling solutions to multiple scenarios, as we put our models in AI Foundry, we provide these developer capabilities like bring your own data and fine-tuning.
    LUNGREN: Well, I want to do a reality check because, you know, I think to us that are now really focused on technology, it seems like, I’ve heard this story before, right. I, I remember even in my academic clinical days where it felt like technology was always the quick answer, and there was maybe a disconnect between what my problems were or what I thought needed to be done versus the solutions that were created or offered to us. And I guess at some level, how, Jonathan, do you think about this? Because to do things well in the science space is one thing; to do things well in science but then also have it be something that actually drives healthcare is another.
    CARLSON: Yeah. I mean, as you said, I think one of the core pathologies of Big Tech is we assume every problem is a technology problem. And that’s all it will take to solve the problem. And I think, look, I was trained as a computational biologist, and that sits in the awkward middle between biology and computation. And the thing that we always have to remember, the thing that we were very acutely aware of when we set out, was that we are not the experts. We do have, you know, you as an M.D., we have everybody on the team, we have biologists on the team. 
    But this is a big space. And the only way we’re going to have real impact, the only way we’re even going to pick the right problems to work on is if we really partner deeply, with providers, with EHRvendors, with scientists, and really understand what’s important and again, get that feedback loop. 
    RUNDE: Yeah, I think we really need to ground the work that we do in the science itself. You need to understand the broader ecosystem and the broader landscape across healthcare, and the problems we think are important. Because, as Jonathan said, we’re not the experts in healthcare.
    CARLSON: When we really launched this, this mission, 7 or 8 years ago, we really came in with the premise of, if we decide to stop, we want to be sure the world cares. And the only way that’s going to be true is if we’re really deeply embedded with the people that matter–the patients, the providers and the scientists.
    LUNGREN: And now it really feels like this collaborative effort, you know, really can help start to extend that mission. Right. I think, you know, Will and Smitha, that we definitely feel the passion and the innovation. And we certainly benefit from those collaborations, too. But then we have these other partners and even customers, right, that we can start to tap into and have that flywheel keep spinning. 
    GUYMAN: Yeah. And the whole industry is an ecosystem. So, we have our own data sets at Microsoft Research that you’ve trained amazing AI models with. And those are in the catalog. But then you’ve also partnered with institutions like Providence or Paige AI. And those models are in the catalog with their data. And then there are third parties like NVIDIA that have their own specialized proprietary data sets, and their models are there too. So, we have this ecosystem of open-source models. And maybe, Smitha, you want to talk about how developers can actually customize these. 
    SALIGRAMA: Yeah. So we use the Azure AI Foundry ecosystem. Developers can feel at home if they’re using the AI Foundry. So they can look at the model cards that we publish alongside the models, understand the use cases of these models, how to quickly bring up these APIs, look at different use cases of how to apply these, and even fine-tune them.
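    A concrete example helps ground what “bringing up these APIs” looks like in practice. The sketch below is a minimal, hypothetical call to a model deployed from the Foundry catalog as a managed online endpoint; the endpoint URL, environment variable, and payload shape are placeholders for illustration, since the actual request and response schema is documented on each model card.

        # Hypothetical call to a model deployed from the Azure AI Foundry catalog.
        # The URL, key variable, and JSON payload shape are placeholders only;
        # consult the specific model card for the real request/response schema.
        import base64
        import os
        import requests

        ENDPOINT = "https://<your-endpoint>.<region>.inference.ml.azure.com/score"  # placeholder
        API_KEY = os.environ["FOUNDRY_ENDPOINT_KEY"]  # placeholder variable name

        def classify_chest_xray(image_path):
            """Send an image to the deployed model and return its JSON response."""
            with open(image_path, "rb") as f:
                payload = {"image": base64.b64encode(f.read()).decode("ascii")}
            headers = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}
            resp = requests.post(ENDPOINT, json=payload, headers=headers, timeout=60)
            resp.raise_for_status()
            return resp.json()

        # Example (hypothetical output): {"label": "abnormal", "score": 0.93}
        # print(classify_chest_xray("study_001.png"))

    The same pattern applies whether the deployment is the published model or a version a team has fine-tuned on its own data.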
    LUNGREN: Yeah, it has been interesting to see. We have these healthcare-specific models, but why not just use the general-purpose models?
    GUYMAN: Well, the general-purpose large language models are amazing for medical general reasoning. So Microsoft Research has shown that they can perform super well on, for example, the United States medical licensing exam; they can exceed doctor performance if they’re just picking between different multiple-choice questions. But real medicine we know is messier. It doesn’t always start with the whole patient context provided as text in the prompt. You have to get the source data, and that raw data is often non-text. The majority of it is non-text. It’s things like medical imaging, radiology, pathology, ophthalmology, dermatology. It goes on and on. And there’s endless signal data, lab data. And so all of these diverse data types need to be processed through specialized models because much of that data is not available on the public internet. 
    And that’s why we’re taking this partner approach, first party and third party models that can interpret all this kind of data and then connect them ultimately back to these general reasoners to reason over that. 
    LUNGREN: So, you know, I’ve been at this company for a while and, you know, familiar with kind of how long it takes, generally to get, you know, a really good research paper, do all the studies, do all the data analysis, and then go through the process of publishing, right, which takes, as, you know, a long time and it’s, you know, very rigorous. 
    And one of the things that struck me, last year, I think we, we started this big collaboration and, within a quarter, you had a Nature paper coming out from Microsoft Research, and that model that the Nature paper was describing was ready to be used by anyone on the Azure AI Foundry within that same quarter. It kind of blew my mind when I thought about it, you know, even though we were all, you know, working very hard to get that done. Any thoughts on that? I mean, has this ever happened in your career? And, you know, what’s the secret sauce to that? 
    CARLSON: Yeah, I mean, the time scale from research to product has been massively compressed. And I’d push that even further, which is to say, the reason why it took a quarter was because we were laying the railroad tracks as we’re driving the train. We have examples right after that when we are launching on Foundry the same day we were publishing the paper. 
    And frankly, the review times are becoming longer than it takes to actually productize the models. I think there’s two things that are going on with that are really converging. One is that the overall ecosystem is converging on a relatively small number of patterns, and that gives us, as a tech company, a reason to go off and really make those patterns hardened in a way that allows not just us, but third parties as well, to really have a nice workflow to publish these models. 
    But the other is actually, I think, a change in how we work, you know, and for most of our history as an industrial research lab, we would do research and then we’d go pitch it to somebody and try and throw it over the fence. We’ve really built a much more integrated team. In fact, if you look at that Nature paper or any of the other papers, there’s folks from product teams. Many of you are on the papers along with our clinical collaborators.
    RUNDE: Yeah. I think one thing that’s really important to note is that there’s a ton of different ways that you can have impact, right? So I like to think about phasing. In Health Futures at least, I like to think about phasing the work that we do. So first we have research, which is really early innovation. And the impact there is getting our technology and our tools out there and really sharing the learnings that we’ve had. 
    So that can be through publications like you mentioned. It can be through open-sourcing our models. And then you go to incubation. So, this is, I think, one of the more new spaces that we’re getting into, which is maybe that blurred line between research and product. Right. Which is, how do we take the tools and technologies that we’ve built and get them into the hands of users, typically through our partnerships? 
    Right. So, we partner very deeply and collaborate very deeply across the industry. And incubation is really important because we get that early feedback. We get an ability to pivot if we need to. And we also get the ability to see what types of impact our technology is having in the real world. And then lastly, when you think about scale, there’s tons of different ways that you can scale. We can scale third-party through our collaborators and really empower them to go to market to commercialize the things that we’ve built together. 
    You can also think about scaling internally, which is why I’m so thankful that we’ve created this flywheel between research and product, and a lot of the models that we’ve built that have gone through research, have gone through incubation, have been able to scale on the Azure AI Foundry. But that’s not really our expertise. Right? The scale piece in research, that’s research and incubation. Smitha, how do you think about scaling? 
    SALIGRAMA: So, there are several angles to scaling the models, the state-of-the-art models we see from the research team. The first angle is open sourcing, to get developer trust, with very generous commercial licenses so that they can use the models for their own use cases. The second is, we also allow them to customize these models and fine-tune them.
    GUYMAN: And as one example, you know, University of Wisconsin Health, which Matt knows well. They took one of our models, which is highly versatile. They customized it in Foundry and optimized it to reliably identify abnormal chest X-rays, the most common imaging procedure, so they could improve their turnaround time and triage quickly. And that’s just one example. But we have other partners like Sectra who are doing more operations use cases, automatically routing imaging to the radiologists, setting them up to be efficient. And then Paige AI is doing, you know, biomarker identification for diagnostics and new drug discovery. So, there are so many use cases that we have partners already who are building and customizing.
    LUNGREN: The part that’s striking to me is just that, you know, we could all sit in a room and think about all the different ways someone might use these models on the catalog. And I’m still shocked at the stuff that people use them for and how effective they are. And I think part of that is, you know, again, we talk a lot about generative AI and healthcare and all the things you can do. Again, you know, in text, you refer to that earlier and certainly off the shelf, there’s really powerful applications. But there is, you know, kind of this tip of the iceberg effect where under the water, most of the data that we use to take care of our patients is not text. Right. It’s all the different other modalities. And I think that this has been an unlock right, sort of taking these innovations, innovations from the community, putting them in this ecosystem kind of catalog, essentially. Right. And then allowing folks to kind of, you know, build and develop applications with all these different types of data. Again, I’ve been surprised at what I’m seeing. 
    CARLSON: This has been just one of the most profound shifts that’s happened in the last 12 months, really. I mean, two years ago we had general models in text that really shifted how we think about, I mean, natural language processing got totally upended by that. Turns out the same technology works for images as well. It doesn’t only allow you to automatically extract concepts from images, but allows you to align those image concepts with text concepts, which means that you can have a conversation with that image. And once you’re in that world now, you are in a place where you can start stitching together these multimodal models that really change how you can interact with the data, and how you can start getting more information out of the raw primary data that is part of the patient journey.
    LUNGREN: Well, and we’re going to get to that because I think you just touched on something. And I want to re-emphasize stitching these things together. There’s a lot of different ways to potentially do that. Right? There’s ways that you can literally train the model end to end with adapters and all kinds of other early fusion approaches. All kinds of ways. But one of the things, the word of, I guess, the year, is going to be agents, and an agent is a very interesting term to think about how you might abstract away some of the components or the tasks that you want the model to accomplish in the midst of a real human-to-model interaction. Can you talk a little bit more about how we’re thinking about agents in this platform approach? 
    GUYMAN: Well, this is our newest addition to the Azure AI Foundry. So there’s an agent catalog now where we have a set of pre-configured agents for health care. And then we also have a multi-agent orchestrator that can jump in.
    LUNGREN: And, and I really like that concept because, you know, from the user personas, I think about myself as a user. How am I going to interact with these agents? Where does it naturally fit? And I sort of, you know, I’ve seen some of the demonstrations and some of the work that’s going on with Stanford in particular, showing that, literally in a Teams chat, I can have my clinician colleagues and I can have specialized healthcare agents.
    It is a completely mind-blowing thing for me. And it’s a light bulb moment for me too. I wonder, what have we heard from folks that have, you know, tried out this health care agent orchestrator in this kind of deployment environment via Teams?
    GUYMAN: Well, someone joked, you know, are you sure you’re not using Teams because you work at Microsoft? But then we actually were meeting with one of the radiologists at one of our partners, and they said that that morning they had just done a Teams meeting where they had met with other specialists to talk about a patient’s cancer case and come up with a treatment plan. 
    And that was the light bulb moment for us. We realized, actually, Teams is already being used by physicians as an internal communication tool, as a tool to get work done. And especially since the pandemic, a lot of meetings have moved to virtual and telemedicine. And so it’s a great distribution channel for AI, which has often been a struggle to actually get into the hands of clinicians. And so now we’re allowing developers to build and then deploy very easily and extend it into their own workflows. 
    CARLSON: I think that’s such an important point. I mean, if you think about it, one of the really important concepts in computer science is an application programming interface, like some set of rules that allow two applications to talk to each other. One of the big pushes, really important pushes, in medicine has been standards that allow us to actually have data standards and APIs that allow these to talk to each other, and yet still we end up with these silos. There’s silos of data. There’s silos of applications.
    And just like when you and I work on our phone, we have to go back and forth between applications. One of the things that I think agents do is that it takes the idea that now you can use language to understand intent and effectively program an interface, and it creates a whole new abstraction layer that allows us to simplify the interaction between not just humans and the endpoint, but also for developers. 
    It allows us to have this abstraction layer that lets different developers focus on different types of models, and yet stitch them all together in a very, very natural, way, not just for the users, but for the ability to actually deploy those models. 
    SALIGRAMA: Just to add to what Jonathan was mentioning, the other cool thing about the Microsoft Teams user interface is it’s also enterprise ready.
    RUNDE: And one important thing that we’re thinking about, is exactly this from the very early research through incubation and then to scale, obviously. Right. And so early on in research, we are actively working with our partners and our collaborators to make sure that we have the right data privacy and consent in place. We’re doing this in incubation as well. And then obviously in scale. Yep. 
    LUNGREN: So, I think AI has always been thought of as a savior kind of technology. We talked a little bit about how there’s been some ups and downs in terms of the ability for technology to be effective in health care. At the same time, we’re seeing a lot of new innovations that are really making a difference. But then we kind of get, you know, we talked about agents a little bit. It feels like we’re maybe abstracting too far. Maybe it’s things are going too fast, almost. What makes this different? I mean, in your mind is this truly a logical next step or is it going to take some time? 
    CARLSON: I think there’s a couple things that have happened. I think first, on just a pure technology. What led to ChatGPT? And I like to think of really three major breakthroughs.
    The first was new mathematical concepts of attention, which really means that we now have a way that a machine can figure out which parts of the context it should actually focus on, just the way our brains do. Right? I mean, if you’re a clinician and somebody is talking to you, the majority of that conversation is not relevant for the diagnosis. But you know how to zoom in on the parts that matter. That’s a super powerful mathematical concept. The second one is this idea of self-supervision. So, I think one of the fundamental problems of machine learning has been that you have to train on labeled training data, and labels are expensive, which means data sets are small, which means the final models are very narrow and brittle. And the idea of self-supervision is that you can just get a model to automatically learn concepts, and for language that’s just predicting the next word. And what’s important about that is that it leads to models that can actually manipulate and understand really messy text, pull out what’s important, and then stitch that back together in interesting ways.
    And the third concept, which came out of those first two, was just the observation about scale. And that’s that more is better: more data, more compute, bigger models. And that really leads to a reason to keep investing, and for these models to keep getting better. So that, as a groundwork, is what led to ChatGPT. That’s what led to our ability now to not just have rule-based systems or simple machine learning based systems to take a messy EHR record, say, and pull out a couple concepts.
    But to really feed the whole thing in and say, okay, I need you to figure out which concepts are in here. And is this particular attribute there, for example. That’s now led to the next breakthrough, which is all those core ideas apply to images as well. They apply to proteins, to DNA. And so we’re starting to see models that understand images and the concepts of images, and can actually map those back to text as well. 
    So, you can look at a pathology image and say not just “that’s a cell,” but “it appears that there’s a certain sort of cancer in this particular tissue.” And then you take those two things together and you layer on the fact that now you have a model, or a set of models, that can understand intent, can understand human concepts and biomedical concepts, and you can start stitching them together into specialized agents that can actually reason with each other, which at some level gives you an API as a developer to say, okay, I need to focus on a pathology model and get this really, really sound, while somebody else is focusing on a radiology model, but now allows us to stitch these all together with a user interface that we can now talk to through natural language. 
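    To make the first of those breakthroughs concrete, here is the standard scaled dot-product attention computation as a small, self-contained numerical sketch. This is the textbook formulation, not code from any of the models discussed here; the weights show how much each token "zooms in" on every other token.

        # Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V
        import numpy as np

        def attention(Q, K, V):
            d_k = Q.shape[-1]
            scores = Q @ K.T / np.sqrt(d_k)                 # similarity of each query to each key
            scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
            weights = np.exp(scores)
            weights /= weights.sum(axis=-1, keepdims=True)  # softmax: where to focus
            return weights @ V                              # weighted mix of the values

        rng = np.random.default_rng(0)
        Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))
        print(attention(Q, K, V))  # one 4-dimensional output per input token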
    RUNDE: I’d like to double click a little bit on that medical abstraction piece that you mentioned. Just the amount of data, clinical data that there is for each individual patient. Let’s think about cancer patients for a second to make this real. Right. For every cancer patient, it could take a couple of hours to structure their information. And why is that important? Because, you have to get that information in a structured way and abstract relevant information to be able to unlock precision health applications right, for each patient. So, to be able to match them to a trial, right, someone has to sit there and go through all of the clinical notes from their entire patient care journey, from the beginning to the end. And that’s not scalable. And so one thing that we’ve been doing in an active project that we’ve been working on with a handful of our partners, but Providence specifically, I’ll call out, is using AI to actually abstract and curate that information. So that gives time back to the health care provider to spend with patients, instead of spending all their time curating this information. 
    And this is super important because it sets the scene and the backbone for all those precision health applications. Like I mentioned, clinical trial matching, tumor boards is another really important example here. Maybe Matt, you can talk to that a little bit.
    LUNGREN: It’s a great example. And you know, it’s so funny. We’ve talked about this use case and, you know, the healthcare workflow around it.
    And a tumor board is a critical meeting that happens at many cancer centers where specialists all get together, come with their perspective, and make a comment on what would be the best next step in treatment. But the background work in preparing for that is, you know, again, organizing the data. But to your point, also, what are the clinical trials that are active? There are thousands of clinical trials. There are hundreds added every day. How can anyone keep up with that? And these are the kinds of use cases that start to bubble up. And you realize that a technology that understands concepts, context, and can reason over vast amounts of data with a language interface: that is a powerful tool. Even before we get to unlocking new insights and precision medicine, this, to me, is that idea of saving time before saving lives. And there’s an enormous amount of undifferentiated heavy lifting that happens in healthcare.
    GUYMAN: And we’ve packaged these as agents. The manual abstraction work that, you know, takes hours, now we have an agent for. It’s in Foundry along with the clinical trial matching agent, which I think at Providence you showed could double the match rate over the baseline that they were using, by using the AI for multiple data sources. So, we have that, and then we have this orchestration that is using this really neat technology from Microsoft Research: Semantic Kernel and Magentic-One.
    There’s turn taking, there’s negotiation between the agents. So, there’s this really interesting system that’s emerging. And again, this is all possible to be used through Teams. And there’s some great extensibility as well. We’ve been talking about that and working on some cool tools. 
    SALIGRAMA: Yeah. Yeah. No, I think, if I have to geek out a little bit on how all these agentic orchestrations are coming together, like, I’ve been in software engineering for decades, and it’s kind of a next version of distributed systems where you have these services that talk to each other. It’s more natural now because LLMs give these agents natural ways of conversing instead of structured API calls. We have these agents which can naturally understand how to talk to each other. Right. So this is like the next evolution of our systems. And the way we’re packaging all of this is in multiple ways, based on all the standards and innovation that’s happening in this space. So, first of all, we are building these agents that are very good at specific tasks, like Will was saying, a trial matching agent or patient timeline agents. 
    So, we take all of these, and then we package them in a workflow and an orchestration. We use standard frameworks, some of these coming from research: Semantic Kernel and Magentic-One. All of this also allows us to extend the system with custom agents that can be plugged in. So, we are open sourcing the entire agent orchestration in AI Foundry templates, so that developers can plug in their own agents and make their own workflows out of it. So, a lot of cool innovation happening to apply this technology to specific scenarios and workflows. 
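    As a rough, conceptual illustration of that orchestration pattern: the real healthcare agent orchestrator builds on Semantic Kernel and Magentic-One and ships as open-source AI Foundry templates, while the agent names, keywords, and replies below are invented purely for this sketch.

        # Conceptual sketch of routing one request across specialist agents.
        # Agents, keywords, and replies are illustrative, not the actual
        # healthcare agent orchestrator APIs.

        def trial_matching_agent(message):
            return "TrialMatcher: found 3 candidate trials (illustrative)."

        def patient_timeline_agent(message):
            return "PatientTimeline: structured 42 encounters into a timeline (illustrative)."

        AGENTS = {
            "trial": trial_matching_agent,
            "timeline": patient_timeline_agent,
        }

        def orchestrate(message):
            """Simple turn-taking: each agent whose keyword appears in the request gets a turn."""
            replies = [agent(message) for key, agent in AGENTS.items() if key in message.lower()]
            return replies or ["Orchestrator: no specialist matched; answering directly."]

        for reply in orchestrate("Build the patient timeline and check trial eligibility"):
            print(reply)

    In this toy version, a custom agent is just another entry in that registry, which is the extensibility point discussed next.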
    LUNGREN: Well, I was going to ask you, like, so as part of that extension. So, like, you know, folks can say, hey, I have maybe a really specific part of my workflow that I want to use some agents for, maybe one of the agents that can do PubMed literature search, for example. But then there’s also agents that, come in from the outside, you know, sort of like I could, I can imagine a software company or AI company that has a built-in agent that plugs in as well. 
    SALIGRAMA: Yeah. Yeah, absolutely. So, you can bring your own agent. And then we have these, standard ways of communicating with agents and integrating with the orchestration language so you can bring your own agent and extend this health care agent, agent orchestrator to your own needs. 
    LUNGREN: I can just think of, like, in a group chat, like a bunch of different specialist agents. And I really would want an orchestrator to help find the right tool, to your point earlier, because I’m guessing this ecosystem is going to expand quickly. Yeah. And I may not know which tool is best for which question. I just want to ask the question. Right. 
    SALIGRAMA: Yeah. Yeah. 
    CARLSON: Well, I think to that point too, I mean, you said an important point here, which is tools, and these are not necessarily just AI tools. Right? I mean, we’ve known this for a while, right? LLMs are not very good at math, but you can have them use a calculator and then it works very well. And, you know, you guys both brought up the universal medical abstraction a couple times. 
    And one of the things that I find so powerful about that is we’ve long had this vision within the precision health community that we should be able to have a learning hospital system. We should be able to actually learn from the actual real clinical experiences that are happening every day, so that we can stop practicing medicine based off averages. 
    There’s a lot of work that’s gone on for the last 20 years about how to actually do causal inference. That’s not an AI question. That’s a statistical question. The bottleneck, the reason why we haven’t been able to do that is because most of that information is locked up in unstructured text. And these other tools need essentially a table. 
    And so now you can decompose this problem, say, well, what if I can use AI not to get to the causal answer, but to just structure the information. So now I can put it into the causal inference tool. And these sorts of patterns I think again become very, not just powerful for a programmer, but they start pulling together different specialties. And I think we’ll really see an acceleration, really, of collaboration across disciplines because of this. 
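    The decomposition Carlson describes, use a model to structure the free text first and then hand the resulting table to conventional statistical tooling, can be sketched in a few lines. The extraction step below is a placeholder for whatever abstraction model actually does that work, and the field names and notes are invented for illustration.

        # "Structure first, analyze second": turn free-text notes into rows,
        # then let ordinary tabular or causal-inference tools take over.
        import csv
        import io

        def extract_fields(note):
            """Placeholder extractor; in practice an LLM or specialized abstraction model."""
            text = note.lower()
            return {
                "stage": "III" if "stage iii" in text else "unknown",
                "smoker": "yes" if "smoker" in text else "no",
            }

        notes = [
            "58F, stage III disease, former smoker, started therapy in March.",
            "62M, stage unknown, no tobacco history, evaluated for trial eligibility.",
        ]

        rows = [extract_fields(n) for n in notes]

        # Once structured, the rows drop straight into any table-based analysis tool.
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=["stage", "smoker"])
        writer.writeheader()
        writer.writerows(rows)
        print(buf.getvalue())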
    CARLSON: So, when I joined Microsoft Research 18 years ago, I was doing work in computational biology. And I would always have to answer the question: why is Microsoft in biomedicine? And I would always kind of joke saying, well, it is because we sell Office and Windows to every healthcare organization. 
    SALIGRAMA: A lot of healthcare organizations already use Microsoft productivity tools, as you mentioned. So, if developers build these agents and use our healthcare orchestration to plug them in and expose them in these productivity tools, they will get access to all these healthcare workers. So the healthcare agent orchestrator we have today integrates with Microsoft Teams, and it showcases an example of how you can @mention these agents and talk to them like you were talking to another person in a Teams chat. And then it also provides examples of these agents and how they can use these productivity tools. One of the examples we have there is how they can summarize the assessments of this whole chat into a Word doc, or even convert that into a PowerPoint presentation, for later.
    CARLSON: One of the things that has struck me is how easy it is to do. I mean, Will, I don’t know if you’ve worked with folks that have gone from 0 to 60, like, how fast? What does that look like? 
    GUYMAN: Yeah, it’s funny for us, the technology to transfer all this context into a Word Document or PowerPoint presentation for a doctor to take to a meeting is relatively straightforward compared to the complicated clinical trial matching multimodal processing. The feedback has been tremendous in terms of, wow, that saves so much time to have this organized report that then I can show up to meeting with and the agents can come with me to that meeting because they’re literally having a Teams meeting, often with other human specialists. And the agents can be there and ask and answer questions and fact check and source all the right information on the fly. So, there’s a nice integration into these existing tools. 
    LUNGREN: We worked with several different centers just to kind of understand, you know, where this might be useful. And, like, as I think we talked about before, the ideas that we’ve come up with again, this is a great one because it’s complex. It’s kind of hairy. There’s a lot of things happening under the hood that don’t necessarily require a medical license to do, right, to prepare for a tumor board and to organize data. But, it’s fascinating, actually. So, you know, folks have come up with ideas of, could I have an agent that can operate an MRI machine, and I can ask the agent to change some parameters or redo a protocol. We thought that was a pretty powerful use case. We’ve had others that have just said, you know, I really want to have a specific agent that’s able to kind of act like deep research does for the consumer side, but based on the context of my patient, so that it can search all the literature and pull the data in the papers that are relevant to this case. And the list goes on and on from operations all the way to clinical, you know, sort of decision making at some level. And I think that the research community that’s going to sprout around this will help us, guide us, I guess, to see what is the most high-impact use cases. Where is this effective? And maybe where it’s not effective.
    But to me, the part that makes me so, I guess excited about this is just that I don’t have to think about, okay, well, then we have to figure out Health IT. Because it’s always, you know, we always have great ideas and research, and it always feels like there’s such a huge chasm to get it in front of the health care workers that might want to test this out. And it feels like, again, this productivity tool use case again with the enterprise security, the possibility for bringing in third parties to contribute really does feel like it’s a new surface area for innovation.
    CARLSON: Yeah, I love that. Look. Let me end by putting you all on the spot. So, in three years, multimodal agents will do what? Matt, I’ll start with you. 
    LUNGREN: I am convinced that it’s going to save a massive amount of time before it saves many lives. 
    RUNDE: I’ll focus on the patient care journey and diagnostic journey. I think it will kind of transform that process for the patient itself and shorten that process. 
    GUYMAN: Yeah, I think we’ve seen already papers recently showing that different modalities surfaced complementary information. And so we’ll see kind of this AI and these agents becoming an essential companion to the physician, surfacing insights that would have been overlooked otherwise. 
    SALIGRAMA: And similar to what you guys were saying, agents will become important assistants to healthcare workers, reducing a lot of the documentation and excess workflow they have to do. 
    CARLSON: I love that. And I guess for my part, I think really what we’re going to see is a massive unleash of creativity. We’ve had a lot of folks that have been innovating in this space, but they haven’t had a way to actually get it into the hands of early adopters. And I think we’re going to see that really lead to an explosion of creativity across the ecosystem. 
    LUNGREN: So, where do we get started? Like where are the developers who are listening to this? The folks that are at, you know, labs, research labs and developing health care solutions. Where do they go to get started with the Foundry, the models we’ve talked about, the healthcare agent orchestrator. Where do they go?
    GUYMAN: So AI.azure.com is the AI Foundry. It’s a website you can go as a developer. You can sign in with your Azure subscription, get your Azure account, your own VM, all that stuff. And you have an agent catalog, the model catalog. You can start from there. There is documentation and templates that you can then deploy to Teams or other applications. 
    LUNGREN: And tutorials are coming. Right. We have recordings of tutorials. We’ll have Hackathons, some sessions and then more to come. Yeah, we’re really excited.  
    LUNGREN: Thank you so much, guys for joining us. 
    CARLSON: Yes. Yeah. Thanks. 
    SALIGRAMA: Thanks for having us.  
    Collaborators: Healthcare Innovation to Impact
    JONATHAN CARLSON: From the beginning, healthcare stood out to us as an important opportunity for general reasoners to improve the lives and experiences of patients and providers. Indeed, in the past two years, there’s been an explosion of scientific papers looking at the application first of text reasoners and medicine, then multi-modal reasoners that can interpret medical images, and now, most recently, healthcare agents that can reason with each other. But even more impressive than the pace of research has been the surprisingly rapid diffusion of this technology into real world clinical workflows.  LUNGREN: So today, we’ll talk about how our cross-company collaboration has shortened that gap and delivered advanced AI capabilities and solutions into the hands of developers and clinicians around the world, empowering everyone in health and life sciences to achieve more. I’m Doctor Matt Lungren, chief scientific officer for Microsoft Health and Life Sciences.  CARLSON: And I’m Jonathan Carlson, vice president and managing director of Microsoft Health Futures.  LUNGREN: And together we brought some key players leading in the space of AI and health CARLSON: We’ve asked these brilliant folks to join us because each of them represents a mission critical group of cutting-edge stakeholders, scaling breakthroughs into purpose-built solutions and capabilities for health LUNGREN: We’ll hear today how generative AI capabilities can unlock reasoning across every data type in medicine: text, images, waveforms, genomics. And further, how multi-agent frameworks in healthcare can accelerate complex workflows, in some cases acting as a specialist team member, safely secured inside the Microsoft 365 tools used by hundreds of millions of healthcare enterprise users across the world. The opportunity to save time today and lives tomorrow with AI has never been larger.  MATTHEW LUNGREN: Jonathan. You know, it’s been really interesting kind of observing Microsoft Research over the decades. I’ve, you know, been watching you guys in my prior academic career. You are always on the front of innovation, particularly in health  JONATHAN CARLSON: I mean, it’s some of what’s in our DNA, I mean, we’ve been publishing in health and life sciences for two decades here. But when we launched Health Futures as a mission-focused lab about 7 or 8 years ago, we really started with the premise that the way to have impact was to really close the loop between, not just good ideas that get published, but good ideas that can actually be grounded in real problems that clinicians and scientists care about, that then allow us to actually go from that first proof of concept into an incubation, into getting real world feedback that allows us to close that loop. And now with, you know, the HLS organization here as a product group, we have the opportunity to work really closely with you all to not just prove what’s possible in the clinic or in the lab, but actually start scaling that into the broader community.  CAMERON RUNDE: And one thing I’ll add here is that the problems that we’re trying to tackle in health CARLSON: So, Matt, back to you. What are you guys doing in the product group? How do you guys see these models getting into the clinic? LUNGREN: You know, I think a lot of people, you know, think about AI is just, you know, maybe just even a few years old because of GPT and how that really captured the public’s consciousness. Right? 
And so, you think about the speech-to-text technology of being able to dictate something, for a clinic note or for a visit, that was typically based on Nuance technology. And so there’s a lot of product understanding of the market, how to deliver something that clinicians will use, understanding the pain points and workflows and really that Health IT space, which is sometimes the third rail, I feel like with a lot of innovation in healthcare.  But beyond that, I mean, I think now that we have this really powerful engine of Microsoft and the platform capabilities, we’re seeing, innovations on the healthcare side for data storage, data interoperability, with different types of medical data. You have new applications coming online, the ability, of course, to see generative AI now infused into the speech-to-text and, becoming Dragon Copilot, which is something that has been, you know, tremendously, received by the community.  Physicians are able to now just have a conversation with a patient. They turn to their computer and the note is ready for them. There’s no more this, we call it keyboard liberation. I don’t know if you heard that before. And that’s just been tremendous. And there’s so much more coming from that side. And then there’s other parts of the workflow that we also get engaged in — the diagnostic workflow. So medical imaging, sharing images across different hospital systems, the list goes on. And so now when you move into AI, we feel like there’s a huge opportunity to deliver capabilities into the clinical workflow via the products and solutions we already have. But, I mean, we’ll now that we’ve kind of expanded our team to involve Azure and platform, we’re really able to now focus on the developers. WILL GUYMAN: Yeah. And you’re always telling me as a doctor how frustrating it is to be spending time at the computer instead of with your patients. I think you told me, you know, 4,000 clicks a day for the typical doctor, which is tremendous. And something like Dragon Copilot can save that five minutes per patient. But it can also now take actions after the patient encounter so it can draft the after-visit summary.  It can order labs and medications for the referral. And that’s incredible. And we want to keep building on that. There’s so many other use cases across the ecosystem. And so that’s why in Azure AI Foundry, we have translated a lot of the research from Microsoft Research and made that available to developers to build and customize for their own applications.  SMITHA SALIGRAMA: Yeah. And as you were saying, in our transformation of moving from solutions to platforms and as, scaling solutions to other, multiple scenarios, as we put our models in AI Foundry, we provide these developer capabilities like bring your own data and fine LUNGREN: Well, I want to do a reality check because, you know, I think to us that are now really focused on technology, it seems like, I’ve heard this story before, right. I, I remember even in, my academic clinical days where it felt like technology was always the quick answer and it felt like technology was, there was maybe a disconnect between what my problems were or what I think needed to be done versus kind of the solutions that were kind of, created or offered to us. And I guess at some level, how Jonathan, do you think about this? Because to do things well in the science space is one thing, to do things well in science, but then also have it be something that actually drives health CARLSON: Yeah. 
I mean, as you said, I think one of the core pathologies of Big Tech is we assume every problem is a technology problem. And that’s all it will take to solve the problem. And I think, look, I was trained as a computational biologist, and that sits in the awkward middle between biology and computation. And the thing that we always have to remember, the thing that we were very acutely aware of when we set out, was that we are not the experts. We do have, you know, you as an M.D., we have everybody on the team, we have biologists on the team.  But this is a big space. And the only way we’re going to have real impact, the only way we’re even going to pick the right problems to work on is if we really partner deeply, with providers, with EHRvendors, with scientists, and really understand what’s important and again, get that feedback loop.  RUNDE: Yeah, I think we really need to ground the work that we do in the science itself. You need to understand the broader ecosystem and the broader landscape, across healthwe think are important. Because, as Jonathan said, we’re not the experts in health CARLSON: When we really launched this, this mission, 7 or 8 years ago, we really came in with the premise of, if we decide to stop, we want to be sure the world cares. And the only way that’s going to be true is if we’re really deeply embedded with the people that matter–the patients, the providers and the scientists. LUNGREN: And now it really feels like this collaborative effort, you know, really can help start to extend that mission. Right. I think, you know, Will and Smitha, that we definitely feel the passion and the innovation. And we certainly benefit from those collaborations, too. But then we have these other partners and even customers, right, that we can start to tap into and have that flywheel keep spinning.  GUYMAN: Yeah. And the whole industry is an ecosystem. So, we have our own data sets at Microsoft Research that you’ve trained amazing AI models with. And those are in the catalog. But then you’ve also partnered with institutions like Providence or Page AI . And those models are in the catalog with their data. And then there are third parties like Nvidia that have their own specialized proprietary data sets, and their models are there too. So, we have this ecosystem of open source models. And maybe Smitha, you want to talk about how developers can actually customize these.  SALIGRAMA: Yeah. So we use the Azure AI Foundry ecosystem. Developers can feel at home if they’re using the AI Foundry. So they can look at our model cards that we publish as part of the models we publish, understand the use cases of these models, how to, quickly, bring up these APIs and, look at different use cases of how to apply these and even fine LUNGREN: Yeah it has been interesting to see we have these health GUYMAN: Well, the general-purpose large language models are amazing for medical general reasoning. So Microsoft Research has shown that that they can perform super well on, for example, like the United States medical licensing exam, they can exceed doctor performance if they’re just picking between different multiple-choice questions. But real medicine we know is messier. It doesn’t always start with the whole patient context provided as text in the prompt. You have to get the source data and that raw data is often non-text. The majority of it is non-text. It’s things like medical imaging, radiology, pathology, ophthalmology, dermatology. It goes on and on. And there’s endless signal data, lab data. 
And so all of this diverse data type needs to be processed through specialized models because much of that data is not available on the public internet.  And that’s why we’re taking this partner approach, first party and third party models that can interpret all this kind of data and then connect them ultimately back to these general reasoners to reason over that.  LUNGREN: So, you know, I’ve been at this company for a while and, you know, familiar with kind of how long it takes, generally to get, you know, a really good research paper, do all the studies, do all the data analysis, and then go through the process of publishing, right, which takes, as, you know, a long time and it’s, you know, very rigorous.  And one of the things that struck me, last year, I think we, we started this big collaboration and, within a quarter, you had a Nature paper coming out from Microsoft Research, and that model that the Nature paper was describing was ready to be used by anyone on the Azure AI Foundry within that same quarter. It kind of blew my mind when I thought about it, you know, even though we were all, you know, working very hard to get that done. Any thoughts on that? I mean, has this ever happened in your career? And, you know, what’s the secret sauce to that?  CARLSON: Yeah, I mean, the time scale from research to product has been massively compressed. And I’d push that even further, which is to say, the reason why it took a quarter was because we were laying the railroad tracks as we’re driving the train. We have examples right after that when we are launching on Foundry the same day we were publishing the paper.  And frankly, the review times are becoming longer than it takes to actually productize the models. I think there’s two things that are going on with that are really converging. One is that the overall ecosystem is converging on a relatively small number of patterns, and that gives us, as a tech company, a reason to go off and really make those patterns hardened in a way that allows not just us, but third parties as well, to really have a nice workflow to publish these models.  But the other is actually, I think, a change in how we work, you know, and for most of our history as an industrial research lab, we would do research and then we’d go pitch it to somebody and try and throw it over the fence. We’ve really built a much more integrated team. In fact, if you look at that Nature paper or any of the other papers, there’s folks from product teams. Many of you are on the papers along with our clinical collaborators. RUNDE: Yeah. I think one thing that’s really important to note is that there’s a ton of different ways that you can have impact, right? So I like to think about phasing. In Health Futures at least, I like to think about phasing the work that we do. So first we have research, which is really early innovation. And the impact there is getting our technology and our tools out there and really sharing the learnings that we’ve had.  So that can be through publications like you mentioned. It can be through open-sourcing our models. And then you go to incubation. So, this is, I think, one of the more new spaces that we’re getting into, which is maybe that blurred line between research and product. Right. Which is, how do we take the tools and technologies that we’ve built and get them into the hands of users, typically through our partnerships?  Right. So, we partner very deeply and collaborate very deeply across the industry. 
And incubation is really important because we get that early feedback. We get an ability to pivot if we need to. And we also get the ability to see what types of impact our technology is having in the real world. And then lastly, when you think about scale, there’s tons of different ways that you can scale. We can scale third-party through our collaborators and really empower them to go to market to commercialize the things that we’ve built together.  You can also think about scaling internally, which is why I’m so thankful that we’ve created this flywheel between research and product, and a lot of the models that we’ve built that have gone through research, have gone through incubation, have been able to scale on the Azure AI Foundry. But that’s not really our expertise. Right? The scale piece in research, that’s research and incubation. Smitha, how do you think about scaling?  SALIGRAMA: So, there are several angles to scaling the models, the state-of-the-art models we see from the research team. The first angle is, the open sourcing, to get developer trust, and very generous commercial licenses so that they can use it and for their own, use cases. The second is, we also allow them to customize these models, fine GUYMAN: And as one example, you know, University of Wisconsin Health, you know, which Matt knows well. They took one of our models, which is highly versatile. They customized it in Foundry and they optimized it to reliably identify abnormal chest X-rays, the most common imaging procedure, so they could improve their turnaround time triage quickly. And that’s just one example. But we have other partners like Sectra who are doing more of operations use cases automatically routing imaging to the radiologists, setting them up to be efficient. And then Page AI is doing, you know, biomarker identification for actually diagnostics and new drug discovery. So, there’s so many use cases that we have partners already who are building and customizing. LUNGREN: The part that’s striking to me is just that, you know, we could all sit in a room and think about all the different ways someone might use these models on the catalog. And I’m still shocked at the stuff that people use them for and how effective they are. And I think part of that is, you know, again, we talk a lot about generative AI and healthcare and all the things you can do. Again, you know, in text, you refer to that earlier and certainly off the shelf, there’s really powerful applications. But there is, you know, kind of this tip of the iceberg effect where under the water, most of the data that we use to take care of our patients is not text. Right. It’s all the different other modalities. And I think that this has been an unlock right, sort of taking these innovations, innovations from the community, putting them in this ecosystem kind of catalog, essentially. Right. And then allowing folks to kind of, you know, build and develop applications with all these different types of data. Again, I’ve been surprised at what I’m seeing.  CARLSON: This has been just one of the most profound shifts that’s happened in the last 12 months, really. I mean, two years ago we had general models in text that really shifted how we think about, I mean, natural language processing got totally upended by that. Turns out the same technology works for images as well. 
It doesn’t only allow you to automatically extract concepts from images, but allows you to align those image concepts with text concepts, which means that you can have a conversation with that image. And once you’re in that world, you are in a place where you can start stitching together these multimodal models that really change how you can interact with the data, and how you can start getting more information out of the raw primary data that is part of the patient journey. LUNGREN: Well, and we’re going to get to that, because I think you just touched on something, and I want to re-emphasize stitching these things together. There’s a lot of different ways to potentially do that. Right? There’s ways that you can literally train the model end to end with adapters and all kinds of other early fusion approaches. All kinds of ways. But I guess the word of the year is going to be agents, and an agent is a very interesting term to think about how you might abstract away some of the components or the tasks that you want the model to accomplish in the midst of a real human-to-model interaction. Can you talk a little bit more about how we’re thinking about agents in this platform approach?  GUYMAN: Well, this is our newest addition to the Azure AI Foundry. So there’s an agent catalog now where we have a set of pre-configured agents for health care. And then we also have a multi-agent orchestrator that coordinates them. LUNGREN: And I really like that concept because, you know, from the user persona side, I think about myself as a user. How am I going to interact with these agents? Where does it naturally fit? And I’ve seen some of the demonstrations and some of the work that’s going on with Stanford in particular, showing that, literally in a Teams chat, I can have my clinician colleagues and I can have specialized healthcare agents. It is a completely mind-blowing thing for me. And it’s a light bulb moment for me too. I wonder, what have we heard from folks that have, you know, tried out this health care agent orchestrator in this kind of deployment environment via Teams? GUYMAN: Well, someone joked, you know, are you sure you’re not using Teams because you work at Microsoft? [LAUGHS] But then we actually were meeting with one of the radiologists at one of our partners, and they said that that morning they had just done a Teams meeting where they had met with other specialists to talk about a patient’s cancer case and come up with a treatment plan.  And that was the light bulb moment for us. We realized, actually, Teams is already being used by physicians as an internal communication tool, as a tool to get work done. And especially since the pandemic, a lot of the meetings moved to virtual and telemedicine. And so it’s a great distribution channel for AI, which has often struggled to actually get into the hands of clinicians. And so now we’re allowing developers to build and then deploy very easily and extend it into their own workflows.  CARLSON: I think that’s such an important point. I mean, if you think about it, one of the really important concepts in computer science is an application programming interface, like some set of rules that allow two applications to talk to each other. 
One of the big pushes, really important pushes, in medicine has been standards that allow us to actually have data standards and APIs that allow these to talk to each other, and yet still we end up with these silos. There’s silos of data. There’s silos of applications. And just like when you and I work on our phone, we have to go back and forth between applications. One of the things that I think agents do is that it takes the idea that now you can use language to understand intent and effectively program an interface, and it creates a whole new abstraction layer that allows us to simplify the interaction between not just humans and the endpoint, but also for developers.  It allows us to have this abstraction layer that lets different developers focus on different types of models, and yet stitch them all together in a very, very natural, way, not just for the users, but for the ability to actually deploy those models.  SALIGRAMA: Just to add to what Jonathan was mentioning, the other cool thing about the Microsoft Teams user interface is it’s also enterprise ready. RUNDE: And one important thing that we’re thinking about, is exactly this from the very early research through incubation and then to scale, obviously. Right. And so early on in research, we are actively working with our partners and our collaborators to make sure that we have the right data privacy and consent in place. We’re doing this in incubation as well. And then obviously in scale. Yep.  LUNGREN: So, I think AI has always been thought of as a savior kind of technology. We talked a little bit about how there’s been some ups and downs in terms of the ability for technology to be effective in health care. At the same time, we’re seeing a lot of new innovations that are really making a difference. But then we kind of get, you know, we talked about agents a little bit. It feels like we’re maybe abstracting too far. Maybe it’s things are going too fast, almost. What makes this different? I mean, in your mind is this truly a logical next step or is it going to take some time?  CARLSON: I think there’s a couple things that have happened. I think first, on just a pure technology. What led to ChatGPT? And I like to think of really three major breakthroughs. The first was new mathematical concepts of attention, which really means that we now have a way that a machine can figure out which parts of the context it should actually focus on, just the way our brains do. Right? I mean, if you’re a clinician and somebody is talking to you, the majority of that conversation is not relevant for the diagnosis. But, you know how to zoom in on the parts that matter. That’s a super powerful mathematical concept. The second one is this idea of self-supervision. So, I think one of the fundamental problems of machine learning has been that you have to train on labeled training data and labels are expensive, which means data sets are small, which means the final models are very narrow and brittle. And the idea of self-supervision is that you can just get a model to automatically learn concepts, and the language is just predict the next word. And what’s important about that is that leads to models that can actually manipulate and understand really messy text and pull out what’s important about that, and then and then stitch that back together in interesting ways. And the third concept, that came out of those first two, was just the observational scale. And that’s that more is better, more data, more compute, bigger models. 
And that really leads to a reason to keep investing, and for these models to keep getting better. So that, as a groundwork, that’s what led to ChatGPT. That’s what led to our ability now to not just have rule-based systems or simple machine learning based systems take a messy EHR record, say, and pull out a couple concepts, but to really feed the whole thing in and say, okay, I need you to figure out which concepts are in here, and is this particular attribute there, for example. That’s now led to the next breakthrough, which is that all those core ideas apply to images as well. They apply to proteins, to DNA. And so we’re starting to see models that understand images and the concepts of images, and can actually map those back to text as well.  So, you can look at a pathology image and say, not just that it’s a cell, but that it appears there’s some certain sort of cancer in this particular tissue there. And then you take those two things together and you layer on the fact that now you have a model, or a set of models, that can understand intent, can understand human concepts and biomedical concepts, and you can start stitching them together into specialized agents that can actually reason with each other, which at some level gives you an API as a developer to say, okay, I need to focus on a pathology model and get this really, really sound, while somebody else is focusing on a radiology model, but now allows us to stitch these all together with a user interface that we can talk to through natural language.  RUNDE: I’d like to double click a little bit on that medical abstraction piece that you mentioned. Just the amount of clinical data that there is for each individual patient. Let’s think about cancer patients for a second to make this real. For every cancer patient, it could take a couple of hours to structure their information. And why is that important? Because you have to get that information in a structured way and abstract relevant information to be able to unlock precision health applications for each patient. So, to be able to match them to a trial, someone has to sit there and go through all of the clinical notes from their entire patient care journey, from the beginning to the end. And that’s not scalable. And so one thing that we’ve been doing in an active project that we’ve been working on with a handful of our partners, but Providence specifically, I’ll call out, is using AI to actually abstract and curate that information. So that gives time back to the health care provider to spend with patients, instead of spending all their time curating this information.  And this is super important because it sets the scene and the backbone for all those precision health applications. Like I mentioned, clinical trial matching; tumor boards are another really important example here. Maybe Matt, you can talk to that a little bit. LUNGREN: It’s a great example. And you know it’s so funny. We’ve talked about this use case with, you know, health systems. And a tumor board is a critical meeting that happens at many cancer centers where specialists all get together, come with their perspective, and make a comment on what would be the best next step in treatment. But the background in preparing for that is, you know, again, organizing the data. But to your point, also, what are the clinical trials that are active? There are thousands of clinical trials. There are hundreds added every day. How can anyone keep up with that? 
And these are the kinds of use cases that start to bubble up. And you realize that a technology that understands concepts and context and can reason over vast amounts of data with a language interface, that is a powerful tool. Even before we get to some of the, you know, unlocking of new insights and even precision medicine, this is that idea of saving time before lives to me. And there’s an enormous amount of undifferentiated heavy lifting that happens in healthcare. GUYMAN: And we’ve packaged these as agents. The manual abstraction work that, you know, takes hours, now we have an agent for it. It’s in Foundry along with the clinical trial matching agent, which I think at Providence you showed could double the match rate over the baseline that they were using, by using the AI for multiple data sources. So, we have that, and then we have this orchestration that is using this really neat technology from Microsoft Research, Semantic Kernel and Magentic-One. There’s turn taking, there’s negotiation between the agents. So, there’s this really interesting system that’s emerging. And again, this is all possible to be used through Teams. And there’s some great extensibility as well. We’ve been talking about that and working on some cool tools.  SALIGRAMA: Yeah. If I have to geek out a little bit on how all these agent orchestrations are coming up: I’ve been in software engineering for decades, and it’s kind of the next version of distributed systems, where you have these services that talk to each other. It’s a more natural way, because LLMs give these agents natural ways of conversing instead of structured API calls. We have these agents which can naturally understand how to talk to each other. So this is like the next evolution of our systems. And we’re packaging all of this in multiple ways, based on the standards and innovation that’s happening in this space. So, first of all, we are building these agents that are very good at specific tasks, like Will was saying, a trial matching agent or patient timeline agents.  So, we take all of these, and then we package them in a workflow and an orchestration. We use the standards, some of these coming from research, like Semantic Kernel and Magentic-One. And then, all of these also allow us to extend these agents with custom agents that can be plugged in. So, we are open sourcing the entire agent orchestration in AI Foundry templates, so that developers can extend it with their own agents and make their own workflows out of it. So, a lot of cool innovation happening to apply this technology to specific scenarios and workflows.  LUNGREN: Well, I was going to ask you about that extension. So, folks can say, hey, I have maybe a really specific part of my workflow that I want to use some agents for, maybe one of the agents that can do PubMed literature search, for example. But then there are also agents that come in from the outside, you know, sort of like, I can imagine a software company or AI company that has a built-in agent that plugs in as well.  SALIGRAMA: Yeah, absolutely. So, you can bring your own agent. And then we have these standard ways of communicating with agents and integrating with the orchestration language, so you can bring your own agent and extend this healthcare agent orchestrator to your own needs.  LUNGREN: I can just think of, like, in a group chat, a bunch of different specialist agents. 
And I really would want an orchestrator to help find the right tool, to your point earlier, because I’m guessing this ecosystem is going to expand quickly. And I may not know which tool is best for which question. I just want to ask the question. Right.  SALIGRAMA: Yeah. Yeah.  CARLSON: Well, to that point, I mean, you said an important thing here, which is tools, and these are not necessarily just AI tools. Right? I mean, we’ve known this for a while, right? LLMs are not very good at math, but you can have them use a calculator and then they work very well. And you know you guys both brought up the universal medical abstraction a couple times.  And one of the things that I find so powerful about that is we’ve long had this vision within the precision health community that we should be able to have a learning hospital system. We should be able to actually learn from the real clinical experiences that are happening every day, so that we can stop practicing medicine based off averages.  There’s a lot of work that’s gone on for the last 20 years about how to actually do causal inference. That’s not an AI question. That’s a statistical question. The bottleneck, the reason why we haven’t been able to do that, is because most of that information is locked up in unstructured text. And these other tools need essentially a table.  And so now you can decompose this problem and say, well, what if I can use AI not to get to the causal answer, but to just structure the information. So now I can put it into the causal inference tool. And these sorts of patterns, I think, become not just powerful for a programmer, but they start pulling together different specialties. And I think we’ll really see an acceleration of collaboration across disciplines because of this.  CARLSON: So, when I joined Microsoft Research 18 years ago, I was doing work in computational biology. And I would always have to answer the question: why is Microsoft in biomedicine? And I would always kind of joke saying, well, it is. We sell Office and Windows to every health system. SALIGRAMA: A lot of healthcare organizations already use Microsoft productivity tools, as you mentioned. So, if developers build these agents and use our healthcare orchestration to plug in these agents and expose them in these productivity tools, they will get access to all these healthcare workers. So the healthcare agent orchestrator we have today integrates with Microsoft Teams, and it showcases an example of how you can at (@) mention these agents and talk to them like you were talking to another person in a Teams chat. And then it also provides examples of these agents and how they can use these productivity tools. One of the examples we have there is how they can summarize the assessments of this whole chat into a Word doc, or even convert that into a PowerPoint presentation, for later on. CARLSON: One of the things that has struck me is how easy it is to do. I mean, Will, I don’t know if you’ve worked with folks that have gone from 0 to 60, like, how fast? What does that look like?  GUYMAN: Yeah, it’s funny for us, the technology to transfer all this context into a Word document or PowerPoint presentation for a doctor to take to a meeting is relatively straightforward compared to the complicated clinical trial matching and multimodal processing. 
The feedback has been tremendous in terms of, wow, that saves so much time to have this organized report that then I can show up to meeting with and the agents can come with me to that meeting because they’re literally having a Teams meeting, often with other human specialists. And the agents can be there and ask and answer questions and fact check and source all the right information on the fly. So, there’s a nice integration into these existing tools.  LUNGREN: We worked with several different centers just to kind of understand, you know, where this might be useful. And, like, as I think we talked about before, the ideas that we’ve come up with again, this is a great one because it’s complex. It’s kind of hairy. There’s a lot of things happening under the hood that don’t necessarily require a medical license to do, right, to prepare for a tumor board and to organize data. But, it’s fascinating, actually. So, you know, folks have come up with ideas of, could I have an agent that can operate an MRI machine, and I can ask the agent to change some parameters or redo a protocol. We thought that was a pretty powerful use case. We’ve had others that have just said, you know, I really want to have a specific agent that’s able to kind of act like deep research does for the consumer side, but based on the context of my patient, so that it can search all the literature and pull the data in the papers that are relevant to this case. And the list goes on and on from operations all the way to clinical, you know, sort of decision making at some level. And I think that the research community that’s going to sprout around this will help us, guide us, I guess, to see what is the most high-impact use cases. Where is this effective? And maybe where it’s not effective. But to me, the part that makes me so, I guess excited about this is just that I don’t have to think about, okay, well, then we have to figure out Health IT. Because it’s always, you know, we always have great ideas and research, and it always feels like there’s such a huge chasm to get it in front of the health care workers that might want to test this out. And it feels like, again, this productivity tool use case again with the enterprise security, the possibility for bringing in third parties to contribute really does feel like it’s a new surface area for innovation. CARLSON: Yeah, I love that. Look. Let me end by putting you all on the spot. So, in three years, multimodal agents will do what? Matt, I’ll start with you.  LUNGREN: I am convinced that it’s going to save massive amount of time before it saves many lives.  RUNDE: I’ll focus on the patient care journey and diagnostic journey. I think it will kind of transform that process for the patient itself and shorten that process.  GUYMAN: Yeah, I think we’ve seen already papers recently showing that different modalities surfaced complementary information. And so we’ll see kind of this AI and these agents becoming an essential companion to the physician, surfacing insights that would have been overlooked otherwise.  SALIGRAMA: And similar to what you guys were saying, agents will become important assistants to healthcare workers, reducing a lot of documentation and workflow, excess work they have to do.  CARLSON: I love that. And I guess for my part, I think really what we’re going to see is a massive unleash of creativity. We’ve had a lot of folks that have been innovating in this space, but they haven’t had a way to actually get it into the hands of early adopters. 
And I think we’re going to see that really lead to an explosion of creativity across the ecosystem.  LUNGREN: So, where do we get started? Where are the developers who are listening to this, the folks that are at research labs and developing health care solutions? Where do they go to get started with the Foundry, the models we’ve talked about, the healthcare agent orchestrator? GUYMAN: So AI.azure.com is the AI Foundry. It’s a website you can go to as a developer. You can sign in with your Azure subscription, get your Azure account, your own VM, all that stuff. And you have an agent catalog, the model catalog. You can start from there. There is documentation and templates that you can then deploy to Teams or other applications.  LUNGREN: And tutorials are coming, right. We have recordings of tutorials. We’ll have hackathons, some sessions, and then more to come. Yeah, we’re really excited.  LUNGREN: Thank you so much, guys, for joining us.  CARLSON: Yes. Yeah. Thanks.  SALIGRAMA: Thanks for having us.  #collaborators #healthcare #innovation #impact
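    For developers who want to try the getting-started flow described above, here is a minimal, illustrative sketch of calling a model deployed from the Azure AI Foundry catalog using the azure-ai-inference Python package. The endpoint, key, and deployment name are placeholders, and this is a generic chat-completion call rather than the healthcare agent orchestrator itself:

    import os
    from azure.ai.inference import ChatCompletionsClient
    from azure.ai.inference.models import SystemMessage, UserMessage
    from azure.core.credentials import AzureKeyCredential

    # Placeholder endpoint and key for a model deployed from the Foundry model catalog.
    client = ChatCompletionsClient(
        endpoint=os.environ["AZURE_AI_ENDPOINT"],
        credential=AzureKeyCredential(os.environ["AZURE_AI_KEY"]),
    )

    response = client.complete(
        model="my-foundry-deployment",  # hypothetical deployment name
        messages=[
            SystemMessage(content="You are a helpful assistant for clinical research teams."),
            UserMessage(content="Summarize why multimodal data matters for tumor board preparation."),
        ],
    )
    print(response.choices[0].message.content)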
    JONATHAN CARLSON: From the beginning, healthcare stood out to us as an important opportunity for general reasoners to improve the lives and experiences of patients and providers. Indeed, in the past two years, there’s been an explosion of scientific papers looking at the application first of text reasoners and medicine, then multi-modal reasoners that can interpret medical images, and now, most recently, healthcare agents that can reason with each other. But even more impressive than the pace of research has been the surprisingly rapid diffusion of this technology into real world clinical workflows.  LUNGREN: So today, we’ll talk about how our cross-company collaboration has shortened that gap and delivered advanced AI capabilities and solutions into the hands of developers and clinicians around the world, empowering everyone in health and life sciences to achieve more. I’m Doctor Matt Lungren, chief scientific officer for Microsoft Health and Life Sciences.  CARLSON: And I’m Jonathan Carlson, vice president and managing director of Microsoft Health Futures.  LUNGREN: And together we brought some key players leading in the space of AI and health CARLSON: We’ve asked these brilliant folks to join us because each of them represents a mission critical group of cutting-edge stakeholders, scaling breakthroughs into purpose-built solutions and capabilities for health LUNGREN: We’ll hear today how generative AI capabilities can unlock reasoning across every data type in medicine: text, images, waveforms, genomics. And further, how multi-agent frameworks in healthcare can accelerate complex workflows, in some cases acting as a specialist team member, safely secured inside the Microsoft 365 tools used by hundreds of millions of healthcare enterprise users across the world. The opportunity to save time today and lives tomorrow with AI has never been larger. [MUSIC FADES]  MATTHEW LUNGREN: Jonathan. You know, it’s been really interesting kind of observing Microsoft Research over the decades. I’ve, you know, been watching you guys in my prior academic career. You are always on the front of innovation, particularly in health  JONATHAN CARLSON: I mean, it’s some of what’s in our DNA, I mean, we’ve been publishing in health and life sciences for two decades here. But when we launched Health Futures as a mission-focused lab about 7 or 8 years ago, we really started with the premise that the way to have impact was to really close the loop between, not just good ideas that get published, but good ideas that can actually be grounded in real problems that clinicians and scientists care about, that then allow us to actually go from that first proof of concept into an incubation, into getting real world feedback that allows us to close that loop. And now with, you know, the HLS organization here as a product group, we have the opportunity to work really closely with you all to not just prove what’s possible in the clinic or in the lab, but actually start scaling that into the broader community.  CAMERON RUNDE: And one thing I’ll add here is that the problems that we’re trying to tackle in health CARLSON: So, Matt, back to you. What are you guys doing in the product group? How do you guys see these models getting into the clinic? LUNGREN: You know, I think a lot of people, you know, think about AI is just, you know, maybe just even a few years old because of GPT and how that really captured the public’s consciousness. Right? 
And so, you think about the speech-to-text technology of being able to dictate something, for a clinic note or for a visit, that was typically based on Nuance technology. And so there’s a lot of product understanding of the market, how to deliver something that clinicians will use, understanding the pain points and workflows and really that Health IT space, which is sometimes the third rail, I feel like with a lot of innovation in healthcare.  But beyond that, I mean, I think now that we have this really powerful engine of Microsoft and the platform capabilities, we’re seeing, innovations on the healthcare side for data storage, data interoperability, with different types of medical data. You have new applications coming online, the ability, of course, to see generative AI now infused into the speech-to-text and, becoming Dragon Copilot, which is something that has been, you know, tremendously, received by the community.  Physicians are able to now just have a conversation with a patient. They turn to their computer and the note is ready for them. There’s no more this, we call it keyboard liberation. I don’t know if you heard that before. And that’s just been tremendous. And there’s so much more coming from that side. And then there’s other parts of the workflow that we also get engaged in — the diagnostic workflow. So medical imaging, sharing images across different hospital systems, the list goes on. And so now when you move into AI, we feel like there’s a huge opportunity to deliver capabilities into the clinical workflow via the products and solutions we already have. But, I mean, we’ll now that we’ve kind of expanded our team to involve Azure and platform, we’re really able to now focus on the developers. WILL GUYMAN: Yeah. And you’re always telling me as a doctor how frustrating it is to be spending time at the computer instead of with your patients. I think you told me, you know, 4,000 clicks a day for the typical doctor, which is tremendous. And something like Dragon Copilot can save that five minutes per patient. But it can also now take actions after the patient encounter so it can draft the after-visit summary.  It can order labs and medications for the referral. And that’s incredible. And we want to keep building on that. There’s so many other use cases across the ecosystem. And so that’s why in Azure AI Foundry, we have translated a lot of the research from Microsoft Research and made that available to developers to build and customize for their own applications.  SMITHA SALIGRAMA: Yeah. And as you were saying, in our transformation of moving from solutions to platforms and as, scaling solutions to other, multiple scenarios, as we put our models in AI Foundry, we provide these developer capabilities like bring your own data and fine LUNGREN: Well, I want to do a reality check because, you know, I think to us that are now really focused on technology, it seems like, I’ve heard this story before, right. I, I remember even in, my academic clinical days where it felt like technology was always the quick answer and it felt like technology was, there was maybe a disconnect between what my problems were or what I think needed to be done versus kind of the solutions that were kind of, created or offered to us. And I guess at some level, how Jonathan, do you think about this? Because to do things well in the science space is one thing, to do things well in science, but then also have it be something that actually drives health CARLSON: Yeah. 
I mean, as you said, I think one of the core pathologies of Big Tech is we assume every problem is a technology problem. And that’s all it will take to solve the problem. And I think, look, I was trained as a computational biologist, and that sits in the awkward middle between biology and computation. And the thing that we always have to remember, the thing that we were very acutely aware of when we set out, was that we are not the experts. We do have, you know, you as an M.D., we have everybody on the team, we have biologists on the team.  But this is a big space. And the only way we’re going to have real impact, the only way we’re even going to pick the right problems to work on is if we really partner deeply, with providers, with EHR (electronic health records) vendors, with scientists, and really understand what’s important and again, get that feedback loop.  RUNDE: Yeah, I think we really need to ground the work that we do in the science itself. You need to understand the broader ecosystem and the broader landscape, across healthwe think are important. Because, as Jonathan said, we’re not the experts in health CARLSON: When we really launched this, this mission, 7 or 8 years ago, we really came in with the premise of, if we decide to stop, we want to be sure the world cares. And the only way that’s going to be true is if we’re really deeply embedded with the people that matter–the patients, the providers and the scientists. LUNGREN: And now it really feels like this collaborative effort, you know, really can help start to extend that mission. Right. I think, you know, Will and Smitha, that we definitely feel the passion and the innovation. And we certainly benefit from those collaborations, too. But then we have these other partners and even customers, right, that we can start to tap into and have that flywheel keep spinning.  GUYMAN: Yeah. And the whole industry is an ecosystem. So, we have our own data sets at Microsoft Research that you’ve trained amazing AI models with. And those are in the catalog. But then you’ve also partnered with institutions like Providence or Page AI . And those models are in the catalog with their data. And then there are third parties like Nvidia that have their own specialized proprietary data sets, and their models are there too. So, we have this ecosystem of open source models. And maybe Smitha, you want to talk about how developers can actually customize these.  SALIGRAMA: Yeah. So we use the Azure AI Foundry ecosystem. Developers can feel at home if they’re using the AI Foundry. So they can look at our model cards that we publish as part of the models we publish, understand the use cases of these models, how to, quickly, bring up these APIs and, look at different use cases of how to apply these and even fine LUNGREN: Yeah it has been interesting to see we have these health GUYMAN: Well, the general-purpose large language models are amazing for medical general reasoning. So Microsoft Research has shown that that they can perform super well on, for example, like the United States medical licensing exam, they can exceed doctor performance if they’re just picking between different multiple-choice questions. But real medicine we know is messier. It doesn’t always start with the whole patient context provided as text in the prompt. You have to get the source data and that raw data is often non-text. The majority of it is non-text. It’s things like medical imaging, radiology, pathology, ophthalmology, dermatology. It goes on and on. 
And there’s endless signal data, lab data. And so all of this diverse data type needs to be processed through specialized models because much of that data is not available on the public internet.  And that’s why we’re taking this partner approach, first party and third party models that can interpret all this kind of data and then connect them ultimately back to these general reasoners to reason over that.  LUNGREN: So, you know, I’ve been at this company for a while and, you know, familiar with kind of how long it takes, generally to get, you know, a really good research paper, do all the studies, do all the data analysis, and then go through the process of publishing, right, which takes, as, you know, a long time and it’s, you know, very rigorous.  And one of the things that struck me, last year, I think we, we started this big collaboration and, within a quarter, you had a Nature paper coming out from Microsoft Research, and that model that the Nature paper was describing was ready to be used by anyone on the Azure AI Foundry within that same quarter. It kind of blew my mind when I thought about it, you know, even though we were all, you know, working very hard to get that done. Any thoughts on that? I mean, has this ever happened in your career? And, you know, what’s the secret sauce to that?  CARLSON: Yeah, I mean, the time scale from research to product has been massively compressed. And I’d push that even further, which is to say, the reason why it took a quarter was because we were laying the railroad tracks as we’re driving the train. We have examples right after that when we are launching on Foundry the same day we were publishing the paper.  And frankly, the review times are becoming longer than it takes to actually productize the models. I think there’s two things that are going on with that are really converging. One is that the overall ecosystem is converging on a relatively small number of patterns, and that gives us, as a tech company, a reason to go off and really make those patterns hardened in a way that allows not just us, but third parties as well, to really have a nice workflow to publish these models.  But the other is actually, I think, a change in how we work, you know, and for most of our history as an industrial research lab, we would do research and then we’d go pitch it to somebody and try and throw it over the fence. We’ve really built a much more integrated team. In fact, if you look at that Nature paper or any of the other papers, there’s folks from product teams. Many of you are on the papers along with our clinical collaborators. RUNDE: Yeah. I think one thing that’s really important to note is that there’s a ton of different ways that you can have impact, right? So I like to think about phasing. In Health Futures at least, I like to think about phasing the work that we do. So first we have research, which is really early innovation. And the impact there is getting our technology and our tools out there and really sharing the learnings that we’ve had.  So that can be through publications like you mentioned. It can be through open-sourcing our models. And then you go to incubation. So, this is, I think, one of the more new spaces that we’re getting into, which is maybe that blurred line between research and product. Right. Which is, how do we take the tools and technologies that we’ve built and get them into the hands of users, typically through our partnerships?  Right. So, we partner very deeply and collaborate very deeply across the industry. 
And incubation is really important because we get that early feedback. We get an ability to pivot if we need to. And we also get the ability to see what types of impact our technology is having in the real world. And then lastly, when you think about scale, there’s tons of different ways that you can scale. We can scale third-party through our collaborators and really empower them to go to market to commercialize the things that we’ve built together.  You can also think about scaling internally, which is why I’m so thankful that we’ve created this flywheel between research and product, and a lot of the models that we’ve built that have gone through research, have gone through incubation, have been able to scale on the Azure AI Foundry. But that’s not really our expertise. Right? The scale piece in research, that’s research and incubation. Smitha, how do you think about scaling?  SALIGRAMA: So, there are several angles to scaling the models, the state-of-the-art models we see from the research team. The first angle is, the open sourcing, to get developer trust, and very generous commercial licenses so that they can use it and for their own, use cases. The second is, we also allow them to customize these models, fine GUYMAN: And as one example, you know, University of Wisconsin Health, you know, which Matt knows well. They took one of our models, which is highly versatile. They customized it in Foundry and they optimized it to reliably identify abnormal chest X-rays, the most common imaging procedure, so they could improve their turnaround time triage quickly. And that’s just one example. But we have other partners like Sectra who are doing more of operations use cases automatically routing imaging to the radiologists, setting them up to be efficient. And then Page AI is doing, you know, biomarker identification for actually diagnostics and new drug discovery. So, there’s so many use cases that we have partners already who are building and customizing. LUNGREN: The part that’s striking to me is just that, you know, we could all sit in a room and think about all the different ways someone might use these models on the catalog. And I’m still shocked at the stuff that people use them for and how effective they are. And I think part of that is, you know, again, we talk a lot about generative AI and healthcare and all the things you can do. Again, you know, in text, you refer to that earlier and certainly off the shelf, there’s really powerful applications. But there is, you know, kind of this tip of the iceberg effect where under the water, most of the data that we use to take care of our patients is not text. Right. It’s all the different other modalities. And I think that this has been an unlock right, sort of taking these innovations, innovations from the community, putting them in this ecosystem kind of catalog, essentially. Right. And then allowing folks to kind of, you know, build and develop applications with all these different types of data. Again, I’ve been surprised at what I’m seeing.  CARLSON: This has been just one of the most profound shifts that’s happened in the last 12 months, really. I mean, two years ago we had general models in text that really shifted how we think about, I mean, natural language processing got totally upended by that. Turns out the same technology works for images as well. 
It doesn’t only allow you to automatically extract concepts from images, but allows you to align those image concepts with text concepts, which means that you can have a conversation with that image. And once you’re in that world now, you are a place where you can start stitching together these multimodal models that really change how you can interact with the data, and how you can start getting more information out of the raw primary data that is part of the patient journey. LUNGREN: Well, and we’re going to get to that because I think you just touched on something. And I want to re-emphasize stitching these things together. There’s a lot of different ways to potentially do that. Right? There’s ways that you can literally train the model end to end with adapters and all kinds of other early fusion fusions. All kinds of ways. But one of the things that the word of the I guess the year is going to be agents and an agent is a very interesting term to think about how you might abstract away some of the components or the tasks that you want the model to, to accomplish in the midst of sort of a real human to maybe model interaction. Can you talk a little bit more about, how we’re thinking about agents in this, in this platform approach?  GUYMAN: Well, this is our newest addition to the Azure AI Foundry. So there’s an agent catalog now where we have a set of pre-configured agents for health care. And then we also have a multi-agent orchestrator that can jump LUNGREN: And, and I really like that concept because, you know, as, as a, as a from the user personas, I think about myself as a user. How am I going to interact with these agents? Where does it naturally fit? And I and I sort of, you know, I’ve seen some of the demonstrations and some of the work that’s going on with Stanford in particular, showing that, you know, and literally in a Teams chat, I can have my clinician colleagues and I can have specialized health It is a completely mind-blowing thing for me. And it’s a light bulb moment for me to I wonder, what have we, what have we heard from folks that have, you know, tried out this health care agent orchestrator in this kind of deployment environment via Teams? GUYMAN: Well, someone joked, you know, are you sure you’re not using Teams because you work at Microsoft? [LAUGHS] But, then we actually were meeting with one of the, radiologists at one of our partners, and they said that that morning they had just done a Teams meeting, or they had met with other specialists to talk about a patient’s cancer case, or they were coming up with a treatment plan.  And that was the light bulb moment for us. We realized, actually, Teams is already being used by physicians as an internal communication tool, as a tool to get work done. And especially since the pandemic, a lot of the meetings moved to virtual and telemedicine. And so it’s a great distribution channel for AI, which is often been a struggle for AI to actually get in the hands of clinicians. And so now we’re allowing developers to build and then deploy very easily and extend it into their own workflows.  CARLSON: I think that’s such an important point. I mean, if you think about one of the really important concepts in computer science is an application programing interface, like some set of rules that allow two applications to talk to each other. 
One of the big pushes, really important pushes, in medicine has been standards that allow us to actually have data standards and APIs that allow these to talk to each other, and yet still we end up with these silos. There’s silos of data. There’s silos of applications. And just like when you and I work on our phone, we have to go back and forth between applications. One of the things that I think agents do is that it takes the idea that now you can use language to understand intent and effectively program an interface, and it creates a whole new abstraction layer that allows us to simplify the interaction between not just humans and the endpoint, but also for developers.  It allows us to have this abstraction layer that lets different developers focus on different types of models, and yet stitch them all together in a very, very natural, way, not just for the users, but for the ability to actually deploy those models.  SALIGRAMA: Just to add to what Jonathan was mentioning, the other cool thing about the Microsoft Teams user interface is it’s also enterprise ready. RUNDE: And one important thing that we’re thinking about, is exactly this from the very early research through incubation and then to scale, obviously. Right. And so early on in research, we are actively working with our partners and our collaborators to make sure that we have the right data privacy and consent in place. We’re doing this in incubation as well. And then obviously in scale. Yep.  LUNGREN: So, I think AI has always been thought of as a savior kind of technology. We talked a little bit about how there’s been some ups and downs in terms of the ability for technology to be effective in health care. At the same time, we’re seeing a lot of new innovations that are really making a difference. But then we kind of get, you know, we talked about agents a little bit. It feels like we’re maybe abstracting too far. Maybe it’s things are going too fast, almost. What makes this different? I mean, in your mind is this truly a logical next step or is it going to take some time?  CARLSON: I think there’s a couple things that have happened. I think first, on just a pure technology. What led to ChatGPT? And I like to think of really three major breakthroughs. The first was new mathematical concepts of attention, which really means that we now have a way that a machine can figure out which parts of the context it should actually focus on, just the way our brains do. Right? I mean, if you’re a clinician and somebody is talking to you, the majority of that conversation is not relevant for the diagnosis. But, you know how to zoom in on the parts that matter. That’s a super powerful mathematical concept. The second one is this idea of self-supervision. So, I think one of the fundamental problems of machine learning has been that you have to train on labeled training data and labels are expensive, which means data sets are small, which means the final models are very narrow and brittle. And the idea of self-supervision is that you can just get a model to automatically learn concepts, and the language is just predict the next word. And what’s important about that is that leads to models that can actually manipulate and understand really messy text and pull out what’s important about that, and then and then stitch that back together in interesting ways. And the third concept, that came out of those first two, was just the observational scale. And that’s that more is better, more data, more compute, bigger models. 
And that really leads to a reason to keep investing. And for these models to keep getting better. So that as a as a groundwork, that’s what led to ChatGPT. That’s what led to our ability now to not just have rule-based systems or simple machine learning based systems to take a messy EHR record, say, and pull out a couple concepts. But to really feed the whole thing in and say, okay, I need you to figure out which concepts are in here. And is this particular attribute there, for example. That’s now led to the next breakthrough, which is all those core ideas apply to images as well. They apply to proteins, to DNA. And so we’re starting to see models that understand images and the concepts of images, and can actually map those back to text as well.  So, you can look at a pathology image and say, not just at the cell, but it appears that there’s some certain sort of cancer in this particular, tissue there. And then you take those two things together and you layer on the fact that now you have a model, or a set of models, that can understand intent, can understand human concepts and biomedical concepts, and you can start stitching them together into specialized agents that can actually reason with each other, which at some level gives you an API as a developer to say, okay, I need to focus on a pathology model and get this really, really, sound while somebody else is focusing on a radiology model, but now allows us to stitch these all together with a user interface that we can now talk to through natural language.  RUNDE: I’d like to double click a little bit on that medical abstraction piece that you mentioned. Just the amount of data, clinical data that there is for each individual patient. Let’s think about cancer patients for a second to make this real. Right. For every cancer patient, it could take a couple of hours to structure their information. And why is that important? Because, you have to get that information in a structured way and abstract relevant information to be able to unlock precision health applications right, for each patient. So, to be able to match them to a trial, right, someone has to sit there and go through all of the clinical notes from their entire patient care journey, from the beginning to the end. And that’s not scalable. And so one thing that we’ve been doing in an active project that we’ve been working on with a handful of our partners, but Providence specifically, I’ll call out, is using AI to actually abstract and curate that information. So that gives time back to the health care provider to spend with patients, instead of spending all their time curating this information.  And this is super important because it sets the scene and the backbone for all those precision health applications. Like I mentioned, clinical trial matching, tumor boards is another really important example here. Maybe Matt, you can talk to that a little bit. LUNGREN: It’s a great example. And you know it’s so funny. We’ve talked about this use case and the you know the health And a tumor board is a critical meeting that happens at many cancer centers where specialists all get together, come with their perspective, and make a comment on what would be the best next step in treatment. But the background in preparing for that is you know, again, organizing the data. But to your point, also, what are the clinical trials that are active? There are thousands of clinical trials. There’s hundreds every day added. How can anyone keep up with that? 
And these are the kinds of use cases that start to bubble up. And you realize that a technology that understands concepts, context and can reason over vast amounts of data with a language interface-that is a powerful tool. Even before we get to some of the, you know, unlocking new insights and even precision medicine, this is that idea of saving time before lives to me. And there’s an enormous amount of undifferentiated heavy lifting that happens in health GUYMAN: And we’ve packaged these agents, the manual abstraction work that, you know, manually takes hours. Now we have an agent. It’s in Foundry along with the clinical trial matching agent, which I think at Providence you showed could double the match rate over the baseline that they were using by using the AI for multiple data sources. So, we have that and then we have this orchestration that is using this really neat technology from Microsoft Research. Semantic Kernel, Magentic There’s turn taking, there’s negotiation between the agents. So, there’s this really interesting system that’s emerging. And again, this is all possible to be used through Teams. And there’s some great extensibility as well. We’ve been talking about that and working on some cool tools.  SALIGRAMA: Yeah. Yeah. No, I think if I have to geek out a little bit on how all this agent tech orchestrations are coming up, like I’ve been in software engineering for decades, it’s kind of a next version of distributed systems where you have these services that talk to each other. It’s a more natural way because LLMs are giving these natural ways instead of a structured API ways of conversing. We have these agents which can naturally understand how to talk to each other. Right. So this is like the next evolution of our systems now. And the way we’re packaging all of this is multiple ways based on all the standards and innovation that’s happening in this space. So, first of all, we are building these agents that are very good at specific tasks, like, Will was saying like, a trial matching agent or patient timeline agents.  So, we take all of these, and then we package it in a workflow and an orchestration. We use the standard, some of these coming from research. The Semantic Kernel, the Magentic-One. And then, all of these also allow us to extend these agents with custom agents that can be plugged in. So, we are open sourcing the entire agent orchestration in AI Foundry templates, so that developers can extend their own agents, and make their own workflows out of it. So, a lot of cool innovation happening to apply this technology to specific scenarios and workflows.  LUNGREN: Well, I was going to ask you, like, so as part of that extension. So, like, you know, folks can say, hey, I have maybe a really specific part of my workflow that I want to use some agents for, maybe one of the agents that can do PubMed literature search, for example. But then there’s also agents that, come in from the outside, you know, sort of like I could, I can imagine a software company or AI company that has a built-in agent that plugs in as well.  SALIGRAMA: Yeah. Yeah, absolutely. So, you can bring your own agent. And then we have these, standard ways of communicating with agents and integrating with the orchestration language so you can bring your own agent and extend this health care agent, agent orchestrator to your own needs.  LUNGREN: I can just think of, like, in a group chat, like a bunch of different specialist agents. 
And I really would want an orchestrator to help find the right tool, to your point earlier, because I’m guessing this ecosystem is going to expand quickly. Yeah. And I may not know which tool is best for which question. I just want to ask the question. Right.  SALIGRAMA: Yeah. Yeah.  CARLSON: Well, I think to that point to I mean, you said an important point here, which is tools, and these are not necessarily just AI tools. Right? I mean, we’ve known this for a while, right? LLMS are not very good at math, but you can have it use a calculator and then it works very well. And you know you guys both brought up the universal medical abstraction a couple times.  And one of the things that I find so powerful about that is we’ve long had this vision within the precision health community that we should be able to have a learning hospital system. We should be able to actually learn from the actual real clinical experiences that are happening every day, so that we can stop practicing medicine based off averages.  There’s a lot of work that’s gone on for the last 20 years about how to actually do causal inference. That’s not an AI question. That’s a statistical question. The bottleneck, the reason why we haven’t been able to do that is because most of that information is locked up in unstructured text. And these other tools need essentially a table.  And so now you can decompose this problem, say, well, what if I can use AI not to get to the causal answer, but to just structure the information. So now I can put it into the causal inference tool. And these sorts of patterns I think again become very, not just powerful for a programmer, but they start pulling together different specialties. And I think we’ll really see an acceleration, really, of collaboration across disciplines because of this.  CARLSON: So, when I joined Microsoft Research 18 years ago, I was doing work in computational biology. And I would always have to answer the question: why is Microsoft in biomedicine? And I would always kind of joke saying, well, it is. We sell Office and Windows to every health SALIGRAMA: A lot of healthcare organizations already use Microsoft productivity tools, as you mentioned. So, they asked the developers, build these agents, and use our healthcare orchestrations, to plug in these agents and expose these in these productivity tools. They will get access to all these healthcare workers. So the healthcare agent orchestrator we have today integrates with Microsoft Teams, and it showcases an example of how you can at (@) mention these agents and talk to them like you were talking to another person in a Teams chat. And then it also provides examples of these agents and how they can use these productivity tools. One of the examples we have there is how they can summarize the assessments of this whole chat into a Word Doc, or even convert that into a PowerPoint presentation, for later on. CARLSON: One of the things that has struck me is how easy it is to do. I mean, Will, I don’t know if you’ve worked with folks that have gone from 0 to 60, like, how fast? What does that look like?  GUYMAN: Yeah, it’s funny for us, the technology to transfer all this context into a Word Document or PowerPoint presentation for a doctor to take to a meeting is relatively straightforward compared to the complicated clinical trial matching multimodal processing. 
The feedback has been tremendous in terms of, wow, that saves so much time to have this organized report that then I can show up to meeting with and the agents can come with me to that meeting because they’re literally having a Teams meeting, often with other human specialists. And the agents can be there and ask and answer questions and fact check and source all the right information on the fly. So, there’s a nice integration into these existing tools.  LUNGREN: We worked with several different centers just to kind of understand, you know, where this might be useful. And, like, as I think we talked about before, the ideas that we’ve come up with again, this is a great one because it’s complex. It’s kind of hairy. There’s a lot of things happening under the hood that don’t necessarily require a medical license to do, right, to prepare for a tumor board and to organize data. But, it’s fascinating, actually. So, you know, folks have come up with ideas of, could I have an agent that can operate an MRI machine, and I can ask the agent to change some parameters or redo a protocol. We thought that was a pretty powerful use case. We’ve had others that have just said, you know, I really want to have a specific agent that’s able to kind of act like deep research does for the consumer side, but based on the context of my patient, so that it can search all the literature and pull the data in the papers that are relevant to this case. And the list goes on and on from operations all the way to clinical, you know, sort of decision making at some level. And I think that the research community that’s going to sprout around this will help us, guide us, I guess, to see what is the most high-impact use cases. Where is this effective? And maybe where it’s not effective. But to me, the part that makes me so, I guess excited about this is just that I don’t have to think about, okay, well, then we have to figure out Health IT. Because it’s always, you know, we always have great ideas and research, and it always feels like there’s such a huge chasm to get it in front of the health care workers that might want to test this out. And it feels like, again, this productivity tool use case again with the enterprise security, the possibility for bringing in third parties to contribute really does feel like it’s a new surface area for innovation. CARLSON: Yeah, I love that. Look. Let me end by putting you all on the spot. So, in three years, multimodal agents will do what? Matt, I’ll start with you.  LUNGREN: I am convinced that it’s going to save massive amount of time before it saves many lives.  RUNDE: I’ll focus on the patient care journey and diagnostic journey. I think it will kind of transform that process for the patient itself and shorten that process.  GUYMAN: Yeah, I think we’ve seen already papers recently showing that different modalities surfaced complementary information. And so we’ll see kind of this AI and these agents becoming an essential companion to the physician, surfacing insights that would have been overlooked otherwise.  SALIGRAMA: And similar to what you guys were saying, agents will become important assistants to healthcare workers, reducing a lot of documentation and workflow, excess work they have to do.  CARLSON: I love that. And I guess for my part, I think really what we’re going to see is a massive unleash of creativity. We’ve had a lot of folks that have been innovating in this space, but they haven’t had a way to actually get it into the hands of early adopters. 
And I think we’re going to see that really lead to an explosion of creativity across the ecosystem.

LUNGREN: So, where do we get started? Like, where are the developers who are listening to this, the folks that are at, you know, research labs and developing healthcare solutions? Where do they go to get started with the Foundry, the models we’ve talked about, the healthcare agent orchestrator? Where do they go?

GUYMAN: So AI.azure.com is the AI Foundry. It’s a website you can go to as a developer. You can sign in with your Azure subscription, get your Azure account, your own VM, all that stuff. And you have an agent catalog, the model catalog. You can start from there. There are documentation and templates that you can then deploy to Teams or other applications.

LUNGREN: And tutorials are coming, right. We have recordings of tutorials. We’ll have hackathons, some sessions, and then more to come. Yeah, we’re really excited.

[MUSIC]

LUNGREN: Thank you so much, guys, for joining us.

CARLSON: Yes. Yeah. Thanks.

SALIGRAMA: Thanks for having us.

[MUSIC FADES]
  • A Step-by-Step Coding Guide to Efficiently Fine-Tune Qwen3-14B Using Unsloth AI on Google Colab with Mixed Datasets and LoRA Optimization

    Fine-tuning LLMs often requires extensive resources, time, and memory, challenges that can hinder rapid experimentation and deployment. Unsloth AI streamlines this process by enabling fast, efficient fine-tuning of state-of-the-art models like Qwen3-14B with minimal GPU memory, leveraging techniques such as 4-bit quantization and LoRA (Low-Rank Adaptation). In this tutorial, we walk through a practical implementation on Google Colab to fine-tune Qwen3-14B using a combination of reasoning and instruction-following datasets. By combining Unsloth’s FastLanguageModel utilities with trl’s SFTTrainer, users can achieve strong fine-tuning results on consumer-grade hardware.
    %%capture
    import os
    if "COLAB_" not in "".join):
    !pip install unsloth
    else:
    !pip install --no-deps bitsandbytes accelerate xformers==0.0.29.post3 peft trl==0.15.2 triton cut_cross_entropy unsloth_zoo
    !pip install sentencepiece protobuf "datasets>=3.4.1" huggingface_hub hf_transfer
    !pip install --no-deps unsloth
    We install all the essential libraries required for fine-tuning the Qwen3 model using Unsloth AI. It conditionally installs dependencies based on the environment, using a lightweight approach on Colab to ensure compatibility and reduce overhead. Key components like bitsandbytes, trl, xformers, and unsloth_zoo are included to enable 4-bit quantized training and LoRA-based optimization.
    from unsloth import FastLanguageModel
    import torch

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name = "unsloth/Qwen3-14B",
        max_seq_length = 2048,
        load_in_4bit = True,
        load_in_8bit = False,
        full_finetuning = False,
    )
    We load the Qwen3-14B model using FastLanguageModel from the Unsloth library, which is optimized for efficient fine-tuning. It initializes the model with a context length of 2048 tokens and loads it in 4-bit precision, significantly reducing memory usage. Full fine-tuning is disabled, making it suitable for lightweight parameter-efficient techniques like LoRA.
    model = FastLanguageModel.get_peft_model(
        model,
        r = 32,
        target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                          "gate_proj", "up_proj", "down_proj"],
        lora_alpha = 32,
        lora_dropout = 0,
        bias = "none",
        use_gradient_checkpointing = "unsloth",
        random_state = 3407,
        use_rslora = False,
        loftq_config = None,
    )
    We apply LoRA (Low-Rank Adaptation) to the Qwen3 model using FastLanguageModel.get_peft_model. It injects trainable adapters into specific transformer layers (like q_proj, v_proj, etc.) with a rank of 32, enabling efficient fine-tuning while keeping most model weights frozen. Using “unsloth” gradient checkpointing further optimizes memory usage, making it suitable for training large models on limited hardware.
    from datasets import load_dataset

    reasoning_dataset = load_dataset("unsloth/OpenMathReasoning-mini", split="cot")
    non_reasoning_dataset = load_dataset("mlabonne/FineTome-100k", split="train")
    We load two pre-curated datasets from the Hugging Face Hub using the datasets library. The reasoning_dataset contains chain-of-thought (CoT) problems from Unsloth’s OpenMathReasoning-mini, designed to enhance logical reasoning in the model. The non_reasoning_dataset pulls general instruction-following data from mlabonne’s FineTome-100k, which helps the model learn broader conversational and task-oriented skills. Together, these datasets support a well-rounded fine-tuning objective.
    def generate_conversation(examples):
        problems = examples["problem"]
        solutions = examples["generated_solution"]
        conversations = []
        for problem, solution in zip(problems, solutions):
            conversations.append([
                {"role": "user", "content": problem},
                {"role": "assistant", "content": solution},
            ])
        return {"conversations": conversations}
    This function, generate_conversation, transforms raw question–answer pairs from the reasoning dataset into a chat-style format suitable for fine-tuning. For each problem and its corresponding generated solution, a conversation is constructed in which the user asks a question and the assistant provides the answer. The output is a list of dictionaries following the structure expected by chat-based language models, preparing the data for tokenization with a chat template.
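    Note that the excerpt above does not show the step that actually attaches these conversations to the dataset; a minimal sketch of how this is typically done with the Hugging Face datasets API (a batched map call, assumed here rather than copied from the original notebook) would be:
    # Assumed glue step: materialize a "conversations" column on the reasoning dataset
    reasoning_dataset = reasoning_dataset.map(generate_conversation, batched=True)
    After this mapping, reasoning_dataset["conversations"] exists and can be passed to the chat template in the next step.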
    reasoning_conversations = tokenizer.apply_chat_template(
        reasoning_dataset["conversations"],
        tokenize=False,
    )

    from unsloth.chat_templates import standardize_sharegpt
    dataset = standardize_sharegpt(non_reasoning_dataset)
    non_reasoning_conversations = tokenizer.apply_chat_template(
        dataset["conversations"],
        tokenize=False,
    )

    import pandas as pd

    chat_percentage = 0.75
    non_reasoning_subset = pd.Series(non_reasoning_conversations).sample(
        int(len(reasoning_conversations) * (1.0 - chat_percentage)),
        random_state=2407,
    )

    data = pd.concat([
        pd.Series(reasoning_conversations),
        pd.Series(non_reasoning_subset)
    ])
    data.name = "text"
    We prepare the fine-tuning dataset by converting the reasoning and instruction datasets into a consistent chat format and then combining them. It first applies the tokenizer’s apply_chat_template to convert structured conversations into tokenizable strings. The standardize_sharegpt function normalizes the instruction dataset into a compatible structure. Then, with chat_percentage = 0.75, a number of non-reasoning (instruction) conversations equal to 25% of the reasoning set’s size is sampled and combined with the reasoning data, weighting the blend heavily toward reasoning examples. This blend ensures the model is exposed to both logical reasoning and general instruction-following tasks, improving its versatility during training. The final combined data is stored as a single-column Pandas Series named “text”.
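    To make the sampling arithmetic concrete, here is a small illustrative check; the figure of 1,000 reasoning conversations is hypothetical, not the dataset’s actual size:
    # Hypothetical sizes, for illustration only
    n_reasoning = 1000
    chat_percentage = 0.75
    n_chat = int(n_reasoning * (1.0 - chat_percentage))  # 250 instruction examples sampled
    total = n_reasoning + n_chat                          # 1250 combined rows
    # The sample size is defined relative to the reasoning set, so the combined data
    # ends up with a 1000:250 (4:1) ratio of reasoning to instruction examples.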
    from datasets import Dataset

    combined_dataset = Dataset.from_pandas(pd.DataFrame(data))
    combined_dataset = combined_dataset.shuffle(seed=3407)

    from trl import SFTTrainer, SFTConfig

    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=combined_dataset,
        eval_dataset=None,
        args=SFTConfig(
            dataset_text_field="text",
            per_device_train_batch_size=2,
            gradient_accumulation_steps=4,
            warmup_steps=5,
            max_steps=30,
            learning_rate=2e-4,
            logging_steps=1,
            optim="adamw_8bit",
            weight_decay=0.01,
            lr_scheduler_type="linear",
            seed=3407,
            report_to="none",
        )
    )

    We take the preprocessed conversations, wrap them into a Hugging Face Dataset (ensuring the data is in a consistent format), and shuffle it with a fixed seed for reproducibility. Then, the fine-tuning trainer is initialized using trl’s SFTTrainer and SFTConfig. The trainer is set up to use the combined dataset (with the text column named “text”) and defines training hyperparameters such as batch size, gradient accumulation, number of warmup and training steps, learning rate, optimizer settings, and a linear learning rate scheduler. This configuration is geared toward efficient fine-tuning while maintaining reproducibility and keeping logging minimal (report_to="none").
    trainer.train()
    trainer.train() starts the fine-tuning process for the Qwen3-14B model using the SFTTrainer. It trains the model on the prepared mixed dataset of reasoning and instruction-following conversations, optimizing only the LoRA-adapted parameters thanks to the underlying Unsloth setup. Training proceeds according to the configuration specified earlier (e.g., max_steps=30, per_device_train_batch_size=2, learning_rate=2e-4), and progress is printed at every logging step. This final command launches the actual model adaptation based on your custom data.
    model.save_pretrained("qwen3-finetuned-colab")
    tokenizer.save_pretrained("qwen3-finetuned-colab")
    We save the fine-tuned model and tokenizer locally to the “qwen3-finetuned-colab” directory. By calling save_pretrained(), the adapted weights and tokenizer configuration can be reloaded later for inference or further training, either locally or for uploading to the Hugging Face Hub.
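    The tutorial stops at saving the checkpoint. As an illustrative follow-up, a minimal sketch of reloading the saved adapters for inference with Unsloth might look like the following; the prompt, generation settings, and 2048-token context are assumptions for this sketch, not taken from the original notebook:
    from unsloth import FastLanguageModel

    # Reload the base model together with the LoRA adapters saved above
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name = "qwen3-finetuned-colab",  # directory written by save_pretrained
        max_seq_length = 2048,
        load_in_4bit = True,
    )
    FastLanguageModel.for_inference(model)  # switch Unsloth to its faster inference mode

    # Hypothetical prompt, just to exercise the fine-tuned model
    messages = [{"role": "user", "content": "Solve step by step: 12 * 7 + 5"}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=256)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))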
    In conclusion, with the help of Unsloth AI, fine-tuning massive LLMs like Qwen3-14B becomes feasible with limited resources, efficient, and accessible. This tutorial demonstrated how to load a 4-bit quantized version of the model, apply structured chat templates, mix multiple datasets for better generalization, and train using TRL’s SFTTrainer. Whether you’re building custom assistants or specialized domain models, Unsloth’s tools dramatically reduce the barrier to fine-tuning at scale. As open-source fine-tuning ecosystems evolve, Unsloth continues to lead the way in making LLM training faster, cheaper, and more practical for everyone.

    Check out the COLAB NOTEBOOK. All credit for this research goes to the researchers of this project.

  • Microsoft Surface Pro 12 Review: Compact Copilot+ Windows device built for silence, stamina, and adaptability

    PROS:
    Highly Portable: Lightweight and compact with balanced ergonomics for easy one-handed use
    Quiet, Efficient Performance: Fanless design runs silently while handling daily tasks smoothly
    Improved Input Design: Redesigned keyboard and pen integration enhance usability
    Eco-Friendly Materials: Uses recycled cobalt, aluminum, and packaging to reduce impact
    CONS:
    Accessories Sold Separately: Keyboard and charger increase total cost significantly
    Limited Ports: No USB-A or headphone jack requires adapters
    Not Built for Heavy Creative Work: Struggles with intensive editing or gaming tasks

    RATINGS:
    Aesthetics, Ergonomics, Performance, Sustainability / Repairability, Value for Money (individual scores not preserved in this text)
    EDITOR'S QUOTE: Smart, silent, and travel-ready. The Surface Pro 12 cuts the bulk while keeping the features that matter for real work and play.
    Microsoft’s Surface Pro 12 arrives with the subtlety of a whisper and the impact of a shout. The newest addition to Microsoft’s 2-in-1 lineup doesn’t announce itself with flashy gimmicks or revolutionary redesigns. Instead, it quietly refines what we’ve come to expect from the Surface family while carving out its own distinct identity in an increasingly crowded market. Smaller, lighter, and more nimble than its predecessors, this 12-inch tablet-laptop hybrid represents Microsoft’s most focused attempt yet at balancing power and portability.
    Designer: Microsoft
    I’ve spent considerable time with this device, exploring its capabilities and limitations across various use cases. What emerges is a fascinating study in compromise and calculation.
    The Surface Pro 12 exists in an interesting middle ground. It’s not the most powerful Surface device you can buy. It’s not the largest or the most premium. But that’s precisely the point. Microsoft has crafted something deliberately positioned to appeal to users who found previous Surface models either too unwieldy or too expensive.
    Does it succeed? That depends entirely on what you’re looking for.
    For some, the 12-inch form factor will feel like the Goldilocks zone. Not too big, not too small, but just right. For others, the compromises made to achieve this more compact design might prove frustrating. And hovering over everything is the question of value: at its base-model price, is this the Surface that finally makes sense for mainstream consumers?

    The timing couldn’t be more interesting. As Microsoft pushes forward with its Copilot+ PC initiative, the Surface Pro 12 arrives as one of the standard-bearers for this new AI-focused computing paradigm. With its Snapdragon X Plus processor and dedicated NPU delivering 45 TOPS of AI performance, this diminutive device packs surprising computational muscle specifically tuned for the next generation of AI-powered applications.
    But specs only tell part of the story. The real question is how all this technology comes together in daily use. Can the Surface Pro 12 deliver on Microsoft’s promises of all-day battery life and responsive performance in a more portable package? And perhaps more importantly, does it justify its existence in a lineup that already includes the more powerful Surface Pro 13-inch?
    Let’s find out.
    Design and Ergonomics
    Pick up the Surface Pro 12, and something immediately feels different. The weight distribution. The rounded edges. The way it nestles into your palm with unexpected comfort. At just 1.5 pounds, this isn’t Microsoft’s lightest device ever, but it might be their most thoughtfully balanced.
    I found myself reaching for it instinctively throughout the day. Its 0.30-inch thickness, combined with its compact footprint, makes it substantially more comfortable to hold in one hand than previous Surface models. This matters tremendously for a device meant to transition seamlessly between laptop and tablet modes.

    Microsoft has embraced a more organic design language here. Gone are the sharper edges of previous generations, replaced by gently rounded corners that echo the aesthetic of modern tablets. The bezels have shrunk considerably, though they’re still present enough to provide a comfortable grip without triggering accidental touches. The overall effect is subtle but significant. This feels less like a business tool and more like a personal device.
    The color options deserve special mention. Beyond the standard Platinum, Microsoft offers Ocean and Violet. These aren’t the bold, saturated hues you might expect from consumer electronics, but rather subdued, mature tones that manage to feel both professional and personal. The Violet, in particular, strikes an interesting balance. It is distinctive without being flashy.
    Flip the device around and you’ll notice the integrated kickstand, a Surface hallmark that continues to distinguish these devices from iPad competitors. The hinge feels remarkably solid, with 165 degrees of smooth, consistent resistance. You can position it at virtually any angle, from nearly flat to upright, and it stays exactly where you place it. This flexibility proves invaluable when using the device on uneven surfaces like your lap or a bed.

    The port selection remains minimal. Two USB-C 3.2 ports with DisplayPort 1.4a support handle all your connectivity needs. They’re well-positioned and work with a wide range of accessories, but the absence of a headphone jack or USB-A port means dongles will remain a fact of life for many users. This minimalist approach keeps the device slim but demands some adaptability from users with legacy peripherals.
    What about the keyboard? The optional Surface Pro 12-inch Keyboard represents a significant redesign. Microsoft has removed the Alcantara fabric from the palm rest, opting instead for a clean, monochromatic matte finish that feels premium to the touch. The fabric hasn’t disappeared entirely. It’s now relegated to the back of the keyboard cover, providing a pleasant tactile contrast when carrying the closed device.
    The typing experience surpasses expectations for such a compact keyboard. Key travel feels generous, with a satisfying tactile response that avoids the mushiness common to many tablet keyboards. The layout is thoughtfully designed, with full-sized keys in the central typing area and slightly compressed function and specialty keys at the edges. After a brief adjustment period, I was typing at nearly my full speed.
    The trackpad deserves equal praise. It’s responsive, accurate, and reasonably sized given the constraints of the 12-inch form factor. Microsoft has clearly prioritized quality over size here, and the result is a tracking surface that rarely frustrates.

    Perhaps the most significant ergonomic improvement involves the Surface Slim Pen. Rather than attaching to the keyboard as in previous models, it now magnetically snaps to the back of the tablet itself. The connection is surprisingly strong. You can shake the tablet vigorously without dislodging the pen. This redesign serves multiple purposes: it keeps the pen accessible whether you’re using the keyboard or not, it allows for wireless charging of the pen, and it slightly reduces the keyboard’s footprint.

    The front-facing camera placement requires some adjustment. Located at the top of the display when in landscape orientation, it creates a slightly downward-facing angle during video calls when using the kickstand. This isn’t ideal for presenting your best angle, though it’s a common compromise in tablet design. Switching to portrait orientation provides a more flattering angle but isn’t always practical for extended calls.

    Audio performance exceeds expectations for a device this size. The dual 2W stereo speakers with Dolby Atmos support deliver clear, room-filling sound with surprising bass response. They’re positioned perfectly to create a convincing stereo image when the device is in landscape orientation, making the Surface Pro 12 a legitimate option for casual movie watching without headphones.
    The most impressive aspect of the Surface Pro 12’s design is not any one feature, but how all the elements work together cohesively. The proportions feel natural, the weight distribution is balanced, and the materials and finishes complement each other nicely. This device has been refined over several generations, and that accumulated knowledge is evident in numerous small details.
    Performance
    The Surface Pro 12 introduces an intriguing performance proposition. Microsoft has equipped this compact device with Qualcomm’s Snapdragon X Plus processor, an 8-core variant of the chip powering many of this year’s AI-focused laptops. This marks a significant departure from Intel-based Surface devices of the past. The question isn’t whether this processor is powerful. It is. The question is whether it’s the right kind of powerful for your specific needs.
    For everyday computing, the answer is a resounding yes. The system boots instantly, apps launch without hesitation, and multitasking feels remarkably fluid. I routinely ran multiple Office applications alongside dozens of browser tabs without encountering any slowdown. This responsiveness extends to more demanding productivity tasks like photo editing in Adobe Lightroom, where the device handled 20+ megapixel RAW files with surprising agility.
    What makes this performance particularly impressive is the complete absence of fan noise. The Surface Pro 12 features a fanless design with no vents whatsoever. Even under sustained workloads, the device remains silent, with only minimal warming of the chassis. This thermal efficiency represents a significant quality-of-life improvement over previous Surface models, especially in quiet environments like libraries or meeting rooms.

    Benchmark results confirm these subjective impressions. In Geekbench 6, the Surface Pro 12 scored around 2,250 for single-core and 9,500 for multi-core performance. These numbers put it in the same neighborhood as many Intel Core Ultra 5-powered laptops, particularly for single-core tasks where the Snapdragon X Plus shows impressive efficiency. Cinebench results tell a similar story, with scores that would have been considered high-end just a couple of generations ago.
    Battery life represents perhaps the most significant performance advantage. Microsoft claims up to 16 hours of video playback and 12 hours of active web usage. In my testing, these numbers proved surprisingly accurate. A full day of mixed productivity work left me with 25 to 30 percent battery remaining. More impressively, the device sips power when idle, losing just a few percentage points overnight. This efficiency means you can confidently leave your charger at home for most workdays.
    When you do need to charge, the process is refreshingly quick. Using the optional 45-watt USB-C charger, the Surface Pro 12 reaches 50 percent battery in approximately 30 minutes and 80 percent in about an hour. This rapid charging capability further enhances the device’s practicality for mobile professionals.
    The neural processing unit (NPU) deserves special attention. With 45 TOPS of AI performance, the Qualcomm Hexagon NPU positions the Surface Pro 12 as a capable platform for Microsoft’s growing ecosystem of AI-enhanced applications. Features like Windows Studio Effects, which provides background blur and eye contact correction during video calls, run smoothly without taxing the main CPU. The upcoming Recall feature, which promises to help you find anything you’ve seen on your PC, also leverages this dedicated AI hardware.
    Memory and storage configurations are straightforward. All models include 16GB of LPDDR5x RAM, which proves ample for most productivity workflows. Storage options include either 256GB or 512GB of UFS storage. While not as fast as the PCIe SSDs found in premium laptops, these storage solutions deliver respectable performance for everyday tasks. The absence of user-upgradeable components means choosing the right configuration at purchase time is crucial.
    Connectivity options enhance the overall performance picture. Wi-Fi 7 support ensures the fastest possible wireless connections on compatible networks, while Bluetooth 5.4 provides reliable connections to peripherals. The two USB-C ports support DisplayPort 1.4a, allowing you to drive up to two 4K monitors at 60Hz, a significant upgrade for productivity.

    Where does the Surface Pro 12 fall short? Demanding creative applications like video editing or 3D rendering will push this system to its limits. While it can handle these tasks, you’ll experience longer render times compared to more powerful systems. Similarly, gaming capabilities are limited to older titles, cloud gaming services, or less demanding indie games. This isn’t a gaming machine by any stretch.
    It’s also worth noting that while Windows on ARM compatibility has improved dramatically, you may occasionally encounter software that doesn’t run optimally or requires emulation. Microsoft’s Rosetta-like translation layer handles most x86 applications admirably, but with some performance penalty. Fortunately, major productivity applications like the Microsoft Office suite and Adobe Creative Cloud now offer native ARM versions that run beautifully.
    The performance story of the Surface Pro 12 is ultimately about balance. Microsoft has created a device that delivers impressive responsiveness for everyday tasks while maximizing battery life and eliminating fan noise. For the target audience, this balance hits a sweet spot that many will find compelling.
    Sustainability
    Surface devices have rarely been evaluated through an environmental lens. That shifts with the Surface Pro 12. Microsoft’s latest tablet-laptop hybrid takes a material-first approach to reducing its ecological footprint, applying tangible revisions in sourcing, assembly, and lifecycle design.
    The battery introduces a foundational change. This is the first Surface Pro to use 100 percent recycled cobalt inside the cell. The shift matters. Cobalt extraction is linked to heavy environmental degradation and labor violations, particularly in regions where the material is most abundant. Using recycled cobalt minimizes dependency on these supply chains while maintaining performance.

    Microsoft applies similar logic to the enclosure. The casing incorporates at least 82.9 percent recycled content, including fully recycled aluminum alloy and rare earth elements. These metals are essential to core functions like audio and haptic feedback, but traditional sourcing is energy-intensive and harmful to ecosystems. Recycling them cuts the carbon load while preserving durability. The recycled aluminum, in particular, reduces energy consumption by over 90 percent compared to newly smelted metal.
    Packaging aligns with this direction. Microsoft states that 71 percent of wood-fiber packaging uses recycled material, and all virgin paper is sourced from responsibly managed forests. The result feels considered and premium, but without the typical waste profile seen in high-end electronics.
    Power efficiency is handled by both certification and architecture. The Surface Pro 12 meets ENERGY STAR criteria. Its Snapdragon processor operates on a performance-per-watt model, reducing heat and load during basic workflows without sacrificing responsiveness.
    Repairability has also improved. Microsoft includes labeled components and internal diagrams that support technician-guided part replacements. These efforts fall short of true user-repairability, but they increase the odds that broken devices will be fixed rather than discarded.
    A trade-in program supports hardware recovery for U.S. commercial customers. The initiative encourages responsible disposal and keeps materials in circulation longer.
    This model moves the Surface series closer to a lower-impact future. Microsoft still relies on proprietary accessories that may not carry forward. The keyboard and pen are not backward compatible with earlier models. That limits cross-generation reuse and could introduce avoidable waste. True modularity is still missing.
    Even with those constraints, the Surface Pro 12 represents the most focused sustainability effort in the product line to date. Material sourcing, energy use, and packaging all reflect an intention to lower the cost to the planet without compromising design or performance.
    Value and Wrap-up
    The Surface Pro 12 redefines how compact Windows hardware can serve practical, real-world needs. Its value isn’t rooted in technical dominance or low pricing. It comes from how effectively the device supports a mobile, focused workflow.
    This model favors portability and responsiveness over excess. It’s built for those who move constantly between meetings, transit, and flexible workspaces, without wanting to sacrifice the continuity of a full Windows environment. The smaller form factor isn’t a downgrade. It’s deliberate, eliminating clutter and favoring daily-use speed, comfort, and silence.

    Microsoft’s design choices reflect this purpose. From the near-instant wake time to the magnetic keyboard closure, the experience is tuned to reduce friction. That fluidity helps the device become second nature. It’s not about raw performance. It’s about always being ready.
    The inclusion of dedicated AI hardware gives the Surface Pro 12 another dimension. As more Windows features become NPU-dependent, this machine stays relevant. You’re not just buying current functionality. You’re investing in a platform with a longer upgrade arc.
    The accessory pricing remains clunky. But over time, the value balances out through longevity and reduced dependency on external gear. Build quality, battery endurance, and AI readiness all support longer ownership without the usual performance decay.
    What makes the Surface Pro 12 stand out is discipline. Microsoft didn’t stretch this device to cover every use case. Instead, it doubled down on a clear objective: make a serious, portable Windows tool that respects your time and space. The result is confident and complete.
    #microsoft #surface #pro #review #compact
    Microsoft Surface Pro 12 Review: Compact Copilot+ Windows device built for silence, stamina, and adaptability
    PROS: Highly Portable: Lightweight and compact with balanced ergonomics for easy one-handed use Quiet, Efficient Performance: Fanless design runs silently while handling daily tasks smoothly Improved Input Design: Redesigned keyboard and pen integration enhance usability Eco-Friendly Materials: Uses recycled cobalt, aluminum, and packaging to reduce impact CONS: Accessories Sold Separately: Keyboard and charger increase total cost significantly Limited Ports: No USB-A or headphone jack requires adapters Not Built for Heavy Creative Work: Struggles with intensive editing or gaming tasks RATINGS: AESTHETICSERGONOMICSPERFORMANCESUSTAINABILITY / REPAIRABILITYVALUE FOR MONEYEDITOR'S QUOTE:Smart, silent, and travel-ready. The Surface Pro 12 cuts the bulk while keeping the features that matter for real work and play. Microsoft’s Surface Pro 12 arrives with the subtlety of a whisper and the impact of a shout. The newest addition to Microsoft’s 2-in-1 lineup doesn’t announce itself with flashy gimmicks or revolutionary redesigns. Instead, it quietly refines what we’ve come to expect from the Surface family while carving out its own distinct identity in an increasingly crowded market. Smaller, lighter, and more nimble than its predecessors, this 12-inch tablet-laptop hybrid represents Microsoft’s most focused attempt yet at balancing power and portability. Designer: Microsoft I’ve spent considerable time with this device, exploring its capabilities and limitations across various use cases. What emerges is a fascinating study in compromise and calculation. The Surface Pro 12 exists in an interesting middle ground. It’s not the most powerful Surface device you can buy. It’s not the largest or the most premium. But that’s precisely the point. Microsoft has crafted something deliberately positioned to appeal to users who found previous Surface models either too unwieldy or too expensive. Does it succeed? That depends entirely on what you’re looking for. For some, the 12-inch form factor will feel like the Goldilocks zone. Not too big, not too small, but just right. For others, the compromises made to achieve this more compact design might prove frustrating. And hovering over everything is the question of value: at for the base model, is this the Surface that finally makes sense for mainstream consumers? The timing couldn’t be more interesting. As Microsoft pushes forward with its Copilot+ PC initiative, the Surface Pro 12 arrives as one of the standard-bearers for this new AI-focused computing paradigm. With its Snapdragon X Plus processor and dedicated NPU delivering 45 TOPS of AI performance, this diminutive device packs surprising computational muscle specifically tuned for the next generation of AI-powered applications. But specs only tell part of the story. The real question is how all this technology comes together in daily use. Can the Surface Pro 12 deliver on Microsoft’s promises of all-day battery life and responsive performance in a more portable package? And perhaps more importantly, does it justify its existence in a lineup that already includes the more powerful Surface Pro 13-inch? Let’s find out. Design and Ergonomics Pick up the Surface Pro 12, and something immediately feels different. The weight distribution. The rounded edges. The way it nestles into your palm with unexpected comfort. At just 1.5 pounds, this isn’t Microsoft’s lightest device ever, but it might be their most thoughtfully balanced. I found myself reaching for it instinctively throughout the day. 
Its 0.30-inch thickness, combined with its compact footprint, makes it substantially more comfortable to hold in one hand than previous Surface models. This matters tremendously for a device meant to transition seamlessly between laptop and tablet modes. Microsoft has embraced a more organic design language here. Gone are the sharper edges of previous generations, replaced by gently rounded corners that echo the aesthetic of modern tablets. The bezels have shrunk considerably, though they’re still present enough to provide a comfortable grip without triggering accidental touches. The overall effect is subtle but significant. This feels less like a business tool and more like a personal device. The color options deserve special mention. Beyond the standard Platinum, Microsoft offers Oceanand Violet. These aren’t the bold, saturated hues you might expect from consumer electronics, but rather subdued, mature tones that manage to feel both professional and personal. The Violet, in particular, strikes an interesting balance. It is distinctive without being flashy. Flip the device around and you’ll notice the integrated kickstand, a Surface hallmark that continues to distinguish these devices from iPad competitors. The hinge feels remarkably solid, with 165 degrees of smooth, consistent resistance. You can position it at virtually any angle, from nearly flat to upright, and it stays exactly where you place it. This flexibility proves invaluable when using the device on uneven surfaces like your lap or a bed. The port selection remains minimal. Two USB-C 3.2 ports with DisplayPort 1.4a support handle all your connectivity needs. They’re well-positioned and work with a wide range of accessories, but the absence of a headphone jack or USB-A port means dongles will remain a fact of life for many users. This minimalist approach keeps the device slim but demands some adaptability from users with legacy peripherals. What about the keyboard? The optional Surface Pro 12-inch Keyboardrepresents a significant redesign. Microsoft has removed the Alcantara fabric from the palm rest, opting instead for a clean, monochromatic matte finish that feels premium to the touch. The fabric hasn’t disappeared entirely. It’s now relegated to the back of the keyboard cover, providing a pleasant tactile contrast when carrying the closed device. The typing experience surpasses expectations for such a compact keyboard. Key travel feels generous, with a satisfying tactile response that avoids the mushiness common to many tablet keyboards. The layout is thoughtfully designed, with full-sized keys in the central typing area and slightly compressed function and specialty keys at the edges. After a brief adjustment period, I was typing at nearly my full speed. The trackpad deserves equal praise. It’s responsive, accurate, and reasonably sized given the constraints of the 12-inch form factor. Microsoft has clearly prioritized quality over size here, and the result is a tracking surface that rarely frustrates. Perhaps the most significant ergonomic improvement involves the Surface Slim Pen. Rather than attaching to the keyboard as in previous models, it now magnetically snaps to the back of the tablet itself. The connection is surprisingly strong. You can shake the tablet vigorously without dislodging the pen. This redesign serves multiple purposes: it keeps the pen accessible whether you’re using the keyboard or not, it allows for wireless charging of the pen, and it slightly reduces the keyboard’s footprint. 
The front-facing camera placement requires some adjustment. Located at the top of the display when in landscape orientation, it creates a slightly downward-facing angle during video calls when using the kickstand. This isn’t ideal for presenting your best angle, though it’s a common compromise in tablet design. Switching to portrait orientation provides a more flattering angle but isn’t always practical for extended calls. Audio performance exceeds expectations for a device this size. The dual 2W stereo speakers with Dolby Atmos support deliver clear, room-filling sound with surprising bass response. They’re positioned perfectly to create a convincing stereo image when the device is in landscape orientation, making the Surface Pro 12 a legitimate option for casual movie watching without headphones. The most impressive aspect of the Surface Pro 12’s design is not any one feature, but how all the elements work together cohesively. The proportions feel natural, the weight distribution is balanced, and the materials and finishes complement each other nicely. This device has been refined over several generations, and that accumulated knowledge is evident in numerous small details. Performance The Surface Pro 12 introduces an intriguing performance proposition. Microsoft has equipped this compact device with Qualcomm’s Snapdragon X Plus processor, an 8-core variant of the chip powering many of this year’s AI-focused laptops. This marks a significant departure from Intel-based Surface devices of the past. The question isn’t whether this processor is powerful. It is. The question is whether it’s the right kind of powerful for your specific needs. For everyday computing, the answer is a resounding yes. The system boots instantly, apps launch without hesitation, and multitasking feels remarkably fluid. I routinely ran multiple Office applications alongside dozens of browser tabs without encountering any slowdown. This responsiveness extends to more demanding productivity tasks like photo editing in Adobe Lightroom, where the device handled 20+ megapixel RAW files with surprising agility. What makes this performance particularly impressive is the complete absence of fan noise. The Surface Pro 12 features a fanless design with no vents whatsoever. Even under sustained workloads, the device remains silent, with only minimal warming of the chassis. This thermal efficiency represents a significant quality-of-life improvement over previous Surface models, especially in quiet environments like libraries or meeting rooms. Benchmark results confirm these subjective impressions. In Geekbench 6, the Surface Pro 12 scored around 2,250 for single-core and 9,500 for multi-core performance. These numbers put it in the same neighborhood as many Intel Core Ultra 5-powered laptops, particularly for single-core tasks where the Snapdragon X Plus shows impressive efficiency. Cinebench results tell a similar story, with scores that would have been considered high-end just a couple of generations ago. Battery life represents perhaps the most significant performance advantage. Microsoft claims up to 16 hours of video playback and 12 hours of active web usage. In my testing, these numbers proved surprisingly accurate. A full day of mixed productivity workleft me with 25 to 30 percent battery remaining. More impressively, the device sips power when idle, losing just a few percentage points overnight. This efficiency means you can confidently leave your charger at home for most workdays. 
When you do need to charge, the process is refreshingly quick. Using the optional 45-watt USB-C charger ($70), the Surface Pro 12 reaches 50 percent battery in approximately 30 minutes and 80 percent in about an hour. This rapid charging capability further enhances the device’s practicality for mobile professionals.
The neural processing unit (NPU) deserves special attention. With 45 TOPS of AI performance, the Qualcomm Hexagon NPU positions the Surface Pro 12 as a capable platform for Microsoft’s growing ecosystem of AI-enhanced applications. Features like Windows Studio Effects, which provides background blur and eye contact correction during video calls, run smoothly without taxing the main CPU. The upcoming Recall feature, which promises to help you find anything you’ve seen on your PC, also leverages this dedicated AI hardware.
Memory and storage configurations are straightforward. All models include 16GB of LPDDR5x RAM, which proves ample for most productivity workflows. Storage options include either 256GB or 512GB of UFS storage. While not as fast as the PCIe SSDs found in premium laptops, these storage solutions deliver respectable performance for everyday tasks. The absence of user-upgradeable components means choosing the right configuration at purchase time is crucial.
Connectivity options enhance the overall performance picture. Wi-Fi 7 support ensures the fastest possible wireless connections on compatible networks, while Bluetooth 5.4 provides reliable connections to peripherals. The two USB-C ports support DisplayPort 1.4a, allowing you to drive up to two 4K monitors at 60Hz, a significant upgrade for productivity.
Where does the Surface Pro 12 fall short? Demanding creative applications like video editing or 3D rendering will push this system to its limits. While it can handle these tasks, you’ll experience longer render times compared to more powerful systems. Similarly, gaming capabilities are limited to older titles, cloud gaming services, or less demanding indie games. This isn’t a gaming machine by any stretch.
It’s also worth noting that while Windows on ARM compatibility has improved dramatically, you may occasionally encounter software that doesn’t run optimally or requires emulation. Microsoft’s Rosetta-like translation layer handles most x86 applications admirably, but with some performance penalty. Fortunately, major productivity applications like the Microsoft Office suite and Adobe Creative Cloud now offer native ARM versions that run beautifully.
The performance story of the Surface Pro 12 is ultimately about balance. Microsoft has created a device that delivers impressive responsiveness for everyday tasks while maximizing battery life and eliminating fan noise. For the target audience (mobile professionals, students, and productivity-focused users), this balance hits a sweet spot that many will find compelling.
Sustainability
Surface devices have rarely been evaluated through an environmental lens. That shifts with the Surface Pro 12. Microsoft’s latest tablet-laptop hybrid takes a material-first approach to reducing its ecological footprint, applying tangible revisions in sourcing, assembly, and lifecycle design.
The battery introduces a foundational change. This is the first Surface Pro to use 100 percent recycled cobalt inside the cell. The shift matters. Cobalt extraction is linked to heavy environmental degradation and labor violations, particularly in regions where the material is most abundant. Using recycled cobalt minimizes dependency on these supply chains while maintaining performance.
Microsoft applies similar logic to the enclosure. The casing incorporates at least 82.9 percent recycled content, including fully recycled aluminum alloy and rare earth elements. These metals are essential to core functions like audio and haptic feedback, but traditional sourcing is energy-intensive and harmful to ecosystems. Recycling them cuts the carbon load while preserving durability. The recycled aluminum, in particular, reduces energy consumption by over 90 percent compared to newly smelted metal.
Packaging aligns with this direction. Microsoft states that 71 percent of wood-fiber packaging uses recycled material, and all virgin paper is sourced from responsibly managed forests. The result feels considered and premium, but without the typical waste profile seen in high-end electronics.
Power efficiency is handled by both certification and architecture. The Surface Pro 12 meets ENERGY STAR criteria. Its Snapdragon processor operates on a performance-per-watt model, reducing heat and load during basic workflows without sacrificing responsiveness.
Repairability has also improved. Microsoft includes labeled components and internal diagrams that support technician-guided part replacements. These efforts fall short of true user-repairability, but they increase the odds that broken devices will be fixed rather than discarded. A trade-in program supports hardware recovery for U.S. commercial customers. The initiative encourages responsible disposal and keeps materials in circulation longer. This model moves the Surface series closer to a lower-impact future.
Microsoft still relies on proprietary accessories that may not carry forward. The keyboard and pen are not backward compatible with earlier models. That limits cross-generation reuse and could introduce avoidable waste. True modularity is still missing. Even with those constraints, the Surface Pro 12 represents the most focused sustainability effort in the product line to date. Material sourcing, energy use, and packaging all reflect an intention to lower the cost to the planet without compromising design or performance.
Value and Wrap-up
The Surface Pro 12 redefines how compact Windows hardware can serve practical, real-world needs. Its value isn’t rooted in technical dominance or low pricing. It comes from how effectively the device supports a mobile, focused workflow. This model favors portability and responsiveness over excess. It’s built for those who move constantly between meetings, transit, and flexible workspaces, without wanting to sacrifice the continuity of a full Windows environment. The smaller form factor isn’t a downgrade. It’s deliberate, eliminating clutter and favoring daily-use speed, comfort, and silence.
Microsoft’s design choices reflect this purpose. From the near-instant wake time to the magnetic keyboard closure, the experience is tuned to reduce friction. That fluidity helps the device become second nature. It’s not about raw performance. It’s about always being ready.
The inclusion of dedicated AI hardware gives the Surface Pro 12 another dimension. As more Windows features become NPU-dependent, this machine stays relevant. You’re not just buying current functionality. You’re investing in a platform with a longer upgrade arc. The accessory pricing remains clunky. But over time, the value balances out through longevity and reduced dependency on external gear. Build quality, battery endurance, and AI readiness all support longer ownership without the usual performance decay.
What makes the Surface Pro 12 stand out is discipline. Microsoft didn’t stretch this device to cover every use case. Instead, it doubled down on a clear objective: make a serious, portable Windows tool that respects your time and space. The result is confident and complete.
PROS:
Highly Portable: Lightweight and compact with balanced ergonomics for easy one-handed use
Quiet, Efficient Performance: Fanless design runs silently while handling daily tasks smoothly
Improved Input Design: Redesigned keyboard and pen integration enhance usability
Eco-Friendly Materials: Uses recycled cobalt, aluminum, and packaging to reduce impact
CONS:
Accessories Sold Separately: Keyboard and charger increase total cost significantly
Limited Ports: No USB-A port or headphone jack, so adapters are required
Not Built for Heavy Creative Work: Struggles with intensive editing or gaming tasks
EDITOR’S QUOTE: Smart, silent, and travel-ready. The Surface Pro 12 cuts the bulk while keeping the features that matter for real work and play.
  • New Sunlu FilaDryer SP2: Technical specifications and pricing

    Chinese 3D printing technology firm SUNLU has released a new filament drying system designed to offer more flexibility for 3D printing users managing multiple spools. 
    Called the FilaDryer SP2, the unit separates its heating base from the drying chambers, allowing up to three chambers to be stacked vertically. This setup is aimed at users running several printers at once, where space is limited and filament drying often becomes a bottleneck.
    Founded in 2013 in Zhuhai, China, SUNLU specializes in 3D printing materials and hardware, with a product range that includes filaments, resins, and accessories. Having sold more than 25 million units, the company runs over 140 production lines and holds over 200 granted IP rights related to materials and equipment design.
    FilaDryer SP2 Capacity. Image via Sunlu.
    Scalable design with efficient drying
    One of the key goals behind the SP2 is to improve drying efficiency. In repeated tests with PETG, the system brought moisture levels down from 0.6% to 0.2% in just four hours. That’s around 30% faster than other dryers in the same category. The adjustable temperature range, which runs from 35°C to 70°C, makes it suitable for common materials like PLA as well as moisture-sensitive filaments such as nylon. 
    Each chamber includes a built-in humidity sensor for real-time monitoring and offers enough space to hold either two 1kg spools, a single 2–3kg spool, or spools measuring up to 250 mm in diameter and 153 mm in width. The generous capacity is especially useful during long print jobs, where smaller dryers often fall short. In one example, saturated PLA reached optimal moisture levels within four to six hours at 55°C, demonstrating the system’s effectiveness for extended use cases.
    Safety features are built into the SP2’s design. It uses a ceramic PTC heater combined with a smart fan to ensure heat is evenly distributed. A mechanical cutoff switch activates if temperatures spike unexpectedly, while a PID-controlled microprocessor keeps the system stable within a one-degree range. 
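    For readers curious what “PID-controlled within a one-degree range” means in practice, the sketch below shows a generic discrete PID temperature loop. It is purely illustrative and not SUNLU’s firmware: the gains (kp, ki, kd) and the read_temperature / set_heater_power callbacks are hypothetical placeholders standing in for whatever sensor and heater driver the device actually uses.

```python
import time

def pid_step(target, current, state, kp=4.0, ki=0.2, kd=1.0, dt=1.0):
    """One discrete PID update; returns a heater duty cycle in percent."""
    error = target - current
    state["integral"] += error * dt
    derivative = (error - state["previous_error"]) / dt
    state["previous_error"] = error
    output = kp * error + ki * state["integral"] + kd * derivative
    return max(0.0, min(100.0, output))  # clamp to the 0-100% heater range

def run_dryer(read_temperature, set_heater_power, target_c=55.0):
    """Toy control loop: poll the temperature once a second and adjust the heater.

    read_temperature and set_heater_power are hypothetical callbacks
    (e.g. a thermistor read and a PTC heater power setter), not real APIs.
    """
    state = {"integral": 0.0, "previous_error": 0.0}
    while True:
        current = read_temperature()               # current chamber temperature in deg C
        duty = pid_step(target_c, current, state)  # percent power for the heater
        set_heater_power(duty)
        time.sleep(1.0)
```

    Tuned gains like these let a controller correct small deviations before they grow, which is how a dryer can hold its setpoint within roughly one degree despite the heater switching on and off.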
    The device has passed 72-hour continuous operation tests and was also run through high-stress scenarios to confirm reliability. Additional silicone gaskets at connection points help keep moisture out and maintain consistent drying conditions.
    Compared to conventional dryers, which are typically fixed in design and limited to a single chamber, the SP2 offers a more scalable solution. Instead of purchasing an entirely new unit, users can expand capacity by adding additional chambers. It also accommodates smaller spools without the need for adapters or modifications. 
    Additionally, internal testing indicates up to 35% lower energy consumption than similar products. In early use, the system has also been noted for its quiet operation at 42dB, stable stacking, and consistent temperature control across extended sessions.
    Dry and store filaments with built-in heating and humidity control. Image via Sunlu.
    Technical specifications and pricing
    For pricing details and more information, readers can visit Sunlu’s website.
    Product Name: SUNLU Filament Dryer SP2 / Storage Box Set
    Product Dimensions: 278mm × 208mm × 396mm (L × W × H)
    Internal Dimensions: 265mm × 193mm × 274mm (L × W × H)
    Package Size: 337mm × 263mm × 380mm (L × W × H)
    Net Weight: FilaDryer SP2 Set: 3.6kg, Storage Box Set: 5.0kg
    Gross Weight: FilaDryer SP2 Set: 4.1kg, Storage Box Set: 5.5kg
    Operating Environment: Temperature 10°C–35°C, Relative Humidity ≤95%
    Compatible Filament Diameters: Φ1.75mm / Φ2.85mm
    Maximum Spool Size: Φ250mm × 153mm (1kg × 2, 2kg × 1, or 3kg × 1)
    Working Temperature Range: 35°C–70°C (PLA, PLA+, PETG, ABS, TPU, PVA, PVB, ASA, PA/PC, etc.)
    Humidity Display Range: Refer to the hygrometer: 10%–90%
    Time Setting Range: 0–99h
    Power Input: AC 110V 60Hz or AC 220V 50Hz (ensure the voltage matches your local supply before use)
    Maximum Operating Current: 2.2A @ 230V, 4.2A @ 120V
    Maximum Working Power: 250W
    Standby Power: ≤1W
    Package Contents (FilaDryer SP2 Set): Storage Box ×1, Heating Base ×1, Hygrometer ×1, Desiccant ×1, Desiccant Box ×1, Power Cable ×1, Spool Roller ×1, Air Lock ×2, PTFE Tube 8cm ×2, PTFE Tube 1m ×2, User Manual ×1
    Package Contents (Storage Box Set): Storage Box ×1, PLA+ 2.0 Black ×2, Hygrometer ×1, Desiccant ×1, Desiccant Box ×1, Spool Roller ×1, Air Lock ×2, PTFE Tube 8cm ×2, PTFE Tube 1m ×2, User Manual ×1
    Featured image shows the Sunlu FilaDryer SP2. Image via Sunlu.

    Ada Shaikhnag
    With a background in journalism, Ada has a keen interest in frontier technology and its application in the wider world. Ada reports on aspects of 3D printing ranging from aerospace and automotive to medical and dental.