• WWW.GAMEDEVELOPER.COM
    Cronos: The New Dawn devs say remote work made its unique combat possible
    Polish studio Bloober Team is rolling out more details about its upcoming survival horror game Cronos: The New Dawn, and one fascinating new feature marks a new milestone for how far the Layers of Fear studio has come. Instead of returning to the company's environmental storytelling roots, Bloober is making a survival horror game more technically ambitious than 2024's remake of Silent Hill 2.

    As in Silent Hill 2, players explore a nightmarish world with limited ammo and gangs of monstrous enemies stalking their every move—but this time, the enemies aren't just hunting the player; they're also hunting other enemies' corpses. The monsters absorb energy from downed bodies to evolve into greater threats, creating a combat loop where players must juggle threats coming in from all sides while keeping an eye on the enemies they kill, lest those bodies become food for the next enemy that spawns.

    Making a system like this requires juggling a number of disciplines: level design, AI programming, combat design, and more. If Bloober had years of experience making survival horror games, this might be expected. If the team inside Bloober that made Silent Hill 2 had started on Cronos after shipping the remake, it would feel like a natural evolution.

    But Cronos was made in parallel with Silent Hill 2. That means the developers—led by co-directors Jacek Zieba and Wojciech Piejko—could share insights with their colleagues, but were otherwise still learning how to implement survival horror combat for the first time.

    What made that process possible?
    According to Zieba and Piejko, who we spoke with at the 2025 Game Developers Conference, a key factor for success was a practice companies have been pushing back on since the end of COVID-19 lockdowns: remote hiring. Without talent from across Poland—and even outside the country—the pair say Cronos' unique combat wouldn't be the same.

    Complex combat in Cronos calls for creative collaborators

    Like Observer and The Medium, Cronos: The New Dawn is set in Bloober's hometown of Krakow, this time sending players on a time travel adventure from Soviet rule in the 1980s to an apocalyptic hellscape in a future ravaged by an event called "The Change."

    As Zieba and Piejko walked through a private demo of Cronos at GDC, the pair broke down the specific design decisions that prop up the combat loop. The harvesting mechanic at the heart of the enemy AI systems allows for multiple evolutions: enemies don't just get stronger, they can also learn new attack patterns and mutate into new character models. The pair showed how basic enemies have two tiers of evolution, meaning players who can't knock down every enemy will face an entirely new kind of monster that could turn the tide in a fight.

    Image via Bloober Team.

    The pair explained that gunplay and traversal in Cronos aren't just an updated version of what Bloober made in Silent Hill 2. Player movement and shooting are meant to mimic what is often referred to as the "tank controls" of survival horror games from the late '90s and early 2000s. Those games limited player movement, slowed down aiming, and used restrictive camera angles to make it harder to spot enemies at a distance.

    Though novel for their time, advancements in video game combat as a whole have made "tank controls" a less appealing experience for most modern players. To recreate that sensation without relying on decades-old restrictions, Bloober made two key design decisions.
    The more subtle one was to mandate that guns not fire with a simple trigger press; they need to be charged up, giving enemies time to close the gap. The more surprising choice—one that greatly diverges from Silent Hill 2—was to not give players a "dodge" button. Instead, players get a resource-dependent flamethrower burst that can push enemies away or incinerate the dead bodies enemies use for power-ups. The resource juggling is familiar to survival horror, but denying players a dodge button goes against recent trends.

    None of this, the pair said, could be done without Bloober increasing headcount and seeking out developers with specific skills.

    "We bolster a team [with] specific people," said Zieba, noting that Bloober hired a dedicated "gunsmith" whose sole job was making the charge-heavy guns. "He opened this door—it was easier for us to start [making the game]."

    Speaking of opening doors, it was Piejko who explained that Bloober could only hire some of these specialists because the company still allows remote work. "It's easier to get better specialists who love survival horror like us," he said. Zieba said the studio at large still operates in a "hybrid" mode. "But for some of them, we know there is no going back," he said. "There is no reason to go back because we are stronger because of it."

    You can't make good games without the right people

    Bloober Team's policy of continuing to allow remote work offers a clear example of what CEO Piotr Babieno meant when he spoke to Game Developer about sustainably growing a "safe" studio specializing in single-player horror games. If your studio is breaking into a new genre with complex systems no one on your team has experience with, you'll need to hire externally. And if the number of people in video games with that kind of experience isn't high, you'll need to meet them where they're at—literally.

    This doesn't mean remote work is the right solution for every company.
    But even if you want your workers coming into the office, Bloober's approach offers a clear-cut example of what you can achieve when you seek out developers with specific skills and work within their needs. If your studio is breaking genre ground, or making a game larger than anything it has made before, you're going to need talented people. And whether those people need a remote workstation, flexible hours to care for their families, or extra support for the cost and time of commuting into the office, meeting their needs—and not demanding they accommodate yours—can put you on the path to making a Cronos all your own.

    Game Developer and Game Developers Conference are sibling organizations under Informa Tech.
  • WWW.THEVERGE.COM
    Donald Trump’s crusade against offshore wind just got more serious
    The Trump administration dealt a major blow to the fledgling US offshore wind industry yesterday by ordering a major wind project off the coast of New York to stop construction. Interior Secretary Doug Burgum announced the move on X, pausing the Empire Wind project pending "further review of information that suggests the Biden administration rushed through its approval without sufficient analysis."

    Donald Trump has painted offshore wind as an environmental bogeyman since the campaign trail, linking proposed projects to whale deaths without evidence while promising to "drill, baby, drill" for oil and gas. Now, his administration is trying to stop offshore wind farms from being built, even those that have already gained federal approvals.

    Trump issued an executive order on his first day in office that stopped leasing and permitting for new offshore wind projects. Empire Wind, however, has held a federal lease since 2017 and already had state and federal permits in place. Equinor, the Norwegian company developing the project, confirmed in a press release today that it had suspended construction to comply with a notice it received from the Bureau of Ocean Energy Management. "Empire is engaging with relevant authorities to clarify this matter and is considering its legal remedies, including appealing the order," the release says.

    Construction on Empire Wind, which Equinor says has a gross book value of roughly $2.5 billion, started just this month and was slated to finish in 2027. Once complete, it was supposed to produce enough carbon pollution-free electricity for 500,000 homes in New York. Construction employed 1,500 people, according to Equinor.
    The project includes an onshore staging hub at the South Brooklyn Marine Terminal, anticipated to create about 1,000 union construction jobs.

    "Stopping work on the fully federally permitted Empire Wind 1 offshore project should send chills across all industries investing in and holding contracts with the United States Government," Liz Burdock, president and CEO of offshore energy trade group Oceantic Network, said in an emailed statement. "Preventing a permitted and financed energy project from moving forward sends a loud and clear message to all businesses - beyond those in the offshore wind industry - that their investment in the US is not safe."

    The US lags far behind Europe and China in deploying offshore wind, even though its vast coastlines give it more potential than many other nations to harness the resource. Offshore wind could meet up to a quarter of the nation's power needs by 2050, and it pairs well with the energy-hungry data centers that are pushing up power demand in the US.

    But on top of financial woes brought on by tangled supply chains and rising project costs, offshore wind has faced stiff opposition from the commercial fishing industry and residents concerned about turbines affecting ocean views. A turbine failure off the coast of Massachusetts, in which a blade broke off and plummeted into the ocean, fomented fears about wind farms' potential environmental impact. Fears about whale deaths persist, too, even though necropsies point to vessel strikes and fishing gear as the leading causes.
    "It's the industrialization of our ocean, rubber-stamped by federal agencies and delivered by a foreign-owned corporation under the guise of climate action," Bonnie Brady, executive director of the Long Island Commercial Fishing Association, said in an opinion published in the New York Post last week.

    Joe Biden had set a goal of growing US offshore wind capacity from 42 megawatts to 30,000 megawatts by 2030. Since winds are typically stronger over the ocean than on land, offshore turbines were seen as an abundant source of renewable energy that would help the US eliminate pollution from power plants and fight climate change. New York Governor Kathy Hochul vowed in a statement released yesterday to fight the Trump administration's efforts to stop Empire Wind "every step of the way."

    "If Trump had any ounce of compassion or care for the American people, he would be bolstering renewable energy projects like Empire that create stable jobs, allow families to breathe easier, and save more on electricity," Xavier Boatright, the Sierra Club's deputy legislative director for clean energy and electrification, said in an emailed statement. "Instead, Trump is yet again prioritizing the interests of Big Fossil Fuel, and making Americans pay the price." Oil and gas interests spent more than $75 million in campaign donations to get Trump elected last year. In January, Trump claimed "no new windmills" would be built while he's in office, saying they "litter" the US like "garbage in a field."
  • WWW.MARKTECHPOST.COM
    Do We Still Need Complex Vision-Language Pipelines? Researchers from ByteDance and WHU Introduce Pixel-SAIL—A Single Transformer Model for Pixel-Level Understanding That Outperforms 7B MLLMs
    MLLMs have recently advanced in handling fine-grained, pixel-level visual understanding, expanding their applications to tasks such as precise region-based editing and segmentation. Despite their effectiveness, most existing approaches rely heavily on complex architectures composed of separate components such as vision encoders (e.g., CLIP), segmentation networks, and additional fusion or decoding modules. This reliance on modular systems increases complexity and limits scalability, especially when adapting to new tasks. Inspired by unified architectures that jointly learn visual and textual features using a single transformer, recent efforts have explored simpler designs that avoid external components while still enabling strong performance in tasks requiring detailed visual grounding and language interaction.

    Historically, vision-language models have evolved from contrastive learning approaches, such as CLIP and ALIGN, toward large-scale models that address open-ended tasks, including visual question answering and optical character recognition. These models typically fuse vision and language features either by injecting language into visual transformers or by appending segmentation networks to large language models. However, such methods often require intricate engineering and depend on the performance of individual submodules. Recent research has begun to explore encoder-free designs that unify image and text learning within a single transformer, enabling more efficient training and inference. These approaches have also been extended to tasks such as referring expression segmentation and visual prompt understanding, aiming to support region-level reasoning and interaction without multiple specialized components.

    Researchers from ByteDance and WHU present Pixel-SAIL, a single-transformer framework designed for pixel-wise multimodal tasks that does not rely on extra vision encoders.
    It introduces three key innovations: a learnable upsampling module to refine visual features, a visual prompt injection strategy that maps prompts into text tokens, and a vision expert distillation method to enhance mask quality. Pixel-SAIL is trained on a mixture of referring segmentation, VQA, and visual prompt datasets. It outperforms larger models such as GLaMM (7B) and OMG-LLaVA (7B) on five benchmarks, including the newly proposed PerBench, while maintaining a significantly simpler architecture.

    The researchers first design a plain encoder-free MLLM baseline and identify its limitations in segmentation quality and visual prompt understanding. To overcome these, Pixel-SAIL introduces: (1) a learnable upsampling module for high-resolution feature recovery, (2) a visual prompt injection technique enabling early fusion with vision tokens, and (3) a dense feature distillation strategy using expert models like Mask2Former and SAM2. They also introduce PerBench, a new benchmark assessing object captioning, visual prompt understanding, and V-T RES segmentation across 1,500 annotated examples.

    The experiments evaluate Pixel-SAIL on various benchmarks using modified SOLO and EVEv2 architectures, showing its effectiveness in segmentation and visual prompt tasks. Pixel-SAIL significantly outperforms other models, including segmentation specialists, with higher cIoU scores on datasets like RefCOCO and gRefCOCO. Scaling the model from 0.5B to 3B parameters leads to further improvements. Ablation studies show that the visual prompt mechanism, data scaling, and distillation strategy each enhance performance. Visualization analysis shows that Pixel-SAIL's image and mask features are denser and more diverse, resulting in improved segmentation results.
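The "early fusion" behind the visual prompt injection strategy can be sketched in a few lines: image patch tokens, a visual prompt, and text tokens all enter one sequence for a single transformer. The sketch below is an illustration of that general pattern only; every shape, name, and number here is an assumption for demonstration, not Pixel-SAIL's actual code.

```python
# Toy sketch of early fusion in a single-transformer pipeline.
# All dimensions and names are illustrative assumptions.

import random

D = 8  # tiny embedding width, for illustration only

def rand_token():
    """A fake embedding vector standing in for a learned one."""
    return [random.uniform(-1, 1) for _ in range(D)]

# 1) A 4x4 "image" patchified into 16 patch-embedding tokens.
image_tokens = [rand_token() for _ in range(16)]

# 2) Visual prompt injection: the user highlights patches 5 and 6, so a
#    prompt embedding is added to those patch tokens (early fusion).
prompt_patches = {5, 6}
prompt_embedding = rand_token()
fused_tokens = [
    [x + p for x, p in zip(tok, prompt_embedding)] if i in prompt_patches else tok
    for i, tok in enumerate(image_tokens)
]

# 3) Concatenate with text-token embeddings: one sequence, one transformer.
text_tokens = [rand_token() for _ in range(4)]  # e.g. "segment this object"
sequence = fused_tokens + text_tokens

print(len(sequence), len(sequence[0]))  # 20 tokens, each of width 8
```

The point of the pattern is that no separate vision encoder or fusion module is needed: the prompt lives in the same token space as everything else, and the transformer attends across all of it.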
    In conclusion, Pixel-SAIL, a simplified MLLM for pixel-grounded tasks, achieves strong performance without requiring additional components such as vision encoders or segmentation models. The model incorporates three key innovations: a learnable upsampling module, a visual prompt encoding strategy, and vision expert distillation for enhanced feature extraction. Pixel-SAIL is evaluated on four referring segmentation benchmarks and a new, challenging benchmark, PerBench, which includes tasks such as object description, visual prompt-based Q&A, and referring segmentation. The results show that Pixel-SAIL performs as well as or better than existing models, with a simpler architecture.

    Check out the Paper.

    Sana Hassan, a consulting intern at Marktechpost and dual-degree student at IIT Madras, is passionate about applying technology and AI to address real-world challenges. With a keen interest in solving practical problems, he brings a fresh perspective to the intersection of AI and real-life solutions.
  • WWW.IGN.COM
    The Wheel of Time Season 3 Review
    Warning: This review contains full spoilers for season 3 of The Wheel of Time.

    Season 3 of The Wheel of Time is by far its best yet, building on the rich worldbuilding of the first two seasons while largely avoiding their melodrama and pacing issues. Skillfully fusing aspects of three of Robert Jordan's books – The Dragon Reborn, The Shadow Rising, and The Fires of Heaven – the show condenses plots to keep the action moving while giving some of the characters more depth. It makes some bold adaptation choices, but overall the series makes the sprawling fantasy epic look great while keeping to its themes and ambition.

    The ta'veren of Emond's Field started the series whiny, resentful, or wide-eyed about the change in fortunes brought by the Aes Sedai Moiraine Damodred (Rosamund Pike), but they've grown significantly as characters as they've gained more power. The most improved is Rand al'Thor (Josha Stradowski), who has embraced his destiny as the Dragon Reborn by heading to the Aiel Wastes to raise an army. Stradowski brings an intensity to the role that simmers in quieter moments and fully ignites when he ferociously makes his case as the chosen one in the finale.

    A major theme of The Wheel of Time is how change cannot be stopped or controlled, and it's a lesson many of the characters have to learn the hard way. Egwene al'Vere (Madeleine Madden) tries to cope with her traumatic experience at the hands of the Seanchan by resuming her relationship with Rand. But the childhood sweethearts can't confront their demons until they make mutual respect, not love, the basis of their connection.
    It's a richer portrayal of how those characters evolve together and apart than the books presented, enhanced by Madden's ability to match Stradowski's intensity as she blends deep strength with emotional vulnerability.

    Meanwhile, Perrin Aybara (Marcus Rutherford) tries to escape the coming war by returning to the Two Rivers and finds he must instead embrace his potential as a leader. The character often feels tangential in Jordan's books, but the show's writers do right by him, strongly weaving his desire to avoid violence into the greater arc of the season. Perrin's plot is strengthened by the addition of Faile Bashere (Isabella Bucceri), who has more personality in the show than on the page, where Jordan's women often feel too similar. Bucceri brings plenty of charm to the role with her wry smile and teasing of Perrin, a ferocious counterpart to the more reserved blacksmith.

    Aes Sedai royal advisor Elaida do Avriny a'Roihan (Shohreh Aghdashloo) makes an even bigger impact as a series newcomer: her ability to drip contempt with every sentence especially shines opposite her primary rival, Siuan Sanche (Sophie Okonedo), who projects authority with a mix of restraint and folksy wisdom.

    The efforts to build up the Forsaken Moghedien (Laia Costa) as one of the show's biggest threats are less successful. Lanfear (Natasha O'Keeffe) continues to be an excellent villain, attempting to control Rand through a mix of Machiavellian scheming and vamping. Compared to her, Moghedien is just an overtly weird psychopath. At least the smug regality of the Forsaken Rahvin (Nuno Lopes) feels like a better fit for the catty dynamic the show has established among the Chosen.

    Showrunner Rafe Judkins has done a great job not only trimming excess characters and storylines but building on Jordan's work as well.
    The attack on the White Tower by Liandrin Guirale (Kate Fleetwood) and the Black Ajah is only referenced in a few lines of the book The Dragon Reborn, but 34 years later, writer Justine Juel Gillmer and director Ciaran Donnelly have turned it into an epic battle that kicks off season 3 with a bang. The sword-and-sorcery combat looks great in the premiere, skillfully blending CGI and choreography, and that strength continues through the season's multiple big battles. Monsters created with practical effects for closeups can look much worse in motion, but overall the production value is high. Costuming is an especially strong point, particularly the ornate outfits favored by the Aes Sedai.

    Unlike Prime Video's other fantasy epic, The Lord of the Rings: The Rings of Power, The Wheel of Time never relies fully on spectacle to carry an episode. Pathos is as likely to drive the conflicts as magic is, which makes any moment of sacrifice or loss all the more excruciating. Jordan killed relatively few of his characters, but the show's writers have a significantly higher body count, which keeps things tense even for book readers who think they know where this story is going.

    That tension is broken by some great comic relief. After the first two seasons were tarnished by the departure of Barney Harris, some of the shine has been restored by his replacement, Dónal Finn. His take on Mat Cauthon is a charming rake whose definition of "laying low" involves showing off a powerful magical artifact in a bar. And his tendency to turn everything into a joke makes it all the more meaningful when he asks for help.
    Princess Elayne Trakand (Ceara Coverney) is the season's other major source of laughs: she steals the show in episode 6 with a hilariously bawdy musical number.

    It all amounts to a season of The Wheel of Time that leans into the concepts of reincarnation and cycles of conflict at the core of Jordan's marriage of Western fantasy and Eastern mythology. Nowhere are those themes stronger than in the terrific "The Road to the Spear," which allows Rand to live through the long shared history of the Aiel and the Tinkers and come to an understanding of just how much was lost in the Breaking of the World. It's another showcase for Stradowski's acting talent, the makeup and costume team, and the writers. The keen understanding of Jordan's vision – and the ability to bring it to the screen in a satisfying way – in this episode, and season 3 as a whole, makes me eager to see where things go next.
  • WWW.COUNTRYLIVING.COM
    This Cute English Cottage is a Beatrix Potter Book Brought to Life
    Twenty years ago, Mimi Pickard was looking to escape south London. "My husband Ed and I have three children, and we liked the idea of our kids going to school in the countryside," she says. So it was kismet when some friends mentioned they might be interested in selling their home near Guildford in Surrey (about 30 miles southwest of London).

    The three-acre property, located on the outskirts of a quaint village complete with a charming neighborhood pub, boasted everything Mimi was looking for: a remote location, bucolic views, and a beautiful garden. The only thing that gave her pause was the late-1920s farmhouse. "Overall, it was well built with a good foundation, and the house had a lovely, happy atmosphere, but I didn't love it," she says. "It just took a few years of renovations to make the house itself our own."

    "In our 1920s home, the windows are drafty and rattle in the wind. You get used to it though."

    That process involved knocking down a lot of walls to let in more natural light, rearranging rooms for better flow, and maximizing the views in all the rooms. The unobstructed views of the garden and neighboring fields also inspired a shift in Mimi's tastes: through the years, she found herself replacing her conservative neutrals with bright colors.

    Years of working with nature proved fruitful in a particularly unexpected way. After her kids were grown, Mimi decided to combine her love of textiles and the outdoors into her very own business: Mimi Pickard English Textiles. Her English-countryside-inspired collection of wallpapers and fabrics features an array of posies, birds, and trees, as well as stripes and geometric designs.

    Rachel Whiting
    The conservatory's ceiling is covered in painted wood blinds that can be opened and shut. The brushed linen sofa is piled with pillows in both "Naked Angelica" from Mimi's fabric line and "Oakleaves" from Peggy Angus' Blithfield collection.
    While her wares are now sold in cities throughout England, Australia, and the U.S., Mimi has no plans to leave the place that made it all possible. "I love how bringing in my own designs has even given this old house a new lease on life," she says. Besides, there's no way she could get Ed to wander too far from Three Horseshoes, his favorite nearby pub. Says Mimi, "They serve a delicious Sunday brunch, and that's very important for us Brits!"

    Tour more of Mimi's house below:

    ENTRY
    Rachel Whiting
    The entry flows seamlessly into a living area thanks to sisal rugs and walls covered in "Small Damask" by Peggy Angus for Blithfield.

    KITCHEN
    Rachel Whiting
    The eat-in kitchen is a contrast of creamy paints and black granite countertops with a splash of pink via window treatments in "Charlie Stripe" by Mimi Pickard.

    MUDROOM
    Rachel Whiting
    What was once the "boot room" is now a downstairs cloakroom. Mimi added tongue-and-groove wainscoting and "Angelica" wallpaper from her collection.

    BEDROOM & BATH
    Rachel Whiting
    The same floral wallpaper ("Hatley" by Cabbages & Roses) appears in both the bedroom and adjacent bath. "The matching wallpaper makes the two rooms flow better. My mum has always done the same thing, so I copied her," Mimi says. The chair in the bathroom, which belonged to Mimi's granny, is reupholstered in a Jane Churchill plaid. The bed is made with a variety of crisp linens from The White Company and pillows covered in an array of antique fabrics. An old wicker side table was freshened up with a coat of Farrow & Ball's Slipper Satin.

    EXTERIOR
    Rachel Whiting
    Life in the English countryside requires a dog—in this case, English Springer Spaniel Lola—and a garden. "When it came to gardening, I was not a natural. When we first moved here, I had to lean very heavily on my mother, and she really helped me."
  • 9TO5MAC.COM
    Report: How Tim Cook helped Apple avoid Trump’s tariffs
    Over the weekend, the Trump administration announced a set of exemptions for its tariffs imposed on products imported into the United States from China. The exemptions cover product categories like smartphones and laptops, giving Apple a huge reprieve for the iPhone, Mac, and more. In comments earlier this week, Trump bragged about how he "helped" Apple with these exemptions.

    A new report from The Washington Post dives into the inner workings of the Trump administration, particularly as they relate to Apple CEO Tim Cook. According to the report, Cook spoke to Commerce Secretary Howard Lutnick last week "about the potential impact of the tariffs on iPhone prices." In addition to Lutnick, he also spoke with "other senior officials in the White House." Perhaps most importantly, Cook "refrained from publicly criticizing the president or his policies on national television."

    Wilbur Ross, commerce secretary during Trump's first term, said it's unsurprising that the current administration has been receptive to Cook's suggestions:

    "Tim has a very good relationship with the president and rightly so. He has been playing a very careful role in that he obviously has a huge dependency on China but is also hugely important to the U.S. In general, he has a lot of respect because he's not a public whiner, he's not a crybaby, but comes with the real voice of reality. It's no surprise to me that his suggestions are being well received."

    Here's another tidbit on Trump and Cook's relationship from today's report:

    Cook joined other tech executives in personally giving more than $1 million to Trump's inaugural fund, and people close to the president say he respects the Apple chief. When venture capitalist Marc Andreessen first met Trump at his club in Bedminster, New Jersey, last year, the president-elect asked him what he thought of Cook, according to a separate person familiar with the dinner, who also spoke on the condition of anonymity to describe private talks.
    Andreessen responded that he was impressed by Cook's leadership of the iPhone maker. Trump agreed, and told Andreessen that he appreciated how Cook met with him directly with no intermediaries, which has not been previously reported.

    But while Apple has won a reprieve from tariffs, that peace of mind might be temporary. In an interview on Sunday, Lutnick said the exemption from the reciprocal tariff isn't a "permanent sort of exemption" and that a broader plan will be announced "in a month or two."
  • FUTURISM.COM
    Fyre Fest 2 Is Already Crumbling Into Embarrassing Chaos
    Remember Fyre Fest, the 2017 music festival that crashed and burned so epically that it became a metaphor for failed ventures?

    After first teasing a second Fyre Festival in April 2023, convicted fraudster and organizer Billy McFarland announced that Fyre Festival 2 would kick off in Playa del Carmen, Mexico, on May 30 of this year. Given McFarland's acumen as a festival organizer, you'll surely be shocked to learn that the event has now officially been "postponed."

    "The event has been postponed and a new date will be announced," a message to ticket holders reads, as quoted by ABC News. "We have issued you a refund. Once the new date is announced, at that time, you can repurchase if it works for your schedule."

    Tickets went on sale in February, starting at a whopping $1,400. "Experience unforgettable performances, immersive experiences, and an atmosphere that redefines creativity and culture," the festival's website reads. Some "Phoenix"-level tickets went as high as $25,000. However, what all that cash would get you remains bafflingly unclear. "The FYRE Experiences will be released in a number of experiential drops leading up to the festival, of which several will be included in Phoenix packages," the website reads, even though the event was scheduled for just over a month from now.

    While an exact reason for the postponement has yet to be revealed, it's likely the event just didn't sell many of its outrageously expensive tickets. Plus, it was probably its own disaster waiting to happen. Look no further than the inaugural Fyre Fest, which was such a colossal fiasco that McFarland ended up being convicted of wire fraud in connection with the event. He pleaded guilty to two counts in March 2018 and was sentenced to six years in federal prison.
He was released four years later. The festival was a catastrophe of epochal proportions, from lackluster security and woefully inadequate food and shelter to furious artists who'd been scheduled to perform at the event.

Plenty of questions remain about the since-postponed sequel. As The Guardian reports, officials from the chain of islands where the festival was meant to take place had no idea it was happening. "For us, this is an event that does not exist," Isla Mujeres tourism director Edgar Gasca told the newspaper. Bernardo Cueto, tourism secretary of the state of Quintana Roo, also told ABC that he wasn't informed about any such event.

"All media reports suggesting our team has not been working with the government of [Playa del Carmen, Mexico] are simply inaccurate and based on misinformation," the festival's official Instagram account fired back last week. "FYRE has operated as a good partner with PDC government and has followed the proper processes and procedures to lawfully host an event."

Meanwhile, schadenfreude about festivalgoers being refunded over a canceled festival was widespread on social media. "Everyone who bought tickets to this also bought $HawkTuah," one user tweeted, referencing a shady crypto venture backed by influencer Haliey "Hawk Tuah" Welch. "There'll be no processed cheese sandwiches then," another user wrote. "I'll tell the influencers."
  • THEHACKERNEWS.COM
    Artificial Intelligence – What's all the fuss?
Talking about AI: Definitions

Artificial Intelligence (AI) — AI refers to the simulation of human intelligence in machines, enabling them to perform tasks that typically require human intelligence, such as decision-making and problem-solving. AI is the broadest concept in this field, encompassing various technologies and methodologies, including Machine Learning (ML) and Deep Learning.

Machine Learning (ML) — ML is a subset of AI that focuses on developing algorithms and statistical models that allow machines to learn from data and make predictions or decisions based on it. ML is a specific approach within AI, emphasizing data-driven learning and improvement over time.

Deep Learning (DL) — Deep Learning is a specialized subset of ML that uses neural networks with multiple layers to analyze and interpret complex data patterns. This advanced form of ML is particularly effective for tasks such as image and speech recognition, making it a crucial component of many AI applications.

Large Language Models (LLM) — LLMs are a type of AI model designed to understand and generate human-like text by being trained on extensive text datasets. These models are a specific application of Deep Learning, focused on natural language processing tasks, and are integral to many modern AI-driven language applications.

Generative AI (GenAI) — GenAI refers to AI systems capable of creating new content, such as text, images, or music, based on the data they have been trained on. This technology often leverages LLMs and other Deep Learning techniques to produce original and creative outputs, showcasing the advanced capabilities of AI in content generation.

Overview: AI for Good and Bad

Almost daily now we watch the hallowed milestone of the "Turing Test" slip farther and farther into an almost naïve irrelevance, as computer interfaces have evolved from being comparable to human language, to similar, to indistinguishable, to arguably superior [1].
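To make the AI-versus-ML distinction above concrete, here is a toy sketch, entirely invented for illustration and not taken from the report: a hand-written rule stands in for classic symbolic AI, while a tiny training loop that picks a threshold from labeled examples stands in for data-driven ML.

```python
def rule_based_spam_check(subject: str) -> bool:
    """Classic symbolic AI: a human encodes the decision logic directly."""
    return "free money" in subject.lower()

def learn_threshold(samples: list[tuple[int, bool]]) -> int:
    """Toy ML: learn a link-count threshold that best separates
    labeled (num_links, is_spam) examples, instead of hand-coding it."""
    best_t, best_acc = 0, -1.0
    for t in range(0, 11):
        correct = sum((links >= t) == is_spam for links, is_spam in samples)
        acc = correct / len(samples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Invented training data: (number of links in the mail, labeled spam?)
training = [(0, False), (1, False), (2, False), (5, True), (7, True), (9, True)]
threshold = learn_threshold(training)
print(rule_based_spam_check("FREE MONEY inside"))  # True
print(threshold)  # 3
```

The point of the toy is only the shift in where the knowledge lives: in the first function a human wrote the rule; in the second, the rule (the threshold) was extracted from data.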
The development of large language models (LLMs) began with natural language processing (NLP) advancements in the early 2000s, but the major breakthrough came with the 2017 paper by Vaswani et al., "Attention Is All You Need," which introduced the transformer architecture. This allowed for training larger models on vast datasets, greatly improving language understanding and generation. Like any technology, LLMs are neutral and can be used by both attackers and defenders. The key question is: which side will benefit more, or more quickly? Let's dive into that question in a bit more detail. This is but an excerpt of our coverage in the Security Navigator 2025, but it covers some of the main points that should be relevant to everyone who works in a security or technology context. If you want to read more on 'Prompt Injection' techniques or how AI is productively used in security technology, I invite you to get the full report!

AI in defense operations

- May improve general office productivity and communication
- May improve search, research and Open-Source Intelligence
- May enable efficient international and cross-cultural communications
- May assist with collation and summarization of diverse, unstructured text datasets
- May assist with documentation of security intelligence and event information
- May assist with analyzing potentially malicious emails and files
- May assist with identification of fraudulent, fake or deceptive text, image or video content
- May assist with security testing functions like reconnaissance and vulnerability discovery

AI in one form or another has long been used in a variety of security technologies. By way of example:

Intrusion Detection Systems (IDS) and Threat Detection. Security vendor Darktrace employs ML to autonomously detect and respond to threats in real time, leveraging behavioral analysis and ML algorithms trained on historical data to flag suspicious deviations from normal activity.

Phishing Detection and Prevention.
ML models are used in products like Proofpoint and Microsoft Defender, which identify and block phishing attempts by applying ML algorithms to email content, metadata, and user behavior.

Endpoint Detection and Response (EDR). EDR offerings like CrowdStrike Falcon leverage ML to identify unusual behavior and to detect and mitigate cyber threats on endpoints.

Microsoft Copilot for Security. Microsoft's AI-powered solution is designed to assist security professionals by streamlining threat detection, incident response, and risk management, leveraging generative AI including OpenAI's GPT models.

AI in offensive operations

- May improve general office productivity and communication for bad actors as well
- May improve search, research and Open-Source Intelligence
- May enable efficient international and cross-cultural communications
- May assist with collation and summarization of diverse, unstructured text datasets (like social media profiles for phishing/spear-phishing attacks)
- May assist with attack processes like reconnaissance and vulnerability discovery
- May assist with the creation of believable text for cyber-attack methods like phishing, waterholing and malvertising
- Can assist with the creation of fraudulent, fake or deceptive text, image or video content
- May facilitate accidental data leakage or unauthorized data access
- May present a new, vulnerable and attractive attack surface

Real-world examples of AI in offensive operations have been relatively rare. Notable instances include MIT's Automated Exploit Generation (AEG) [2] and IBM's DeepLocker [3], which demonstrated AI-powered malware. These remain proofs of concept for now. In 2019, our research team presented two AI-based attacks using Topic Modelling [4], showing AI's offensive potential for network mapping and email classification.
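The idea behind the ML-based phishing filters mentioned above can be sketched in a few lines. This is a hedged illustration only: the naive-Bayes-style scoring and the tiny training sets are invented for the example and bear no relation to how Proofpoint or Microsoft Defender actually work.

```python
from collections import Counter
import math

def train(phish: list[str], legit: list[str]):
    """Count word frequencies per class to build simple likelihood tables."""
    def counts(docs):
        c = Counter()
        for d in docs:
            c.update(d.lower().split())
        return c
    return counts(phish), counts(legit)

def phish_score(email: str, phish_counts, legit_counts) -> float:
    """Naive-Bayes-style log ratio; positive means 'looks like phishing'."""
    score = 0.0
    p_total = sum(phish_counts.values()) + 1
    l_total = sum(legit_counts.values()) + 1
    for w in email.lower().split():
        p = (phish_counts[w] + 1) / p_total   # add-one smoothing
        l = (legit_counts[w] + 1) / l_total
        score += math.log(p / l)
    return score

# Invented toy training data
phish_train = ["verify your password now", "urgent account suspended click here"]
legit_train = ["meeting notes attached", "lunch on friday?"]
pc, lc = train(phish_train, legit_train)
print(phish_score("urgent: verify your account password", pc, lc) > 0)  # True
```

Production filters add many more signals (sender reputation, URL analysis, user behavior), but the core move is the same: score content against patterns learned from labeled examples.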
While we haven't seen widespread use of such capabilities, in October 2024 our CERT reported [5] that the Rhadamanthys Malware-as-a-Service (MaaS) incorporated AI to perform Optical Character Recognition (OCR) on images containing sensitive information, like passwords, marking the closest real-world instance of AI-driven offensive capabilities.

Security Navigator 2025 is Here - Download Now

The newly released Security Navigator 2025 offers critical insights into current digital threats, documenting 135,225 incidents and 20,706 confirmed breaches. More than just a report, it serves as a guide to navigating a safer digital landscape. What's Inside?

📈 In-Depth Analysis: Statistics from CyberSOC, vulnerability scanning, pentesting, CERT, Cy-X and ransomware observations from Dark Net surveillance.
🔮 Future-Ready: Equip yourself with security predictions and stories from the field.
👁️ Security deep-dives: Get briefed on emerging trends related to hacktivist activities and LLMs/Generative AI.

Stay one step ahead in cybersecurity. Your essential guide awaits! 🔗 Get Your Copy Now

LLMs are increasingly being used offensively, especially in scams. A prominent example is the UK engineering group Arup [6], which reportedly lost $25 million to fraudsters who used a digitally cloned voice of a senior manager to order financial transfers during a video conference.

Does AI drive threats?

For systematically considering the potential risk from LLM technologies, we examine four perspectives: the risk of not adopting LLMs, existing AI threats, new threats specific to LLMs, and broader risks as LLMs are integrated into business and society.
These aspects are visualized in the graphic below:

Branch 1: The Risk of Non-adoption

Many clients we talk to feel pressure to adopt LLMs, with CISOs particularly concerned about the "risk of non-adoption," driven by three main factors:

- Efficiency loss: Leaders believe LLMs like Copilot or ChatGPT will boost worker efficiency and fear falling behind competitors who adopt them.
- Opportunity loss: LLMs are seen as uncovering new business opportunities, products, or market channels, and failing to leverage them risks losing a competitive edge.
- Marketability loss: With AI dominating discussions, businesses worry that not showcasing AI in their offerings will leave them irrelevant in the market.

These concerns are valid, but the assumptions are often untested. For example, a July 2024 survey by the Upwork Research Institute [7] revealed that "96% of C-suite leaders expect AI tools to boost productivity." However, the report points out, "Nearly half (47%) of employees using AI say they have no idea how to achieve the productivity gains their employers expect, and 77% say these tools have actually decreased their productivity and added to their workload."

The marketing value of being "powered by AI" is also still debated. A recent FTC report notes that consumers have voiced concerns about AI's entire lifecycle, particularly regarding limited appeal pathways for AI-based product decisions. Businesses must consider the true costs of adopting LLMs, including direct expenses like licensing, implementation, testing, and training. There's also an opportunity cost, as resources allocated to LLM adoption could have been invested elsewhere. Security and privacy risks need to be considered too, alongside broader economic externalities—such as the massive resource consumption of LLM training, which requires significant power and water usage. According to one article [8], Microsoft's AI data centers may consume more power than all of India within the next six years.
Apparently, "They will be cooled by millions upon millions of gallons of water." Beyond resource strain, there are ethical concerns, as creative works are often used to train models without creators' consent, affecting artists, writers, and academics. Additionally, AI concentration among a few owners could impact business, society, and geopolitics, as these systems amass wealth, data, and control. While LLMs promise increased productivity, businesses risk sacrificing direction, vision, and autonomy for convenience. In weighing the risk of non-adoption, the potential benefits must be carefully balanced against the direct, indirect, and external costs, including security. Without a clear understanding of the value LLMs may bring, businesses might find the risks and costs outweigh the rewards.

Branch 2: Existing Threats From AI

In mid-October 2024, our "World Watch" security intelligence capability published an advisory that summarized the use of AI by offensive actors as follows: "The adoption of AI by APTs remains likely in early stages but it is only a matter of time before it becomes more widespread." One of the most common ways state-aligned and state-sponsored threat groups have been adopting AI in their kill chains is by using generative AI chatbots such as ChatGPT for malicious purposes. We assess that these usages differ depending on each group's own capabilities and interests. North Korean threat actors have allegedly been leveraging LLMs to better understand publicly reported vulnerabilities [9], for basic scripting tasks, and for target reconnaissance (including dedicated content creation used in social engineering). Iranian groups were seen generating phishing emails and using LLMs for web scraping [10]. Chinese groups such as Charcoal Typhoon abused LLMs for advanced commands representative of post-compromise behavior [10].
On October 9, OpenAI disclosed [11] that since the beginning of the year it had disrupted over 20 ChatGPT abuses aimed at debugging and developing malware, spreading misinformation, evading detection, and launching spear-phishing attacks. These malicious usages were attributed to Chinese (SweetSpecter) and Iranian threat actors (CyberAv3ngers and Storm-0817). The Chinese cluster SweetSpecter (tracked as TGR-STA-0043 by Palo Alto Networks) even targeted OpenAI employees with spear-phishing attacks. Recently, state-sponsored threat groups have also been observed carrying out disinformation and influence campaigns targeting, for instance, the US presidential election. Several campaigns attributed to Iranian, Russian and Chinese threat actors leveraged AI tools to erode public trust in the US democratic system or discredit a candidate. In its Digital Defense Report 2024, Microsoft confirmed this trend, adding that these threat actors were leveraging AI to create fake text, images and videos.

Cybercrime

In addition to leveraging legitimate chatbots, cybercriminals have also created "dark LLMs" (models trained specifically for fraudulent purposes) such as FraudGPT, WormGPT and DarkGemini. These tools are used to automate and enhance phishing campaigns, help low-skilled developers create malware, and generate scam-related content. They are typically advertised on the dark web and Telegram, with an emphasis on the model's criminal function. Some financially motivated threat groups are also adding AI to their malware strains. A recent World Watch advisory on the new version of the Rhadamanthys infostealer describes new features relying on AI to analyze images that may contain important information, such as passwords or recovery phrases. In our continuous monitoring of cybercriminal forums and marketplaces we have observed a clear increase in malicious services supporting social-engineering activities, including:

- Deepfakes, notably for sextortion and romance schemes.
This technology is becoming more convincing and less expensive over time.

- AI-powered phishing and BEC tools designed to facilitate the creation of phishing pages, social media content and email copy.
- AI-powered voice phishing. In a report published on July 23, Google revealed [12] how AI-powered vishing (or voice spoofing), facilitated by commodified voice synthesizers, is an emerging threat.

Vulnerability exploitation

AI still faces limits when used to write exploit code based on a CVE description. If the technology improves and becomes more readily available, it will likely be of interest to both cybercriminals and state-backed actors. An LLM capable of autonomously finding a critical vulnerability, writing and testing exploit code, and then using it against targets could deeply impact the threat landscape. Exploit development skills could thus become accessible to anyone with access to an advanced AI model. Fortunately, the source code of most products is not readily available for training such models, but open-source software may present a useful test case.

Branch 3: New Threats from LLMs

The new threats emerging from widespread LLM adoption will depend on how and where the technology is used. In this report, we focus strictly on LLMs and must consider whether they are in the hands of attackers, businesses, or society at large. For businesses, are they consumers of LLM services or providers? If a provider, are they building their own models, sourcing models, or procuring full capabilities from others? Each scenario introduces different threats, requiring tailored controls to mitigate the risks specific to that use case.

Threats to Consumers

A Consumer uses GenAI products and services from external providers, while a Provider creates or enhances consumer-facing services that leverage LLMs, whether by developing in-house models or using third-party solutions. Many businesses will likely adopt both roles over time.
It's important to recognize that employees are almost certainly already using public or local GenAI for work and personal purposes, posing additional challenges for enterprises. For those consuming external LLM services, whether businesses or individual employees, the primary risks revolve around data security, with additional compliance and legal concerns to consider. The main data-related risks include:

- Data leaks: Workers may unintentionally disclose confidential data to LLM systems like ChatGPT, either directly or through the nature of their queries.
- Hallucination: GenAI can produce inaccurate, misleading, or inappropriate content that employees might incorporate into their work, potentially creating legal liability. When generating code, there's a risk it could be buggy or insecure [13].
- Intellectual Property Rights: As businesses use data to train LLMs and incorporate outputs into their intellectual property, unresolved questions about ownership could expose them to liability for rights violations.

The outputs of GenAI only enhance productivity if they are accurate, appropriate, and lawful. Unregulated AI-generated outputs could introduce misinformation, liability, or legal risks to the business.

Threats to Providers

An entirely different set of threats emerges when businesses choose to integrate LLMs into their own systems or processes. These can be broadly categorized as follows:

Model-Related Threats

A trained or tuned LLM has immense value to its developer and is thus subject to threats to its Confidentiality, Integrity and Availability. The threats to proprietary models include:

- Theft of the model.
- Adversarial "poisoning" to negatively impact the accuracy of the model.
- Destruction or disruption of the model.
- Legal liability that may emerge from the model producing incorrect, misrepresentative, misleading, inappropriate or unlawful content.
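As a minimal illustration of one mitigation for the data-leak risk just described, a consumer-side guard can scan outbound prompts for secret-shaped strings before they ever reach an external LLM. The patterns and helper names below are invented for this sketch; real data-loss-prevention tooling is far more sophisticated.

```python
import re

# Illustrative patterns only; a real DLP policy would cover many more formats.
SECRET_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{16,}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of secret patterns found in the prompt."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(prompt)]

def safe_to_send(prompt: str) -> bool:
    """Gate an outbound prompt; block it if it appears to carry a secret."""
    findings = check_prompt(prompt)
    if findings:
        print(f"Blocked: prompt appears to contain {', '.join(findings)}")
        return False
    return True

print(safe_to_send("Summarize this meeting transcript for me"))  # True
print(safe_to_send("Debug this: client = Api('sk-abcdef1234567890XYZ')"))  # warning, then False
```

Pattern matching only catches secrets with recognizable shapes; it does nothing against confidential information expressed in plain prose, which is why policy and training remain necessary alongside technical controls.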
We assess, however, that the most meaningful new threats will emerge from the increased attack surface when organizations implement GenAI within their technical environments.

GenAI as Attack Surface

GenAI systems are complex new technologies consisting of millions of lines of code, expanding the attack surface and introducing new vulnerabilities. As general GenAI tools like ChatGPT and Microsoft Copilot become widely available, they will no longer offer a significant competitive advantage by themselves. The true power of LLM technology lies in integrating it with a business's proprietary data or systems to improve customer services and internal processes. One key method is through interactive chat interfaces powered by GenAI, where users interact with a chatbot that generates coherent, context-aware responses. To enhance this, the chat interface must leverage capabilities like Retrieval-Augmented Generation (RAG) and APIs: GenAI processes user queries, RAG retrieves relevant information from proprietary knowledge bases, and APIs connect the GenAI to backend systems. This combination allows the chatbot to provide contextually accurate outputs while interacting with complex backend systems.

However, exposing GenAI as the security boundary between users and a corporation's backend systems, often directly to the Internet, introduces a significant new attack surface. Like the graphical web application interfaces that emerged in the 2000s to offer easy, intuitive access to business clients, such chat interfaces are likely to transform digital channels. But unlike graphical web interfaces, GenAI's non-deterministic nature means that even its developers may not fully understand its internal logic, creating enormous opportunity for vulnerabilities and exploitation. Attackers are already developing tools to exploit this opacity, leading to potential security challenges similar to those seen with early web applications, which still plague security defenders today.
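The RAG-plus-chat-interface pattern described above can be sketched in miniature. This is an illustrative toy, not a real implementation: word overlap stands in for the embedding similarity a production system would use, the knowledge-base contents are invented, and the actual call to a model is omitted.

```python
def retrieve(query: str, knowledge_base: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by shared-word count with the query (a crude stand-in
    for vector similarity search over a proprietary knowledge base)."""
    q_words = set(query.lower().split())
    ranked = sorted(knowledge_base,
                    key=lambda doc: len(q_words & set(doc.lower().split())),
                    reverse=True)
    return ranked[:top_k]

def build_prompt(query: str, knowledge_base: list[str]) -> str:
    """Assemble the prompt the LLM would receive: retrieved context + query."""
    context = "\n".join(retrieve(query, knowledge_base))
    # Note: the untrusted user query is concatenated straight into the
    # final prompt, which is exactly the seam prompt-injection attacks target.
    return (f"Answer using only the context below.\n"
            f"Context:\n{context}\n\n"
            f"Question: {query}")

# Invented proprietary knowledge base
kb = [
    "Refunds are processed within 14 days of a return request.",
    "Support is available on weekdays from 9am to 5pm CET.",
]
print(build_prompt("how long do refunds take?", kb))
```

Even this toy shows why the architecture widens the attack surface: user input, retrieved internal data, and model instructions all share one text channel, and the model alone decides how to weigh them.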
Tricking LLMs out of their 'guardrails'

The Open Web Application Security Project (OWASP) has identified "Prompt Injection" as the most critical vulnerability in GenAI applications. This attack manipulates language models by embedding specific instructions within user inputs to trigger unintended or harmful responses, potentially revealing confidential information or bypassing safeguards. Attackers craft inputs that override the model's standard behavior. Tools and resources for discovering and exploiting prompt injection are quickly emerging, similar to the early days of web application hacking.

We expect that chat interface hacking will remain a significant cybersecurity issue for years, given the complexity of LLMs and the digital infrastructure needed to connect chat interfaces with proprietary systems. As these architectures grow, traditional security practices—such as secure development, architecture, data security, and Identity & Access Management—will become even more crucial to ensure proper authorization, access control, and privilege management in this evolving landscape.

When the "NSFW" AI chatbot site Muah.ai was breached in October 2024, the hacker described the platform as "a handful of open-source projects duct-taped together." According to reports, "it was no trouble at all to find a vulnerability that provided access to the platform's database." We predict that such reports will become commonplace in the next few years.

Conclusion: more of the same is not a new dimension

Like any powerful technology, we naturally fear the impact LLMs could have in the hands of our adversaries. Much attention is paid to the question of how AI might "accelerate the threat." The uncertainty and anxiety that emerge from this apparent change in the threat landscape are of course exploited to argue for greater investment in security, sometimes honestly, but sometimes also duplicitously.
However, while some things are certainly changing, many of the threats being highlighted by alarmists today pre-exist LLM technology and require nothing more of us than to keep consistently doing what we already know to do. For example, all the following threat actions, whilst perhaps enhanced by LLMs, have already been performed with the support of ML and other forms of AI [14] (or indeed, without AI at all):

- Online impersonation
- Cheap, believable phishing mails and sites
- Voice fakes
- Translation
- Predictive password cracking
- Vulnerability discovery
- Technical hacking

The notion that adversaries may execute such activities more often or more easily is a cause for concern, but it does not necessarily require a fundamental shift in our security practices and technologies. LLMs as an attack surface, on the other hand, are vastly underestimated. It is crucial that we learn the lessons of previous technology revolutions (like web applications and APIs) so as not to repeat them by recklessly adopting an untested and somewhat untestable technology at the boundary between open cyberspace and our critical internal assets. Enterprises are well advised to be extremely cautious and diligent in weighing the potential benefits of deploying GenAI as an interface against the risks that such a complex, untested technology will surely introduce. Essentially, we face at least the same access and data-safety issues we already know from the dawn of the cloud age and the subsequent erosion of the classic company perimeter. Despite the ground-breaking innovations we're observing, security "Risk" is still fundamentally the product of Threat, Vulnerability and Impact, and an LLM cannot magically create these if they aren't already there. If those elements are already present, the risk a business has to deal with is largely independent of the existence of AI. This is just an excerpt of the research we did on AI and LLMs.
To read the full story and more detailed advisory, as well as expert stories about how prompt injections manipulate LLMs into working outside their safety guardrails, or how defenders use AI to detect subtle signals of compromise in vast networks: it's all in the Security Navigator 2025. Head over to the download page and get your copy!
  • SCREENCRUSH.COM
    Mikey Madison Reportedly Passes on ‘Star Wars’
After years of stops and starts, the next Star Wars movie will be The Mandalorian & Grogu, the big-screen continuation of the Disney+ series. After that ... well, after that it's still a mystery. But the project that seems to be gaining momentum is the untitled one in development from Deadpool & Wolverine director Shawn Levy that would star Ryan Gosling.

Those are already some big names, but according to Variety, Levy and Lucasfilm were looking to add another to the project. Their sources claim recent Academy Award winner Mikey Madison was "offered a role" in the project but "conversations have since ended with the Anora star passing on the part."

Madison is certainly no stranger to Hollywood franchises. Before her Oscar-winning breakthrough with Anora, she appeared as one of the key characters in the Scream legacyquel of 2022. (She did not return in Scream VI.) She also had a small voice role in the animated version of The Addams Family in 2019.

Nor would Madison be the first big star to say no to Star Wars. Actors ranging from Leonardo DiCaprio to Al Pacino to Gary Oldman to Rooney Mara have all had the chance to take on major roles in Star Wars. They all passed.

Assuming this Star Wars movie makes it to the screen (not necessarily a wise assumption given how Star Wars film development has gone over the past five years), it will be interesting to see what the female lead in the film looks like, who plays her, and to contemplate Madison in that role.

The Mandalorian & Grogu is currently scheduled to open in theaters on May 22, 2026. That's right; there's a new Star Wars movie only about a year away.
  • WWW.TECHNOLOGYREVIEW.COM
    We need targeted policies, not blunt tariffs, to drive “American energy dominance”
President Trump and his appointees have repeatedly stressed the need to establish "American energy dominance." But the White House's profusion of executive orders and aggressive tariffs, along with its determined effort to roll back clean-energy policies, is moving the industry in the wrong direction, creating market chaos and economic uncertainty that make it harder for both legacy players and emerging companies to invest, grow, and compete.

Heat Exchange: MIT Technology Review's guest opinion series, offering expert commentary on legal, political and regulatory issues related to climate change and clean energy. You can read the rest of the pieces here.

The current 90-day pause on rolling out most of the administration's so-called "reciprocal" tariffs presents a critical opportunity. Rather than defaulting to broad, blunt tariffs, the administration should use this window to align trade policy with a focused industrial strategy—one aimed at winning the global race to become a manufacturing powerhouse in next-generation energy technologies. By tightly aligning tariff design with US strengths in R&D and recent government investments in the energy innovation lifecycle, the administration can turn a regressive trade posture into a proactive plan for economic growth and geopolitical advantage.

The president is right to point out that America is blessed with world-leading energy resources. Over the past decade, the country has grown from being a net importer to a net exporter of oil and the world's largest producer of oil and gas. These resources are undeniably crucial to America's ability to reindustrialize and rebuild a resilient domestic industrial base, while also providing strategic leverage abroad. But the world is slowly but surely moving beyond the centuries-old model of extracting and burning fossil fuels, a change driven initially by climate risks but increasingly by economic opportunities.
America will achieve true energy dominance only by evolving beyond being a mere exporter of raw, greenhouse-gas-emitting energy commodities—and becoming the world’s manufacturing and innovation hub for sophisticated, high-value energy technologies. Notably, the nation took a lead role in developing essential early components of the cleantech sector, including solar photovoltaics and electric vehicles. Yet too often, the fruits of that innovation—especially manufacturing jobs and export opportunities—have ended up overseas, particularly in China. China, which is subject to Trump’s steepest tariffs and wasn’t granted any reprieve in the 90-day pause, has become the world’s dominant producer of lithium-ion batteries, EVs, wind turbines, and other key components of the clean-energy transition. Today, the US is again making exciting strides in next-generation technologies, including fusion energy, clean steel, advanced batteries, industrial heat pumps, and thermal energy storage. These advances can transform industrial processes, cut emissions, improve air quality, and maximize the strategic value of our fossil-fuel resources. That means not simply burning them for their energy content, but instead using them as feedstocks for higher-value materials and chemicals that power the modern economy. The US’s leading role in energy innovation didn’t develop by accident. For several decades, legislators on both sides of the political divide supported increasing government investments into energy innovation—from basic research at national labs and universities to applied R&D through ARPA-E and, more recently, to the creation of the Office of Clean Energy Demonstrations, which funds first-of-a-kind technology deployments. These programs have laid the foundation for the technologies we need—not just to meet climate goals, but to achieve global competitiveness. 
Early-stage companies in competitive, global industries like energy do need extra support to help them get to the point where they can stand on their own. This is especially true for cleantech companies whose overseas rivals have much lower labor, land, and environmental compliance costs. That's why, for starters, the White House shouldn't work to eliminate federal investments made in these sectors under the Bipartisan Infrastructure Law and the Inflation Reduction Act, as it's reportedly striving to do as part of the federal budget negotiations. Instead, the administration and its Republican colleagues in Congress should preserve and refine these programs, which have already helped expand America's ability to produce advanced energy products like batteries and EVs. Success should be measured not only in barrels produced or watts generated, but in dollars of goods exported, jobs created, and manufacturing capacity built. The Trump administration should back this industrial strategy with smarter trade policy as well. Steep, sweeping tariffs won't build long-term economic strength. But there are certain instances where reasonable, modern, targeted tariffs can be a useful tool in supporting domestic industries or countering unfair trade practices elsewhere. That's why we've seen leaders of both parties, including Presidents Biden and Obama, apply them in recent years. Such levies can be used to protect domestic industries where we're competing directly with geopolitical rivals like China, and where American companies need breathing room to scale and thrive. These aims can be achieved by imposing tariffs on specific strategic technologies, such as EVs and next-generation batteries. But to be clear, targeted tariffs on a few strategic sectors are starkly different from Trump's tariffs, which now include 145% levies on most Chinese goods, a 10% "universal" tariff on other nations, and 25% fees on steel and aluminum.
Another option is implementing a broader border adjustment policy, like the Foreign Pollution Fee Act recently reintroduced by Senators Cassidy and Graham, which is designed to create a level playing field that would help clean manufacturers in the US compete with heavily polluting businesses overseas.

Just as important, the nation must avoid counterproductive tariffs on critical raw materials like steel, aluminum, and copper, or retaliatory restrictions on critical minerals—all of which are essential inputs for US manufacturing. The nation does not currently produce enough of these materials to meet demand, and it would take years to build up that capacity. Raising input costs through tariffs only slows our ability to keep key industries home or bring them back.

Finally, we must be strategic in how we deploy the country’s greatest asset: our workforce. Americans are among the most educated and capable workers in the world. Their time, talent, and ingenuity shouldn’t be spent assembling low-cost, low-margin consumer goods like toasters. Instead, we should focus on building the cutting-edge industrial technologies that the world is demanding. These are the high-value products that support strong wages, resilient supply chains, and durable global leadership.

The worldwide demand for clean, efficient energy technologies is rising rapidly, and the US cannot afford to be left behind. The energy transition presents not just an environmental imperative but a generational opportunity for American industrial renewal. The Trump administration has a chance to define energy dominance not just in terms of extraction, but in terms of production—of technology, of exports, of jobs, and of strategic influence. Let’s not let that opportunity slip away.

Addison Killean Stark is the chief executive and cofounder of AtmosZero, an industrial steam heat pump startup based in Loveland, Colorado. He was previously a fellow at ARPA-E, the Department of Energy agency that funds research and development of advanced energy technologies.