Stolen iPhones disabled by Apple's anti-theft tech after Los Angeles looting
What just happened? As protests against federal immigration enforcement swept through downtown Los Angeles last week, a wave of looting left several major retailers, including Apple, T-Mobile, and Adidas, counting the cost of smashed windows and stolen goods. Yet for those who made off with iPhones from Apple's flagship store, the thrill of the heist quickly turned into a lesson in high-tech security.
Apple's retail locations are equipped with advanced anti-theft technology that renders display devices useless once they leave the premises. The moment a demonstration iPhone is taken beyond the store's Wi-Fi network, it is instantly disabled by proximity software and a remote "kill switch."
Instead of a functioning smartphone, thieves were met with a stark message on the screen: "Please return to Apple Tower Theatre. This device has been disabled and is being tracked. Local authorities will be alerted." The phone simultaneously sounds an alarm and flashes the warning, ensuring it cannot be resold or activated elsewhere.
This system is not new. During the nationwide unrest of 2020, similar scenes played out as looters discovered that Apple's security measures turned their stolen goods into little more than expensive paperweights.
The technology relies on a combination of location tracking and network monitoring. As soon as a device is separated from the store's secure environment, it is remotely locked, its location is tracked, and law enforcement is notified.
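Apple has not published how this retail demo protection works, so the following Python sketch is purely illustrative: it models only the behavior reported above (a device leaves the store network, gets locked, sounds an alarm, reports its location, and triggers an alert), and every name in it is hypothetical.

```python
# Hypothetical model of the demo-unit behavior described above.
# None of these names refer to real Apple software; this only simulates the
# reported flow: leave the store network -> lock, alarm, track, alert.

from dataclasses import dataclass, field

STORE_SSID = "AppleStoreDemo"  # assumed identifier for the store's Wi-Fi network
RETURN_MESSAGE = ("Please return to Apple Tower Theatre. This device has been "
                  "disabled and is being tracked. Local authorities will be alerted.")


@dataclass
class DemoUnit:
    serial: str
    locked: bool = False
    alarm_on: bool = False
    events: list = field(default_factory=list)

    def check_proximity(self, current_ssid, location):
        """Disable the unit the moment it is no longer on the store network."""
        if self.locked or current_ssid == STORE_SSID:
            return
        self.locked = True
        self.alarm_on = True
        self.events.append(("display", RETURN_MESSAGE))
        self.events.append(("report_location", location))   # stand-in for remote tracking
        self.events.append(("notify", "local authorities"))  # stand-in for the alert step


# Example: a demo iPhone carried out of Wi-Fi range locks itself on the next check.
unit = DemoUnit(serial="DEMO-0001")
unit.check_proximity(current_ssid=None, location=(34.04, -118.25))
print(unit.locked, unit.alarm_on)  # True True
```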
Videos circulating online show stolen iPhones blaring alarms and displaying tracking messages, making them impossible to ignore and virtually worthless on the black market.
According to the Los Angeles Police Department, at least three individuals were arrested in connection with the Apple Store burglary, including one suspect apprehended at the scene and two others detained for looting.
The crackdown on looting comes amid a broader shift in California's approach to retail crime. In response to public outcry over rising thefts, state and local officials have moved away from previously lenient policies. The passage of Proposition 36 has empowered prosecutors to file felony charges against repeat offenders, regardless of the value of stolen goods, and to impose harsher penalties for organized group theft.
Under these new measures, those caught looting face the prospect of significant prison time, a marked departure from the misdemeanor charges that were common under earlier laws.
District attorneys in Southern California have called for even harsher penalties, particularly for crimes committed during states of emergency. Proposals include making looting a felony offense, increasing prison sentences, and ensuring that suspects are not released without judicial review. The goal, officials say, is to deter opportunistic criminals who exploit moments of crisis, whether during protests or natural disasters.
Microsoft trolls Apple's new Liquid Glass UI for looking like Windows Vista
In a nutshell: The OS updates coming to Apple devices later this year will institute the company's first major UI design shift in over a decade, but eagle-eyed observers noticed similarities with an old version of Windows – comparisons that haven't escaped Microsoft's notice. Thankfully, users concerned about Apple's upcoming interface will have options to change its visual presentation.
Some of Microsoft's social media accounts recently poked fun at the upcoming "Liquid Glass" user interface design language Apple unveiled at WWDC this week. Although the Cupertino giant has hailed the update as a major innovation, many immediately began comparing it to Microsoft's nearly two-decade-old Windows Vista UI.
Liquid Glass is Apple's name for the new visual style arriving in iOS 26, iPadOS 26, macOS 26 Tahoe, watchOS 26, and tvOS 26, which will launch this fall. Inspired by the Apple Vision Pro's visionOS, the design language favors rounded edges and transparent backgrounds for inputs and other UI functions.
It is Apple's most significant design change since iOS 7 debuted almost 12 years ago, and the first to establish a unified language across all of the company's devices.
On the left: the Liquid Glass UI's clean, minimalist look. On the right: Liquid Glass looking all kinds of wrong in the current beta.
Apps, wallpapers, and other background content will be visible through app icons, notifications, and menu elements for a glass-like appearance. Apple claims that the effect will improve cohesion across the interface, but beta testers are concerned that text will become less readable.
Others, including Microsoft, mocked the update's resemblance to Windows Vista's glass-like "Aero" aesthetic, which debuted in 2007. That OS also made UI elements partially transparent, but Microsoft eventually phased it out when it began moving toward its current design language.
The official Windows Instagram account recently responded to Apple's presentation by posting a slideshow of Vista screenshots played over a nostalgic Windows boot tune. The Windows Twitter account also shared a picture recalling the Vista-era profile icons.
Other social media users joined in on the fun. Some highlighted the unfortunate placement of the YouTube icon in Apple's Liquid Glass explainer video, which the company altered. Others compared the design language to the unique chassis for Apple's 2000 Power Mac G4 Cube and the main menu for Nintendo's 2012 Wii U game console.
Fortunately, users can customize Liquid Glass by switching between transparent, light, and dark modes. They can also opt for a slightly more opaque presentation with a toggle located under Settings > Accessibility > Display & Text Size > Reduce Transparency.
A shortage of high-voltage power cables could stall the clean energy transition
In a nutshell: As nations set ever more ambitious targets for renewable energy and electrification, the humble high-voltage cable has emerged as a linchpin – and a potential chokepoint – in the race to decarbonize the global economy. A Bloomberg interview with Claes Westerlind, CEO of NKT, a leading cable manufacturer based in Denmark, explains why.
A global surge in demand for high-voltage electricity cables is threatening to stall the clean energy revolution, as the world's ability to build new wind farms, solar plants, and cross-border power links increasingly hinges on a supply chain bottleneck few outside the industry have considered. At the center of this challenge is the complex, capital-intensive process of manufacturing the giant cables that transport electricity across hundreds of miles, both over land and under the sea.
Despite soaring demand, cable manufacturers remain cautious about expanding capacity, raising questions about whether the pace of electrification can keep up with climate ambitions, geopolitical tensions, and the practical realities of industrial investment.
High-voltage cables are the arteries of modern power grids, carrying electrons from remote wind farms or hydroelectric dams to the cities and industries that need them. Unlike the thin wires that run through a home's walls, these cables are engineering marvels – sometimes as thick as a person's torso, armored to withstand the crushing pressure of the ocean floor, and designed to last for decades under extreme electrical and environmental stress.
"If you look at the very high voltage direct current cable, able to carry roughly two gigawatts through two pairs of cables – that means that the equivalent of one nuclear power reactor is flowing through one cable," Westerlind told Bloomberg.
The process of making these cables is as specialized as it is demanding. At the core is a conductor, typically made of copper or aluminum, twisted together like a rope for flexibility and strength. Around this, manufacturers apply multiple layers of insulation in towering vertical factories to ensure the cable remains perfectly round and can safely contain the immense voltages involved. Any impurity in the insulation, even something as small as an eyelash, can cause catastrophic failure, potentially knocking out power to entire cities.
As the world rushes to harness new sources of renewable energy, the demand for high-voltage direct current (HVDC) cables has skyrocketed. HVDC technology, pioneered by NKT in the 1950s, has become the backbone of long-distance power transmission, particularly for offshore wind farms and intercontinental links. In recent years, approximately 80 to 90 percent of new large-scale cable projects have utilized HVDC, reflecting its efficiency in transmitting electricity over vast distances with minimal losses.
But this surge in demand has led to a critical bottleneck. Factories that produce these cables are booked out for years, Westerlind reports, and every project requires custom engineering to match the power needs, geography, and environmental conditions of its route. According to the International Energy Agency, meeting global clean energy goals will require building the equivalent of 80 million kilometers (around 49.7 million miles) of new grid infrastructure by 2040 – essentially doubling what has been constructed over the past century, but in just 15 years.
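To make the scale concrete, here is the build rate those numbers imply, using only the figures cited above:

```python
# Implied grid build rate from the IEA figure cited above.
total_km = 80_000_000        # 80 million km of new grid infrastructure by 2040
years_remaining = 15         # the window the article cites
past_build_years = 100       # the past century, over which a comparable amount was built

required_rate = total_km / years_remaining     # ~5.3 million km per year
historical_rate = total_km / past_build_years  # ~0.8 million km per year
print(f"Required pace: ~{required_rate / 1e6:.1f} million km/year, "
      f"about {required_rate / historical_rate:.1f}x the historical average")
# -> Required pace: ~5.3 million km/year, about 6.7x the historical average
```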
Despite the clear need, cable makers have been slow to add capacity due to reasons that are as much economic and political as technical. Building a new cable factory can cost upwards of a billion euros, and manufacturers are wary of making such investments without long-term commitments from utilities or governments. "For a company like us to do investments in the realm of €1 or 2 billion, it's a massive commitment... but it's also a massive amount of demand that is needed for this investment to actually make financial sense over the next not five years, not 10 years, but over the next 20 to 30 years," Westerlind said. The industry still bears scars from a decade ago, when anticipated demand failed to materialize and expensive new facilities sat underused.
Some governments and transmission system operators are trying to break the logjam by making "anticipatory investments" – committing to buy cable capacity even before specific projects are finalized. This approach, backed by regulators, gives manufacturers the confidence to expand, but it remains the exception rather than the rule.
Meanwhile, the industry's structure itself creates barriers to rapid expansion, according to Westerlind. The expertise, technology, and infrastructure required to make high-voltage cables are concentrated in a handful of companies, creating what analysts describe as a "deep moat" that is difficult for new entrants to cross.
Geopolitical tensions add another layer of complexity. China has built more HVDC lines than any other country, although Western manufacturers, such as NKT, maintain a technical edge in the most advanced cable systems. Still, there is growing concern in Europe and the US about becoming dependent on foreign suppliers for such critical infrastructure, especially in light of recent global conflicts and trade disputes. "Strategic autonomy is very important when it comes to the core parts and the fundamental parts of your society, where the grid backbone is one," Westerlind noted.
The stakes are high. Without a rapid and coordinated push to expand cable manufacturing, the world's clean energy transition could be slowed not by a lack of wind or sun but by a shortage of the cables needed to connect them to the grid. As Westerlind put it, "We all know it has to be done... These are large investments. They are very expensive investments. So also the governments have to have a part in enabling these anticipatory investments, and making it possible for the TSOs to actually carry forward with them."
Resident Evil 9 returns to Raccoon City, coming next February
Something to look forward to: This year's Summer Game Fest presentation ended with a reveal trailer for Resident Evil Requiem, which Capcom confirmed is the ninth mainline title in the long-running survival horror game series. Details on the upcoming title are scant, but it is set to launch on PC and current-generation consoles next February.
Capcom has not yet revealed gameplay details for Resident Evil Requiem, as the initial trailer focuses on the story, characters, and locations. The game's scenario appears to draw heavily from the franchise's history, likely to celebrate the 30th anniversary of the original Resident Evil's 1996 release.
Much of the trailer highlights the ruins of Raccoon City, suggesting that players will revisit the setting of the series' first three entries. Brief shots clearly show the decayed remains of the city's police station – where much of Resident Evil 2 and 3 took place – with layouts that appear nearly identical to those in the 2019 and 2020 remakes.
Another shot depicts the city's deserted landscape, featuring a crater at its center left by the missile that destroyed the town following the events of RE3. Additionally, the game's protagonist is FBI agent Grace Ashcroft, the daughter of one of the main characters from Resident Evil Outbreak, an online multiplayer spin-off released for the PlayStation 2 in 2003.
The game's website mentions technological advancements, suggesting it will showcase the next evolution of Capcom's RE Engine. This graphics engine debuted in 2017 with Resident Evil 7, which was known for its impressive level of realism and surprisingly fast performance.
However, more recent titles using the engine, such as Dragon's Dogma II and the enormously successful Monster Hunter Wilds, are far more demanding, in part due to their massive open-world environments.
Capcom's shift toward open-world games has led some to speculate that the next Resident Evil title might adopt a similar gameplay structure, representing a stark contrast to the franchise's traditional preference for isolated locations. A ruined city would provide a fitting backdrop for such a radical change, but it's difficult to say what Capcom has planned.
Other games revealed this week include Atomic Heart II, Game of Thrones: War for Westeros, Dying Light: The Beast, Lego Voyagers, Killer Inn, Felt That Boxing, Nioh 3, 007 First Light, Lumines Arise, Marvel Tōkon, Thief VR, Mortal Kombat Legacy Kollection, and more. More new titles are expected to debut this weekend during the Xbox Games Showcase 2025 on Sunday, June 8, at 1 pm ET.
Resident Evil Requiem launches on February 27, 2026, on Steam, PlayStation 5, and Xbox Series consoles.
Trump-Musk feud wipes $152 billion off Tesla, sparks Dragon spacecraft threat and Epstein files claim
WTF?! When the president of the United States and the world's richest person have a falling out, the ramifications can be widespread. Since Musk and Trump went from friends to enemies, $152 billion has been wiped off Tesla's market value, and Musk has threatened to decommission the SpaceX Dragon spacecraft that NASA relies on to deliver crew to and from the International Space Station. Musk has also said that Trump appears in files relating to Jeffrey Epstein.
When he left the White House last week, Musk blasted those who said he'd had a falling out with Trump. The CEO insisted his departure was due to his scheduled 130 days as a government employee coming to an end. But Musk had been publicly criticizing Trump's Big Beautiful Bill Act, warning it would increase the budget deficit.
After learning that an electric-vehicle tax credit that would help incentivize Tesla purchases was not included in the bill, Musk called it "a disgusting abomination" on X and urged Americans to call Congress to have the bill killed.
On Thursday, the two men used their respective social media platforms to throw insults at each other. At one point, Trump threatened to "terminate Elon's Governmental Subsidies and Contracts" as a way to slash billions of dollars from the budget.
The warning sent Tesla's shares down just over 14%, wiping around $152 billion off its valuation – and almost $100 billion off Musk's total net worth.
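Those two figures are consistent with a roughly trillion-dollar company; a quick check using only the numbers in that sentence:

```python
# Consistency check using only the figures reported above.
value_wiped_usd = 152e9    # ~$152 billion wiped off Tesla's valuation
share_price_drop = 0.14    # shares fell "just over 14%"

implied_market_cap = value_wiped_usd / share_price_drop
print(f"Implied pre-drop market cap: ~${implied_market_cap / 1e12:.2f} trillion")
# -> Implied pre-drop market cap: ~$1.09 trillion
```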
In response to Trump's threat to cancel Musk's government contracts, Musk said SpaceX would begin decommissioning its Dragon spacecraft immediately. The craft, which NASA relies on for transport missions, including ferrying astronauts to the ISS, is under a contract worth roughly $4.9 billion. The capsule is currently the only US spacecraft certified to fly crew to orbit; the only other crewed vehicle carrying astronauts to the ISS is Russia's Soyuz.
However, after an X user told him to "cool off," Musk wrote, "Ok, we won't decommission Dragon."
// Related Stories
As the war of words has grown, Musk said Trump's controversial tariffs will cause a recession in the second half of this year. But his "really big bomb" was an allegation that Trump appears in the files of pedophile financier Jeffrey Epstein, who killed himself in his jail cell in August 2019 while awaiting trial.
Musk has also shared a post calling for Trump's impeachment and posted a poll asking if a new political party should be created in the US that "actually represents the 80% in the middle." 81% of the 4.4 million respondents have voted yes.
One has to wonder if Musk believes his time in the White House was worth it. Beyond his reputational damage, his companies have suffered by association. Tesla sales were down 50% last month, and there have been protests and attacks on dealerships. The company's share price is down 40% from its all-time high on December 17, 2024, before Musk was part of DOGE.
Intel integrated graphics overclocked to 4.25 GHz, edging out the RTX 4090's world record
What just happened? Enthusiast-class discrete graphics cards typically dominate conversations about high performance, but integrated GPUs aren't far behind when it comes to overclocking records for clock frequency. The latest world record holder recently explained how he managed voltage and temperature levels to push an Intel iGPU past the 4GHz mark for the very first time.
Overclocker Pieter Massman recently detailed how he set a new graphics clock frequency world record at Computex 2025. While most recent record holders have used Nvidia's flagship RTX 4090, Massman surpassed them using the integrated GPU from an Intel Core Ultra 9 285K.
With help from Asus overclocker Peter "Shamino" Tan, Massman pushed the Arrow Lake processor's Xe2-LPG 64EU iGPU to 4.25GHz – more than double its stock boost clock. The team achieved the feat twice, validating the results in CPU-Z during a livestream in the early days of this year's Computex event in Taiwan.
According to Massman's blog, Skatter Bencher, the achievement marks a new world record for both integrated GPU clock frequency and GPU clock frequency overall.
Since the RTX 4090 launched in 2022, frequency records have steadily climbed from around 3.3GHz to 4.02GHz in 2023. Massman had previously set the iGPU record at 3.9GHz during an Arrow Lake launch event late last year – using the same chip he would later overclock at Computex.
The overclock involved setting the GT ratio to multiply the default reference clock by a factor of 85, the highest available setting. Initially, Massman supplied 1.3V to the integrated GPU via a VccGT voltage rail dedicated to the CPU's graphics tile cores, but this only reached 3.1GHz.
Pushing further required a delicate balance of overvolting and liquid nitrogen cooling, ultimately achieving 4.25GHz with 1.7V and a temperature of -170°C.
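Working backward from the article's own numbers, a top ratio of 85 landing on 4.25 GHz implies a 50 MHz reference clock. A minimal check (the 2.0 GHz stock boost figure is an assumption, consistent with "more than double its stock boost clock"):

```python
# GPU clock = GT ratio x reference clock. The ratio and record frequency come from
# the article; the reference clock is inferred from them, and the 2.0 GHz stock
# boost is an assumption consistent with "more than double its stock boost clock".

gt_ratio = 85               # highest available GT ratio setting
record_clock_mhz = 4250     # 4.25 GHz world-record result
stock_boost_mhz = 2000      # assumed stock boost clock of the 285K's iGPU

ref_clock_mhz = record_clock_mhz / gt_ratio
print(f"Inferred reference clock: {ref_clock_mhz:.0f} MHz")                      # 50 MHz
print(f"Overclock vs. stock boost: {record_clock_mhz / stock_boost_mhz:.2f}x")   # 2.12x
```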
However, measuring the iGPU's performance at those settings overwhelmed several common benchmarking tools. Furmark crashed after reaching 2,800 points in 1080p, 3DMark Speed Way halted around 650 marks, and GPUPI 1B only ran for about 17.9 seconds. To stabilize the system, the team overclocked the graphics die-to-die interface and increased the reference clock.
While discrete and overall GPU clock frequency records have steadily risen since the early 2000s, progress with iGPUs only resumed recently.
After breaking the 2GHz barrier in 2011, integrated GPU overclocking plateaued for nearly a decade before surpassing 3GHz in 2023. Whether graphics overclocking will stagnate as CPU frequency gains have since 2010 remains to be seen.
Australia becomes first country to force disclosure of ransomware payments
TL;DR: Canberra authorities are embracing a tough approach to ransomware threats. A new law will require certain organizations to disclose when and how much they have paid to cybercriminals following a data breach. However, experts remain unconvinced that this is the most effective way to tackle the problem.
Companies operating in Australia must now report any payments made to cybercriminals after experiencing a ransomware incident. Government officials hope the new mandate will help them gain a deeper understanding of the issue, as many enterprises continue to pay ransoms whenever they fall victim to file-encrypting malware.
Originally proposed last year, the law applies only to companies with an annual turnover exceeding $1.93 million. This threshold targets the top 6.5 percent of Australia's registered businesses – representing roughly half of the country's total economic output.
Under the new law, affected companies must report ransomware incidents to the Australian Signals Directorate. Failure to properly disclose an attack will result in fines under the country's civil penalty system.
Authorities are reportedly planning to follow a two-stage approach, initially prioritizing major violations while fostering a "constructive" dialogue with victims.
Starting next year, regulators will adopt a much stricter stance toward noncompliant organizations. The Australian government has implemented this mandatory reporting requirement after concluding that voluntary disclosures were insufficient. In 2024, officials noted that ransomware and cyber extortion incidents were vastly underreported, with only one in five victims coming forward.
Ransomware remains a highly complex and growing phenomenon, with attacks reaching record levels despite increased law enforcement actions against notorious cyber gangs. Although several governments have proposed similar regulations, Australia is the first country to formally enact such a law.
Jeff Wichman, director of incident response at cybersecurity firm Semperis, cautions that mandatory reporting is a double-edged sword. While the government may gain valuable data and insights into attacker profiles, the law may not reduce the frequency of attacks.
Instead, it could serve mainly to publicly shame breached organizations – while cybercriminals continue to profit. A recent Semperis study found that over 70 percent of 1,000 ransomware-hit companies opted to pay the ransom and hope for the best.
"Some companies, they just want to pay it and get things done, to get their data off the dark web. Others, it's a delayed response perspective, they want negotiations to happen with the attacker while they figure out what happened," Wichman explained.
According to the study, 60 percent of victims who paid received functional decryption keys and successfully recovered their data. However, in 40 percent of cases, the provided keys were corrupted or ineffective.
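For a rough sense of scale, the short Java tally below combines the study's figures; the assumption that the 60/40 split applies to the roughly 700 payers in the 1,000-company sample is ours for illustration, not a breakdown Semperis published.

public class RansomTally {
    public static void main(String[] args) {
        int victims = 1000;            // companies surveyed in the Semperis study
        double payRate = 0.70;         // share that opted to pay the ransom
        double workingKeyRate = 0.60;  // payers who received usable decryption keys

        double paid = victims * payRate;            // ~700 companies paid
        double recovered = paid * workingKeyRate;   // ~420 recovered their data
        double paidButLost = paid - recovered;      // ~280 paid and still could not decrypt

        System.out.printf("Paid: %.0f, recovered: %.0f, paid but received bad keys: %.0f%n",
                paid, recovered, paidButLost);
    }
}

By that reading, roughly 280 of the surveyed companies paid a ransom and still failed to recover their data.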
-
TSMC's 2nm wafer prices hit $30,000 as SRAM yields reportedly hit 90%
In context: TSMC has steadily raised the prices of its most advanced semiconductor process nodes over the past several years – so much so that one analysis suggests the cost per transistor hasn't decreased in over a decade. Further price hikes, driven by tariffs and rising development costs, are reinforcing the notion that Moore's Law is truly dead.
The Commercial Times reports that TSMC's upcoming N2 2nm semiconductors will cost $30,000 per wafer, a roughly 66% increase over the company's 3nm chips. Future nodes are expected to be even more expensive and likely reserved for the largest manufacturers.
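To put the $30,000 figure in perspective, the sketch below estimates a per-die cost. The 300 mm wafer, the 100 mm² die size, and the textbook dies-per-wafer approximation are illustrative assumptions rather than figures from the report.

public class WaferCostSketch {
    public static void main(String[] args) {
        double n2WaferPrice = 30_000.0;               // reported N2 wafer price
        double impliedN3Price = n2WaferPrice / 1.66;  // ~$18,000, back-of-envelope

        // Common approximation: dies per wafer = pi*r^2/A - pi*d/sqrt(2A)
        double waferDiameterMm = 300.0;  // assumed standard 300 mm wafer
        double dieAreaMm2 = 100.0;       // hypothetical 100 mm^2 die
        double radius = waferDiameterMm / 2.0;
        double diesPerWafer = Math.PI * radius * radius / dieAreaMm2
                - Math.PI * waferDiameterMm / Math.sqrt(2.0 * dieAreaMm2);

        System.out.printf("Implied N3 wafer price: ~$%.0f%n", impliedN3Price);
        System.out.printf("Candidate dies per wafer: ~%.0f%n", diesPerWafer);
        System.out.printf("Wafer cost per die before yield loss: ~$%.2f%n",
                n2WaferPrice / diesPerWafer);
    }
}

Under those assumptions a wafer carries roughly 640 candidate dies, or just under $47 of wafer cost per die before any defect or yield losses are counted.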
TSMC has justified these price increases by citing the massive cost of building 2nm fabrication plants, which can reach up to $725 million. According to United Daily News, major players such as Apple, AMD, Qualcomm, Broadcom, and Nvidia are expected to place orders before the end of the year despite the higher prices, potentially bringing TSMC's 2nm Arizona fab to full capacity.
Also see: How profitable are TSMC's nodes: crunching the numbers
Unsurprisingly, Apple is getting first dibs. The A20 processor in next year's iPhone 18 Pro is expected to be the first chip based on TSMC's N2 process. Intel's Nova Lake processors, targeting desktops and possibly high-end laptops, are also slated to use N2 and are expected to launch next year.
Earlier reports indicated that yield rates for TSMC's 2nm process reached 60% last year and have since improved. New data suggests that 256Mb SRAM yield rates now exceed 90%. Trial production is likely already underway, with mass production scheduled to begin later this year.
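The roughly 60 percent process yield and the 90-plus percent SRAM yield are not directly comparable, since yield falls as die area grows. The sketch below applies the textbook Poisson yield model; the die areas and the derived defect density are illustrative assumptions, not disclosed TSMC data.

public class YieldModelSketch {
    public static void main(String[] args) {
        // Poisson yield model: Y = exp(-A * D0), with A in cm^2 and D0 in defects/cm^2.
        double logicDieAreaCm2 = 1.0;   // hypothetical ~100 mm^2 logic die
        double sramDieAreaCm2 = 0.25;   // hypothetical ~25 mm^2 SRAM test chip

        // If the larger die yielded ~60%, the implied defect density would be:
        double d0 = -Math.log(0.60) / logicDieAreaCm2;      // ~0.51 defects/cm^2

        double sramYield = Math.exp(-sramDieAreaCm2 * d0);  // ~0.88
        System.out.printf("Implied defect density: %.2f defects/cm^2%n", d0);
        System.out.printf("Predicted yield for the small SRAM die: %.0f%%%n", sramYield * 100);
    }
}

The takeaway is simply that a small SRAM test chip can clear 85 to 90 percent yield even when a much larger logic die on the same line yields considerably less.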
With tape-outs for 2nm-based designs surpassing previous nodes at the same development stage, TSMC aims to produce tens of thousands of wafers by the end of 2025.
TSMC also plans to follow N2 with N2P and N2X in the second half of next year. N2P is expected to offer an 18% performance boost over N3E at the same power level and 36% greater energy efficiency at the same speed, along with significantly higher logic density. N2X, slated for mass production in 2027, will increase maximum clock frequencies by 10%.
As semiconductor geometries continue to shrink, power leakage becomes a major concern. TSMC's 2nm nodes will address this issue with gate-all-around (GAA) transistor architectures, enabling more precise control of electrical currents.
Beyond 2nm lies the Angstrom era, where TSMC will implement backside power delivery to further enhance performance. Future process nodes like A16 (1.6nm) and A14 (1.4nm) could cost up to $45,000 per wafer.
Meanwhile, Intel is aiming to outpace TSMC's roadmap. The company recently began risk production of its 18A node, which also features gate-all-around transistors and backside power delivery. These chips are expected to debut later this year in Intel's upcoming laptop CPUs, codenamed Panther Lake.
-
Java turns 30 and shows no signs of slowing down
The big picture: Java stands as one of the enduring pillars of the software world. The programming language was released by Sun Microsystems on May 23, 1995, and so far has weathered the shifting tides of technology, outlasting many of its rivals and adapting to new eras of computing.
Java's origins trace back to the early 1990s, when a team at Sun Microsystems led by James Gosling set out to develop a language for interactive television and embedded devices. Initially dubbed "Oak," the project aimed to simplify application development across a range of devices. Gosling, who remains closely associated with Java to this day, famously described the language as "C++ without the guns and knives," a nod to its simpler, safer syntax compared to its predecessor.
As the World Wide Web began to take off, Java's focus shifted from consumer electronics to internet applications. The language's defining feature – platform independence – meant that code could be compiled into bytecode and executed on any device with a Java Virtual Machine (JVM).
This "write once, run anywhere" capability was groundbreaking, allowing software to run across different operating systems with minimal modification.
Java quickly gained traction with web applets and, soon after, enterprise applications. Its rapid rise prompted competitors to react. Microsoft introduced Visual J++, a Java-compatible language for Windows, but the product was discontinued after a legal dispute with Sun over non-compliance with Java's standards.
Many universities and colleges offer dedicated Java programming courses and certificates. It is often an introductory language in computer science curricula because of its object-oriented structure.
The late 1990s and early 2000s saw significant evolution in Java's capabilities. Features like JavaBeans, JDBC (Java Database Connectivity), and the Swing GUI library broadened its use. The language was eventually split into multiple editions – Standard (SE), Enterprise (EE), and Micro (ME) – tailored for desktop, server, and mobile development, respectively.
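As a brief illustration of the JDBC API mentioned above, the sketch below opens a connection and runs a query through the standard java.sql interfaces. The in-memory H2 connection URL is a hypothetical stand-in and assumes the matching driver is on the classpath; any JDBC-compliant database would work the same way.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection string; swap in a real database URL and driver.
        String url = "jdbc:h2:mem:demo";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE TABLE users (id INT, name VARCHAR(64))");
            stmt.execute("INSERT INTO users VALUES (1, 'Ada')");
            try (ResultSet rs = stmt.executeQuery("SELECT name FROM users")) {
                while (rs.next()) {
                    System.out.println(rs.getString("name")); // prints "Ada"
                }
            }
        }
    }
}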
In 2006, Sun made a pivotal move by open-sourcing Java, releasing the OpenJDK under the GNU General Public License. This move helped cement Java's role in the open-source community and made it even more accessible to developers worldwide.
Java's stewardship changed in 2010 when Oracle acquired Sun Microsystems. While the core implementation of Java remained open source, Oracle introduced licensing changes in later years that led some organizations to explore alternatives such as OpenJDK builds from other vendors.
Java's influence on enterprise software has been profound. Its robust ecosystem, including frameworks like Spring Boot and Jakarta EE, has made it a go-to choice for organizations seeking reliability and scalability. The language's stability and backward compatibility have ensured that even as trends come and go, Java remains a constant in the back offices of countless businesses.
James Gosling remains closely associated with Java to this day.
According to industry experts, Java's longevity stems from its adaptability. Brian Fox, CTO of Sonatype, told The Register that Java has endured through changing paradigms, from early web applets to today's cloud-native applications. "Java has outlasted trends, rival languages, and shifting paradigms. It paved the way for open source to enter the enterprise. And, arguably, the enterprise never looked back."
While it may no longer be the flashiest programming language around, Java remains one of the most important. It powers enterprise systems, big data platforms, and cloud-native architectures alike. Despite the rise of languages like Python and JavaScript, Java consistently ranks among the most-used programming languages in industry surveys.
As Java enters its fourth decade, it shows no signs of fading away. Instead, it stands as a testament to the enduring value of reliability, adaptability, and a vibrant developer community – a language that, for many, is as essential today as it was in 1995.
-
Amazon Fire Sticks are enabling billions in video piracy, report finds
Why it matters: It's somewhat ironic that arguably the biggest piracy enabler today is a device that comes from Amazon, a $2 trillion tech giant with a streaming service. According to a new report, jailbroken Amazon Fire Sticks are used to watch billions of dollars worth of pirated streams, and Google, Meta, and Microsoft are exacerbating the situation.
A report from Enders Analysis, titled "Video piracy: Big tech is clearly unwilling to address the problem," looks at the issue of illegal streams.
Driving the piracy epidemic, particularly in Europe, is the sports broadcasting industry. The BBC reports that the overall value of media rights for this business passed $60 billion last year, which means fans are paying increasingly higher prices to watch sports on TV, especially if they pay for multiple services. UK soccer fans had to pay around $1,171 in the 2023/24 season if they wanted to watch all televised Premier League games.
The same is also true for mainstream streamers such as Netflix and Disney Plus, which keep raising their subscription costs and clamping down on account sharing.
Paying so much in these economically uncertain times has pushed more people into canceling their legitimate streaming services and turning to pirated alternatives.
The report notes that Tom Burrows, head of global rights at the world's largest European soccer streamer, DAZN, called streaming piracy "almost a crisis for the sports rights industry."
Comcast-owned European TV giant Sky Group echoed the warnings. It said piracy was costing the company "hundreds of millions of dollars" in revenue.
Many high-profile events, such as major games, can draw tens of thousands of viewers away from legal services and toward the many pirated streams showing the same content at a fraction of the price – or free.
Most people are familiar with jailbroken Amazon Fire Sticks being used to access illegal streaming services – the report calls the device a "piracy enabler." According to Sky, 59% of people who watched pirated material in the UK over the last year did so using a Fire Stick. The report says that the device enables "billions of dollars in piracy" overall.
Would you pirate this pirate show?
"People think that because it's a legitimate brand, it must be OK. So they give their credit card details to criminal gangs. Amazon is not engaging with us as much as we'd like," said Sky Group COO Nick Herm.
As with all forms of piracy, there are risks associated with this trend. Providing credit card details and email addresses to those behind the services isn't exactly safe, and there have been cases of jailbroken, malware-infested pirate streaming devices – not just Fire Sticks – being sold on eBay, Craigslist, and the dark web.
There has been a crackdown on the sale of hacked Fire Sticks in the UK recently. Last year saw a man given a two-year suspended sentence for selling the devices, while another was jailed. Just using these sticks or illegal IPTV subscriptions is breaking the law.
It's not just Amazon that is being blamed. The report highlights Facebook's lack of action to stop ads for illegal streams running on the platform. Google and Microsoft are also called out for the "continued deprecation" of their respective DRM systems, Widevine and PlayReady; the report says they "are now compromised across various security levels." Microsoft's last update to PlayReady was December 2022.
"Over twenty years since launch, the DRM solutions provided by Google and Microsoft are in steep decline," reads the report. "A complete overhaul of the technology architecture, licensing, and support model is needed. Lack of engagement with content owners indicates this a low priority."
Amazon says it is working with industry partners and relevant authorities to combat piracy and protect customers from the risks associated with pirated content. The company has taken (or is about to take) steps to make turning Fire TV-branded devices into piracy boxes more difficult. These include raising the technical bar (ADB over local network disabled, tighter DRM) and adding warning messages about legality. Moreover, Amazon is switching Fire TV devices from Android to the Linux-based Vega OS later this year, which doesn't run Android APKs at all.
-
Ultra-fast fiber sets global speed record: 1.02 petabits per second over continental distance
Why it matters: A technological leap in fiber optics has shattered previous limitations, achieving what experts once considered impossible: transmitting data at 1.02 petabits per second – enough to download every movie on Netflix 30 times over – across 1,808 kilometers using a single fiber no thicker than a human hair.
At the heart of this breakthrough – driven by Japan's National Institute of Information and Communications Technology (NICT) and Sumitomo Electric Industries – is a 19-core optical fiber with a standard 0.125 mm cladding diameter, designed to fit seamlessly into existing infrastructure and eliminate the need for costly upgrades.
Each core acts as an independent data channel, collectively forming a "19-lane highway" within the same space as traditional single-core fibers.
Unlike earlier multi-core designs limited to short distances or specialized wavelength bands, this fiber operates efficiently across the C and L bands (commercial standards used globally), thanks to a refined core arrangement that slashes signal loss by 40% compared to prior models.
The experiment's success relied on a complex recirculating loop system. Signals traveled through an 86.1-kilometer fiber segment 21 times, simulating a cross-continental journey equivalent to linking Berlin to Naples or Sapporo to Fukuoka.
To maintain integrity over this distance, researchers deployed a dual-band optical amplification system, comprising separate devices that boosted signals in the C and L bands. This enabled 180 distinct wavelengths to carry data simultaneously using 16QAM modulation, a method that packs more information into each pulse.
At the receiving end, a 19-channel detector, paired with advanced MIMO (multiple-input multiple-output) processing, dissected interference between cores, much like untangling 19 overlapping conversations in a crowded room.
Schematic diagram of the transmission system
This digital signal processor, leveraging algorithms developed over a decade of multi-core research, extracted usable data at unprecedented rates while correcting for distortions accumulated over 1,808 km.
The achievement caps years of incremental progress. In 2023, the same team achieved 1.7 petabits per second, but only across 63.5 km. Earlier efforts using 4-core fibers reached 0.138 petabits over 12,345 km by tapping the less practical S-band, while 15-mode fibers struggled with signal distortion beyond 1,001 km due to mismatched propagation characteristics.
The new 19-core fiber's uniform core design sidesteps these issues, achieving a capacity-distance product of 1.86 exabits per second per kilometer – 14 times higher than previous records for standard fibers.
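The headline numbers hold up to simple arithmetic. The sketch below uses the rounded figures reported here, which is why the product lands slightly under the officially recorded 1.86 exabit-per-second-kilometer value.

public class FiberRecordMath {
    public static void main(String[] args) {
        double loopKm = 86.1;                    // length of the recirculating fiber loop
        int passes = 21;                         // times the signal traversed the loop
        double throughputBitsPerSec = 1.02e15;   // 1.02 petabits per second

        double distanceKm = loopKm * passes;                          // ~1,808 km
        double capacityDistance = throughputBitsPerSec * distanceKm;  // bit/s * km

        System.out.printf("Transmission distance: %.1f km%n", distanceKm);
        System.out.printf("Capacity-distance product: %.2f Eb/s*km%n",
                capacityDistance / 1e18);        // ~1.84 Eb/s*km
    }
}

Twenty-one passes through the 86.1 km loop give the 1,808 km distance, and multiplying by 1.02 Pb/s yields roughly 1.84 Eb/s*km, consistent with the record once unrounded values are used.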
Image diagram of 19-core optical fiber.
Presented as the top-rated post-deadline paper at OFC 2025 in San Francisco, this work arrives as global data traffic is projected to triple by 2030.
While challenges remain, such as optimizing amplifier efficiency and scaling MIMO processing for real-world use, the technology offers a viable path to petabit-scale networks. Researchers aim to refine production techniques for mass deployment, potentially enabling transoceanic cables that move entire data centers' worth of information hourly.
Sumitomo Electric's engineers, who designed the fiber's coupled-core architecture, note that existing manufacturing lines can adapt to produce the 19-core design with minimal retooling.
Meanwhile, NICT's team is exploring AI-driven signal processing to further boost speeds. As 6G and quantum computing loom, this breakthrough positions fiber optics not just as a backbone for tomorrow's internet, but as the central nervous system of a hyperconnected planetary infrastructure.
-
Apple hasn't given up on haptic buttons for iPhone, iPad, and Apple Watch
Rumor mill: For years, leakers have reported that Apple is working to eliminate physical buttons from the iPhone entirely. Although the company opted to keep them on the iPhone 15, 16, and likely the upcoming iPhone 17, new information suggests that Apple is expanding its buttonless ambitions across multiple devices.
Weibo-based leaker "Instant Digital" (via MacRumors) recently revived rumors that Apple is exploring ways to replace physical buttons with haptic inputs. This radical change could debut in future iPhones and other mobile devices.
Back in 2022, leaks indicated that the iPhone 15 might replace its protruding power and volume buttons with touch-sensitive areas that use haptic feedback to simulate physical presses – similar to the iPhone 7's haptic home button.
However, both the iPhone 15 and 16 retained traditional buttons, and the iPhone 17, expected later this year, will likely be no different. Recently, the rumors had quieted down, with reports suggesting that Apple had temporarily shelved the project.
Some speculate that Apple is gradually working toward a completely smooth iPhone, one without buttons or ports, that relies entirely on touch and wireless technologies.
Beyond offering a cleaner aesthetic, removing physical buttons could improve durability. Fewer moving parts mean less internal wear and tear, and eliminating external buttons and ports can enhance waterproofing. However, dropping USB support would be a dramatic and unprecedented design shift.
Although Apple hasn't given up on haptics, accuracy remains a major challenge. Instant Digital claims the company is still working to minimize accidental inputs, while also expanding these efforts to the iPad and Apple Watch.
Meanwhile, the latest iPhone 17 rumors suggest Apple plans to introduce a new mid-tier model: the iPhone 17 Air. At just 5.5mm thick, it will feature a 6.6-inch always-on OLED display with 120Hz ProMotion support and a new A19 processor.
While the iPhone 17 Pro and Air models are expected to include 12GB of RAM, analyst Jeff Pu believes the base model will likely ship with 8GB of RAM and the A18 processor – the same chip found in the base iPhone 16. However, the newer device may feature a slightly larger 6.3-inch display.
Apple typically unveils new iPhones in the fall, but the first preview of this year's software updates will arrive next week at the 2025 Worldwide Developers Conference on June 9. There, Apple is expected to announce a rebranding of its operating systems, starting with iOS 26, macOS 26, iPadOS 26, and more.
iPhone users: Do you use Safari or Chrome?
Unknown object in Milky Way found emitting both X-rays and radio waves
What just happened? An international team of researchers has discovered a cosmic anomaly unlike anything previously witnessed. The object in question, located roughly 15,000 light-years away in our very own Milky Way galaxy, has been observed emitting both radio waves and X-ray radiation.
The celestial body, dubbed ASKAP J1832-0911, was initially found by astronomers using the Australian Square Kilometer Array Pathfinder (ASKAP), a radio telescope located in Australia. Another look using NASA's Chandra X-ray telescope found the object also emitted X-rays in an unusual yet predictable pattern.
Every 44 minutes, the object flashes both radio waves and X-rays for two minutes straight.
Team leader Zieng (Andy) Wang said the object is unlike anything they have seen before, although its origins are likely not as mysterious as one might initially think.
Wang said the object could very well be what's left of a dead star with powerful magnetic fields, called a magnetar – or perhaps something as simple as a pair of stars in a binary system in which one of the two is a highly magnetized white dwarf.
Then again, neither explanation fully accounts for the object's observed behavior, so the jury is still out. Alternatively, Wang added, the discovery could point to a new type of physics or a fresh model of stellar evolution we haven't seen before.
NASA's Chandra X-ray observatory launched into space way back in 1999 and has been orbiting Earth ever since. The ASKAP radio telescope array has been operational since 2012, and consists of 36 giant antennas – each measuring 39 feet in diameter – that work together as one tool.
Further study will be needed to help astronomers better understand exactly what they are looking at. At 15,000 light-years away, the object is somewhat nearby in the grand scheme of the universe but still quite far away in actual distance. For reference, one light-year is roughly equal to six trillion miles.
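For those who want the raw number, a quick conversion using the article's own approximation (a hypothetical back-of-the-envelope calculation, not a figure from the Nature paper):

```python
# Rough conversion using the article's approximation of one light-year ≈ 6 trillion miles
# (the more precise figure is about 5.88 trillion miles).

MILES_PER_LIGHT_YEAR = 6e12   # article's round number
distance_ly = 15_000          # approximate distance to ASKAP J1832-0911

distance_miles = distance_ly * MILES_PER_LIGHT_YEAR
print(f"{distance_miles:.1e} miles")  # ~9.0e+16, i.e. roughly 90 quadrillion miles
```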
The team's research has been published in the journal Nature and is titled, "Detection of X-ray Emission from a Bright Long-Period Radio Transient."
Dell, Nvidia, and Department of Energy join forces on "Doudna" supercomputer for science and AI
What just happened? The Department of Energy has announced plans for a new supercomputer designed to significantly accelerate research across a wide range of scientific fields. The initiative highlights the growing convergence between commercial AI development and the computational demands of cutting-edge scientific discovery.
The advanced system, to be housed at Lawrence Berkeley National Laboratory and scheduled to become operational in 2026, will be named "Doudna" in honor of Nobel laureate Jennifer Doudna, whose groundbreaking work on CRISPR gene editing has revolutionized molecular biology.
Dell Technologies has been selected to deliver the Doudna supercomputer, marking a significant shift in the landscape of government-funded high-performance computing.
While companies like Hewlett Packard Enterprise have traditionally dominated this space, Dell's successful bid signals a new chapter. "A big win for Dell," said Addison Snell, CEO of Intersect360 Research, in an interview with The New York Times, noting the company's historically limited presence in this domain.
Dell executives explained that the Doudna project enabled them to move beyond the longstanding practice of building custom systems for individual laboratories. Instead, they focused on developing a flexible platform capable of serving a broad array of users. "This market had shifted into some form of autopilot. What we did was disengage the autopilot," said Paul Perez, senior vice president and technology fellow at Dell.
The Perlmutter supercomputer at the National Energy Research Scientific Computing Center at Lawrence Berkeley National Laboratory.
A defining feature of Doudna will be its use of Nvidia's Vera Rubin platform, engineered to combine the strengths of traditional scientific simulations with the power of modern AI. Unlike previous Department of Energy supercomputers, which relied on processors from Intel or AMD, Doudna will incorporate a general-purpose Arm-based CPU from Nvidia, paired with the company's Rubin AI chips designed specifically for artificial intelligence and simulation workloads.
The architecture aims to meet the needs of the laboratory's 11,000 users, who increasingly depend on both high-precision modeling and rapid AI-driven data analysis.
Jensen Huang, founder and CEO of Nvidia, described the new system with enthusiasm. "Doudna is a time machine for science – compressing years of discovery into days," he said, adding that it will let "scientists delve deeper and think bigger to seek the fundamental truths of the universe."
In terms of performance, Doudna is expected to be over 10 times faster than the lab's current flagship system, making it the Department of Energy's most powerful resource for training AI models and conducting advanced simulations. Jonathan Carter, associate lab director for computing sciences at Berkeley Lab, said the system's architecture was shaped by the evolving needs of researchers – many of whom are now using AI to augment simulations in areas like geothermal energy and quantum computing.
Doudna's design reflects a broader shift in supercomputing. Traditional systems have prioritized 64-bit calculations for maximum numerical accuracy, but modern AI workloads often benefit from lower-precision operations (such as 16-bit or 8-bit) that enable faster processing speeds. Dion Harris, Nvidia's head of data center product marketing, noted that the flexibility to combine different levels of precision opens new frontiers for scientific research.
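To illustrate that trade-off in the abstract – this is a generic NumPy sketch, not code related to Doudna or Nvidia's Vera Rubin platform – lower-precision formats give up accuracy and range in exchange for speed and memory:

```python
import numpy as np

# Generic illustration of the precision trade-off behind mixed-precision computing;
# unrelated to any Doudna or Nvidia software.
for dtype in (np.float64, np.float32, np.float16):
    info = np.finfo(dtype)
    print(f"{dtype.__name__:>8}: ~{info.precision} decimal digits, "
          f"max ~{info.max:.2e}, machine epsilon {info.eps:.2e}")

# The same reduction computed after quantizing to 16-bit drifts from the
# 64-bit reference, because rounding error enters both the inputs and the sum.
rng = np.random.default_rng(0)
data = rng.random(10_000)

reference = data.sum(dtype=np.float64)
low_precision = data.astype(np.float16).sum(dtype=np.float16)

print(f"float64 sum: {reference:.4f}")
print(f"float16 sum: {float(low_precision):.4f}")
print(f"relative error: {abs(float(low_precision) - reference) / reference:.2e}")
```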
The supercomputer will also be tightly integrated with the Energy Sciences Network, allowing researchers nationwide to stream data directly into Doudna for real-time analysis. Sudip Dosanjh, director of the National Energy Research Scientific Computing Center, described the new system as "designed to accelerate a broad set of scientific workflows."
You have 30 minutes: Where would you hide a USB drive from the FBI?
We're bringing back something a lot of you told us you missed – the Weekend Open Forum. It's a chance to unwind a little, step away from the news cycle, and just chat, joke, and share your thoughts with the rest of the TechSpot community. We'll come back to it once every month or so (maybe more if you like it), and each one will pose a fun, geeky, or thought-provoking question to kick off the conversation.
So, without further ado, here's the first topic – inspired by a spicy little meme that's been making the rounds:
You have 30 minutes to hide a USB drive in your house.
Your house will then be raided by police, detectives, and FBI agents – all looking for that one USB.
Where do you hide it so that it won't be found?
We want to hear your most clever, outrageous, (legal-ish?) ideas. Think like a spy. Think like a hacker. Think like someone who's watched just enough heist movies to be dangerously imaginative.
Would you stash it inside a hollowed-out bar of soap? Tape it under the fridge coils? Disguise it as a dead battery? Bury it in the litter box? Or go full 4D chess and leave it in plain sight? Bonus points for creativity, plausibility, and absurdity. Drop your ideas in the comments below and let's see who can outwit a full FBI search team. We'll highlight our favorite replies in next month's WOF!
Have fun – and remember: this is all hypothetical… probably.
Modder crams RTX 4060 PC inside an office chair
WTF?! Some people prefer to hide their PCs to save desk space or maintain a minimalist aesthetic, but a few modders have taken extreme measures to conceal their rigs inside furniture. The latest example successfully crams a mid-range gaming PC into an unassuming office chair.
A recent video from YouTuber and modder "Basically Homeless" showcases one of the most unusual methods for conserving space in a PC gaming battle station: turning the chair into a case mod. Instructions for 3D printing an enclosure to fit inside a FlexiSpot office chair are available to subscribers of the YouTuber's free-tier Patreon.
Related reading: FlexiSpot C7 Ergonomic Office Chair Review
Luckily, the chair FlexiSpot donated for the video has an opening between the seat cushion and the chair mechanism just wide enough to accommodate a mini-ITX motherboard equipped with a Ryzen 7 9800X3D and 64GB of RAM. Inserting 50mm aluminum standoffs provides enough space for a low-profile cooler, a flex power supply unit (normally used in server racks), and a mini-ITX Nvidia RTX 4060.
After some trial and error, Basically Homeless designed and 3D printed a custom enclosure to conceal the PC components between the seat and cylinder with sufficient ventilation and several I/O ports. The I/O port openings support keystone modules, allowing the modder to hot-swap various ports such as HDMI outputs, USB ports, and headphone jacks.
However, when using the chair PC normally, the only cable that is partially visible at the bottom is the power cord, which Basically Homeless ran through several holes he cut into the base of the chair. This leaves the display as the last component that normally requires wires, which prompted the most unorthodox step of the entire project.
A wireless monitor that receives a video signal over Wi-Fi is one option, but it adds about 10 milliseconds of input lag, which Basically Homeless couldn't accept. So he ran another cable through the lumbar and headrest, which connects to a Bigscreen Beyond VR headset. Even when playing non-VR games, it (or, alternatively, a Meta Quest) can project a virtual 1080p screen in front of the viewer.
Impressively, the PC components remain undamaged when reclining, and Basically Homeless doesn't feel them through the seat. However, he might have inadvertently turned it into a heated seat.
In principle, the project resembles the Endgame Invisible PC, which modder and YouTuber Matthew Perks installed inside a desk last year. It includes a fold-out monitor, two PSUs, an RTX 4090, and liquid cooling.
This stick-on e-tattoo tracks mental fatigue in real time
WTF?! Would you be willing to wear a stick-on facial tattoo that makes you look like a poor Cyberpunk 2077 cosplayer? You might be more tempted by the fact it's designed to show when your brain is overworked and you're at risk of making errors.
Dr Nanshu Lu, an author of the research from the University of Texas at Austin, writes that the e-tattoo could be valuable for professions that require high levels of concentration for extended periods, such as air traffic controllers, vehicle drivers, pilots, and robot operators.
The device could give wearers warnings and alerts should it detect they are becoming mentally overloaded, allowing them to adjust their workload or ask a co-worker for some help.
"Technology is developing faster than human evolution. Our brain capacity cannot keep up and can easily get overloaded," said Lu. "There is an optimal mental workload for optimal performance, which differs from person to person."
"Previous studies indicated that the optimal mental performance occurs when the mental workload demand is not too low or too high," Lu added. "When it's too low, it's very boring, and the people will just lose focus."
Lu and colleagues write that self-report questionnaires for mental workload assessment often fail to capture participants' cognitive effort objectively, and such surveys are typically conducted only after a task is complete.
While traditional electroencephalography (EEG) and electrooculography (EOG) devices can be used for physiological mental workload monitoring, they are wired, bulky, and uncomfortable. They are also affected by head movements, meaning they're not exactly practical for real-world use.
The wireless forehead EEG and EOG sensor is designed to be as thin and conformable as possible. It is worn on the skin as a temporary tattoo sticker, which means it can be worn while the wearer is also sporting headgear such as a helmet. Being a sticker means the device can be personalized to fit different sized heads, ensuring the sensors are always in the right spot.
The tattoo is disposable and connected to a reusable flexible printed circuit using conductive tape, with the battery clipped on the device. The entire setup is expected to cost less than $200.
The researchers tested the system by placing the tattoo on six volunteers. The participants watched 20 letters flash onto a screen one after another, each in a different spot. They clicked the mouse whenever the current letter or its position matched the one that had appeared a number (N) of items earlier. Every volunteer repeated the exercise several times while N varied from 0 to 3, creating four escalating difficulty levels.
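To make the task concrete, here is a minimal sketch of the matching rule described above – illustrative Python, not the study's actual test software; the 0-back level, which matches against a fixed target rather than recent history, is omitted:

```python
# Minimal sketch of the dual N-back rule used in the experiment described above:
# a trial counts as a match when the current letter, or its on-screen position,
# equals the one shown N items earlier. Illustrative only; assumes N >= 1.

def nback_matches(stimuli, n):
    """stimuli: list of (letter, position) pairs; returns a match flag per trial."""
    flags = []
    for i, (letter, pos) in enumerate(stimuli):
        if i < n:
            flags.append(False)                 # not enough history yet
            continue
        ref_letter, ref_pos = stimuli[i - n]
        flags.append(letter == ref_letter or pos == ref_pos)
    return flags

# Example: a 2-back pass over a short sequence of (letter, screen-position) pairs.
sequence = [("A", 1), ("B", 2), ("A", 3), ("C", 2), ("A", 3)]
print(nback_matches(sequence, n=2))  # [False, False, True, True, True]
```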
As the tests became more difficult, theta and delta brainwave activity increased, indicating greater cognitive demands, while alpha and beta activity rose in line with the participants' fatigue.
After feeding all the test data into a machine-learning algorithm, the researchers found the system was better able to predict mental workload than EEG and EOG data alone.
The next step for the researchers is to find a method for the signals to be decoded by the device's microprocessor, which can then alert an app if a wearer's mental workload becomes too high.
AI could erase half of all entry-level white-collar jobs within five years, warns Anthropic CEO
What just happened? Hearing people warn about the danger that generative AI presents to the global job market is concerning enough, but it's especially worrying when these ominous predictions come from those behind the technology. Dario Amodei, CEO of Anthropic, believes that AI could wipe out about half of all entry-level white-collar jobs in the next five years, leading to unemployment spikes up to 20%.
Amodei made his comments during an interview with Axios. He said that AI companies and the government needed to stop "sugar-coating" the potential mass elimination of jobs across technology, finance, law, consulting and other white-collar professions, with entry-level jobs most at risk.
Amodei said he was making this warning public in the hope that the government and other AI giants such as OpenAI will start preparing ways to protect the nation from a situation that could get out of hand.
"Most of them are unaware that this is about to happen," Amodei said. "It sounds crazy, and people just don't believe it."
The CEO's comments are backed up by reports into the state of the jobs market. The US IT job market declined for the second year in a row in 2024. There was also a report from SignalFire that found Big Tech's hiring of new graduates is down by over 50% compared to the pre-pandemic levels of 2019. Startups, meanwhile, have seen their hiring of new grads fall by over 30% during the same period.
We're also seeing huge layoffs across multiple tech companies, a large part of which can be attributed to AI replacing workers' duties.
The one bit of good news for workers is that some firms, including Klarna and Duolingo, are finding that the subpar performance of these bots and the public's negative feelings toward their use are forcing companies to start hiring humans again.
Amodei's Anthropic AI firm is playing its own part in all this, of course. The company's latest Claude 4 AI model can code at a proficiency level close to that of humans – it's also very good at lying and blackmail.
"We, as the producers of this technology, have a duty and an obligation to be honest about what is coming," Amodei said. "I don't think this is on people's radar."
The AI arms race in this billion-dollar industry is resulting in LLMs improving all the time. And with the US in a battle to stay ahead of China, regulation is rarely high on the government's agenda.
AI companies tend to claim that the technology will augment jobs, helping people become more productive. That might be true right now, but it won't be long before the systems are able to replace the people they are helping.
Amodei says the first step in addressing the problem is to make people more aware of what jobs are vulnerable to AI replacement. Helping workers better understand how AI can augment their jobs could also mitigate job losses, as would more government action. Or there's always OpenAI CEO Sam Altman's solution: universal basic income, though that will come with plenty of issues of its own.
Masthead: kate.sade
Dragon Quest I & II return in HD-2D remakes nearly four decades later
What just happened? Square Enix is now accepting pre-orders for HD-2D remakes of Dragon Quest I & II. The games will be sold together as a bundle and are being offered in physical and digital editions across a range of platforms. The announcement comes on the 39th anniversary of the franchise.
The first two games in the Dragon Quest trilogy are finally getting an HD-2D remake, joining the third in the series – and you'll be able to play both of them later this year.
Dragon Quest I and II take place after the events of the third game. The first, which was titled Dragon Warrior when localized for North America, arrived way back in 1986 for the Nintendo Entertainment System. A sequel followed a year later, but it'd be another five years before the third game made its way to North America.
Square Enix describes the remakes as a stunning reimagining of the beloved masterpieces and the narrative beginning of the "Erdrick Trilogy." Indeed, both look gorgeous in the media samples and teaser trailer shared online, and they honor the legacy the games helped establish in the console RPG genre so many years ago.
Pricing is set at $59.99 for the standard edition bundle, and anyone who pre-orders is entitled to a collection of in-game items, including a set of elevating shoes and seeds that grant varying abilities. Players with existing save data for the HD-2D remake of Dragon Quest III will receive additional in-game bonus material, we're told, but note that your save data will need to be tied to the same account you play the new games on.
Interested parties can pre-order digital copies of the first two games for PlayStation, Xbox, or PC via Steam. Physical editions will also be available for the Nintendo Switch, PlayStation 5, and Xbox Series X. All are due out on October 30, 2025.
Should you need it, the third game in the series is also still available and will set you back $39.99. In total, expect to spend around $100 for the full trilogy.
SteamOS significantly improves the performance and battery life of the Lenovo Legion Go S
What just happened? Lenovo showcased two versions of its Legion Go S gaming handheld at CES 2025 in January. While the Windows model launched in February, the SteamOS version (with nearly identical hardware specs) has only just started shipping. Benchmarks released by popular YouTuber Dave Lee suggest that the latter is noticeably faster and more battery-efficient than the Windows version.
Tests conducted by Dave2D show that Cyberpunk 2077, Doom Eternal, and The Witcher 3 benefit the most from Valve's Linux-based operating system. Frame rates also improve in Helldivers 2, although the gains are less dramatic.
In Cyberpunk 2077, the Windows model could only hit 46 fps at low-to-medium settings, while the SteamOS version sees a noticeable jump to 59 fps. The Witcher 3 shows similar results, going from 66 fps on Windows to 76 fps on SteamOS, and Doom Eternal climbs from 66 fps on Windows to 75 fps on SteamOS. The only exception is Spider-Man 2, which actually drops a frame on SteamOS.
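To put those gains into perspective, here's a quick back-of-the-envelope calculation of the percentage uplift SteamOS delivers in each title, using the frame rates quoted above (the script is just an illustrative sketch, not part of Dave2D's testing methodology):

# Percentage uplift of SteamOS over Windows on the Legion Go S,
# using the frame rates quoted in the article.
results = {
    "Cyberpunk 2077": (46, 59),
    "The Witcher 3": (66, 76),
    "Doom Eternal": (66, 75),
}

for game, (windows_fps, steamos_fps) in results.items():
    uplift = (steamos_fps - windows_fps) / windows_fps * 100
    print(f"{game}: {windows_fps} -> {steamos_fps} fps ({uplift:+.1f}%)")

That works out to roughly a 28% gain in Cyberpunk 2077 and around 14 – 15% in the other two titles – sizeable, but not universal, as the Spider-Man 2 result shows.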
Battery life on SteamOS is also vastly superior to Windows, despite both devices sporting the same 55Wh battery.
Lee believes this is because Linux doesn't carry the same background tasks and telemetry that Windows 11 does. Another factor aiding the improved battery life and overall experience of the handheld is SteamOS's well-optimized sleep and suspend functionality.
Interestingly, the Legion Go S with SteamOS also offers better performance than the Steam Deck in most games, though it requires more wattage to achieve the higher frame rates.
While the Lenovo device cranks up the power draw to 40 watts when plugged in, the Steam Deck tops out at just 15 watts.
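Raw frame rates only tell half the story, though; efficiency is better captured by frames per watt. The sketch below illustrates the idea using the 40 W and 15 W power ceilings mentioned above. Note that the Steam Deck frame rate here is a placeholder, since the video's per-game Steam Deck numbers aren't quoted in this article, and the calculation assumes each device was running at its power ceiling:

# Frames-per-watt comparison sketch. The power ceilings come from the article;
# the Steam Deck frame rate is a placeholder for illustration only.
def fps_per_watt(fps: float, watts: float) -> float:
    return fps / watts

devices = {
    "Legion Go S (SteamOS)": {"fps": 59, "watts": 40},  # Cyberpunk 2077 figure quoted above
    "Steam Deck": {"fps": 30, "watts": 15},             # hypothetical frame rate
}

for name, d in devices.items():
    print(f"{name}: {fps_per_watt(d['fps'], d['watts']):.2f} fps/W")

Under these placeholder numbers, the Steam Deck still comes out ahead on efficiency, which lines up with Lee's observation that the Legion Go S needs considerably more power to hit its higher frame rates.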
These tests support earlier claims that Windows was holding back the Legion Go S from becoming a serious Steam Deck challenger. With the SteamOS version now available, it becomes the first true competitor to Valve's popular gaming handheld.
Price is another factor in SteamOS's favor. The Windows-powered Legion Go S costs over $700, making it significantly more expensive than the SteamOS model, which is listed at $600 at Best Buy.
While we'll have to wait for full reviews, early benchmarks suggest that the SteamOS Legion Go S is clearly the one to go for if you've got your heart set on a new Lenovo gaming handheld.
Charts and benchmarks by Dave2D
Which matters more to you in a PC gaming handheld?
More stories