• Reclaiming Control: Digital Sovereignty in 2025

    Sovereignty has mattered since the invention of the nation state—defined by borders, laws, and taxes that apply within and without. While many have tried to define it, the core idea remains: nations or jurisdictions seek to stay in control, usually to the benefit of those within their borders.
    Digital sovereignty is a relatively new concept, also difficult to define but straightforward to understand. Data and applications don’t understand borders unless they are specified in policy terms, as coded into the infrastructure.
    The World Wide Web had no such restrictions at its inception. Communitarian groups such as the Electronic Frontier Foundation, service providers and hyperscalers, non-profits and businesses all embraced a model that suggested data would look after itself.
    But data won’t look after itself, for several reasons. First, data is massively out of control. We generate more of it all the time, and for at least two or three decades (according to historical surveys I’ve run), most organizations haven’t fully understood their data assets. This creates inefficiency and risk—not least, widespread vulnerability to cyberattack.
    Risk is probability times impact—and right now, the probabilities have shot up. Invasions, tariffs, political tensions, and more have brought new urgency. This time last year, the idea of switching off another country’s IT systems was not on the radar. Now we’re seeing it happen—including the U.S. government blocking access to services overseas.
    Digital sovereignty isn’t just a European concern, though it is often framed as such. In South America for example, I am told that sovereignty is leading conversations with hyperscalers; in African countries, it is being stipulated in supplier agreements. Many jurisdictions are watching, assessing, and reviewing their stance on digital sovereignty.
    As the adage goes: a crisis is a problem with no time left to solve it. Digital sovereignty was a problem in waiting—but now it’s urgent. It’s gone from being an abstract ‘right to sovereignty’ to becoming a clear and present issue, in government thinking, corporate risk and how we architect and operate our computer systems.
    What does the digital sovereignty landscape look like today?
    Much has changed since this time last year. Unknowns remain, but much that was unclear is now starting to solidify. Terminology is clearer – for example, we now talk about classification and localisation rather than generic concepts.
    We’re seeing a shift from theory to practice. Governments and organizations are putting policies in place that simply didn’t exist before. For example, some countries are seeing “in-country” as a primary goal, whereas others (the UK included) are adopting a risk-based approach based on trusted locales.
    We’re also seeing a shift in risk priorities. From a risk standpoint, the classic triad of confidentiality, integrity, and availability is at the heart of the digital sovereignty conversation. Historically, the focus has been much more on confidentiality, driven by concerns about the US CLOUD Act: essentially, can foreign governments see my data?
    This year however, availability is rising in prominence, due to geopolitics and very real concerns about data accessibility in third countries. Integrity is being talked about less from a sovereignty perspective, but is no less important as a cybercrime target—ransomware and fraud being two clear and present risks.
    Thinking more broadly, digital sovereignty is not just about data, or even intellectual property, but also the brain drain. Countries don’t want all their brightest young technologists leaving university only to end up in California or some other, more attractive country. They want to keep talent at home and innovate locally, to the benefit of their own GDP.
    How Are Cloud Providers Responding?
    Hyperscalers are playing catch-up, still looking for ways to satisfy the letter of the law whilst ignoring (in the French sense) its spirit. It’s not enough for Microsoft or AWS to say they will do everything they can to protect a jurisdiction’s data if they are already legally obliged to do the opposite. Legislation, in this case US legislation, calls the shots—and we all know just how fragile this is right now.
    We see hyperscaler progress where they offer technology to be locally managed by a third party, rather than themselves. For example, Google’s partnership with Thales, or Microsoft with Orange, both in France (Microsoft has a similar arrangement in Germany). However, these are point solutions, not part of a general standard. Meanwhile, AWS’ recent announcement about creating a local entity doesn’t solve the problem of US over-reach, which remains a core issue.
    Non-hyperscaler providers and software vendors have an increasingly significant play: Oracle and HPE offer solutions that can be deployed and managed locally for example; Broadcom/VMware and Red Hat provide technologies that locally situated, private cloud providers can host. Digital sovereignty is thus a catalyst for a redistribution of “cloud spend” across a broader pool of players.
    What Can Enterprise Organizations Do About It?
    First, see digital sovereignty as a core element of data and application strategy. For a nation, sovereignty means having solid borders, control over IP, GDP, and so on. That’s the goal for corporations as well—control, self-determination, and resilience.
    If sovereignty isn’t seen as an element of strategy, it gets pushed down into the implementation layer, leading to inefficient architectures and duplicated effort. Far better to decide up front what data, applications and processes need to be treated as sovereign, and to define an architecture to support that.
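    To make “deciding up front” a little more concrete, here is a minimal Python sketch of a sovereignty-aware placement policy that sits above any particular provider. The classification names, locales and workload details are hypothetical illustrations, not taken from the article or from any standard.

```python
# Hypothetical sketch of a placement policy decided at the strategy layer,
# before any provisioning choices are made. Classification names, locales
# and the home country are illustrative assumptions, not a standard.

from dataclasses import dataclass

# Placement rules per classification level (most restrictive first).
PLACEMENT_POLICY = {
    "sovereign-critical": {"in_country_only": True,  "trusted_locales": {"FR"}},
    "regulated":          {"in_country_only": False, "trusted_locales": {"FR", "DE", "NL"}},
    "general":            {"in_country_only": False, "trusted_locales": None},  # any locale
}

@dataclass
class Workload:
    name: str
    classification: str
    candidate_regions: list[str]  # regions offered by the providers under consideration

def allowed_regions(workload: Workload, home_country: str = "FR") -> list[str]:
    """Return the candidate regions this workload may use under the policy."""
    rule = PLACEMENT_POLICY[workload.classification]
    if rule["in_country_only"]:
        return [r for r in workload.candidate_regions if r == home_country]
    if rule["trusted_locales"] is None:
        return list(workload.candidate_regions)
    return [r for r in workload.candidate_regions if r in rule["trusted_locales"]]

print(allowed_regions(Workload("hr-records", "regulated", ["FR", "US", "DE"])))
# -> ['FR', 'DE']
```

    The point of such a sketch is that the rule set lives at the strategy layer, so it can be applied consistently to every provisioning decision that follows rather than being reinvented per project.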
    This sets the scene for making informed provisioning decisions. Your organization may have made some big bets on key vendors or hyperscalers, but multi-platform thinking increasingly dominates: multiple public and private cloud providers, with integrated operations and management. Sovereign cloud becomes one element of a well-structured multi-platform architecture.
    It is not cost-neutral to deliver on sovereignty, but the overall business value should be tangible. A sovereignty initiative should bring clear advantages, not just for itself, but through the benefits that come with better control, visibility, and efficiency.
    Knowing where your data is, understanding which data matters, managing it efficiently so you’re not duplicating or fragmenting it across systems—these are valuable outcomes. In addition, ignoring these questions can lead to non-compliance or be outright illegal. Even if we don’t use terms like ‘sovereignty’, organizations need a handle on their information estate.
    Organizations shouldn’t assume that everything cloud-based needs to be sovereign; rather, they should build strategies and policies based on data classification, prioritization and risk. Build that picture and you can solve for the highest-priority items first—the data with the strongest classification and greatest risk. That process alone takes care of 80–90% of the problem space, and avoids turning sovereignty into yet another problem that solves nothing.
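    As a rough illustration of that prioritization step, the sketch below applies the earlier “risk is probability times impact” framing to a small data inventory and ranks assets so the strongest-classified, highest-risk items surface first. The asset names and scores are invented for the example.

```python
# Illustrative only: rank data assets by risk = probability x impact, then
# focus sovereignty work on the top slice. All scores are made-up examples.

assets = [
    # (name, classification, probability of compromise/loss, business impact 1-10)
    ("customer-pii",     "sovereign-critical", 0.30, 9),
    ("design-ip",        "regulated",          0.20, 8),
    ("marketing-assets", "general",            0.40, 2),
    ("financial-ledger", "sovereign-critical", 0.15, 10),
    ("public-docs",      "general",            0.50, 1),
]

def risk(probability: float, impact: float) -> float:
    return probability * impact

ranked = sorted(assets, key=lambda a: risk(a[2], a[3]), reverse=True)

# Tackle the highest-risk items first; the long tail can wait.
for name, classification, p, i in ranked:
    print(f"{name:18s} {classification:20s} risk={risk(p, i):4.1f}")
```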
    Where to start? Look after your own organization first
    Sovereignty and systems thinking go hand in hand: it’s all about scope. In enterprise architecture or business design, the biggest mistake is boiling the ocean—trying to solve everything at once.
    Instead, focus on your own sovereignty. Worry about your own organization, your own jurisdiction. Know where your own borders are. Understand who your customers are, and what their requirements are. For example, if you’re a manufacturer selling into specific countries—what do those countries require? Solve for that, not for everything else. Don’t try to plan for every possible future scenario.
    Focus on what you have, what you’re responsible for, and what you need to address right now. Classify and prioritise your data assets based on real-world risk. Do that, and you’re already more than halfway toward solving digital sovereignty—with all the efficiency, control, and compliance benefits that come with it.
    Digital sovereignty isn’t just regulatory, but strategic. Organizations that act now can reduce risk, improve operational clarity, and prepare for a future based on trust, compliance, and resilience.
    The post Reclaiming Control: Digital Sovereignty in 2025 appeared first on Gigaom.
  • TSMC's 2nm wafer prices hit $30,000 as SRAM yields reportedly hit 90%

    In context: TSMC has steadily raised the prices of its most advanced semiconductor process nodes over the past several years – so much so that one analysis suggests the cost per transistor hasn't decreased in over a decade. Further price hikes, driven by tariffs and rising development costs, are reinforcing the notion that Moore's Law is truly dead.
    The Commercial Times reports that TSMC's upcoming N2 2nm semiconductors will cost $30,000 per wafer, a roughly 66% increase over the company's 3nm chips. Future nodes are expected to be even more expensive and likely reserved for the largest manufacturers.
    TSMC has justified these price increases by citing the massive cost of building 2nm fabrication plants, which can reach up to $725 million. According to United Daily News, major players such as Apple, AMD, Qualcomm, Broadcom, and Nvidia are expected to place orders before the end of the year despite the higher prices, potentially bringing TSMC's 2nm Arizona fab to full capacity.
    Unsurprisingly, Apple is getting first dibs. The A20 processor in next year's iPhone 18 Pro is expected to be the first chip based on TSMC's N2 process. Intel's Nova Lake processors, targeting desktops and possibly high-end laptops, are also slated to use N2 and are expected to launch next year.
    Earlier reports indicated that yield rates for TSMC's 2nm process reached 60% last year and have since improved. New data suggests that 256Mb SRAM yield rates now exceed 90%. Trial production is likely already underway, with mass production scheduled to begin later this year.
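    To put the reported wafer price and yields in perspective, here is a rough back-of-the-envelope calculation using the common dies-per-wafer approximation. The 100 mm² die size is an assumed example, not a figure from the report.

```python
import math

# Rough, illustrative economics only. The wafer price and ~60% die yield come
# from the article; the die area is an assumed example, not a reported figure.
WAFER_PRICE_USD = 30_000
WAFER_DIAMETER_MM = 300
DIE_AREA_MM2 = 100       # hypothetical mid-sized die
DIE_YIELD = 0.60         # earlier reported yield for the 2nm process

def dies_per_wafer(diameter_mm: float, die_area_mm2: float) -> int:
    """Classic approximation: usable wafer area minus edge losses."""
    radius = diameter_mm / 2
    return int(math.pi * radius**2 / die_area_mm2
               - math.pi * diameter_mm / math.sqrt(2 * die_area_mm2))

gross = dies_per_wafer(WAFER_DIAMETER_MM, DIE_AREA_MM2)
good = int(gross * DIE_YIELD)
print(f"gross dies: {gross}, good dies: {good}, "
      f"cost per good die: ${WAFER_PRICE_USD / good:,.2f}")
# roughly 640 gross dies and ~384 good dies, or about $78 per good die
```

    Under these illustrative assumptions, a $30,000 wafer works out to roughly $78 per good die before packaging and test, which helps explain why the most advanced nodes tend to be reserved for the largest chip designers.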
    With tape-outs for 2nm-based designs surpassing previous nodes at the same development stage, TSMC aims to produce tens of thousands of wafers by the end of 2025.

    TSMC also plans to follow N2 with N2P and N2X in the second half of next year. N2P is expected to offer an 18% performance boost over N3E at the same power level and 36% greater energy efficiency at the same speed, along with significantly higher logic density. N2X, slated for mass production in 2027, will increase maximum clock frequencies by 10%.
    As semiconductor geometries continue to shrink, power leakage becomes a major concern. TSMC's 2nm nodes will address this issue with gate-all-around (GAA) transistor architectures, enabling more precise control of electrical currents.
    Beyond 2nm lies the Angstrom era, where TSMC will implement backside power delivery to further enhance performance. Future process nodes like A16 (1.6nm) and A14 (1.4nm) could cost up to $45,000 per wafer.
    Meanwhile, Intel is aiming to outpace TSMC's roadmap. The company recently began risk production of its 18A node, which also features gate-all-around transistors and backside power delivery. These chips are expected to debut later this year in Intel's upcoming laptop CPUs, codenamed Panther Lake.
  • What VMware’s licensing crackdown reveals about control and risk 

    Over the past few weeks, VMware customers holding onto their perpetual licenses, which are often unsupported and in limbo, have reportedly begun receiving formal cease-and-desist letters from Broadcom. The message is as blunt as it is unsettling: your support contract has expired, and you are to immediately uninstall any updates, patches, or enhancements released since that expiration date. Not only that, but audits could follow, with the possibility of “enhanced damages” for breach of contract.
    This is a sharp escalation in an effort to push perpetual license holders toward VMware’s new subscription-only model. For many, it signals the end of an era where critical infrastructure software could be owned, maintained, and supported on long-term, stable terms.
    Now, even those who bought VMware licenses outright are being told that support access is off the table unless they sign on to the new subscription regime. As a result, enterprises are being forced to make tough decisions about how they manage and support one of the most foundational layers of their IT environments.

    VMware isn’t just another piece of enterprise software. It’s the plumbing. The foundation. The layer everything else runs on top of, which is precisely why many CIOs flinch at the idea of running unsupported. The potential risk is too great. A vulnerability or failure in your virtual infrastructure isn’t the same as a bug in a CRM. It’s a systemic weakness. It touches everything.
    This technical risk is, without question, the biggest barrier to any organization considering support options outside of VMware’s official offering. And it’s a valid concern.  But technical risk isn’t black and white. It varies widely depending on version, deployment model, network architecture, and operational maturity. A tightly managed and stable VMware environment running a mature release with minimal exposure doesn’t carry the same risk profile as an open, multi-tenant deployment on a newer build.

    The prevailing assumption is that support equals security—and that operating unsupported equals exposure. But this relationship is more complex than it appears. In most enterprise environments, security is not determined by whether a patch is available. It’s determined by how well the environment is configured, managed, and monitored.
    Patches are not applied instantly. Risk assessments, integration testing, and change control processes introduce natural delays. And in many cases, security gaps arise not from missing patches but from misconfigurations: exposed management interfaces, weak credentials, overly permissive access. An unpatched environment, properly maintained and reviewed, can be significantly more secure than a patched one with poor hygiene. Support models that focus on proactive security—through vulnerability analysis, environment-specific impact assessments, and mitigation strategies—offer a different but equally valid form of protection. They don’t rely on patch delivery alone. They consider how a vulnerability behaves in the attack chain, whether it’s exploitable, and what compensating controls are available. 
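    As a sketch of what that kind of environment-specific assessment might look like, the snippet below scores a finding against exposure and compensating controls rather than treating “unpatched” as automatically critical. The fields and weights are assumptions for illustration, not any support provider’s actual methodology.

```python
# Hypothetical triage sketch: weigh how urgent an unpatched finding really is
# by combining base severity with environment-specific factors. The weights
# and field names are illustrative assumptions, not a vendor's real model.

from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    base_severity: float        # e.g. CVSS base score, 0-10
    exploit_available: bool     # is a public exploit known?
    interface_exposed: bool     # is the management interface reachable beyond an admin VLAN?
    compensating_controls: int  # count of mitigations (segmentation, MFA, monitoring, ...)

def triage_score(f: Finding) -> float:
    score = f.base_severity
    score *= 1.5 if f.exploit_available else 0.8
    score *= 1.5 if f.interface_exposed else 0.6
    score *= max(0.3, 1.0 - 0.2 * f.compensating_controls)
    return round(min(score, 10.0), 1)

finding = Finding("CVE-XXXX-YYYY", base_severity=8.8, exploit_available=False,
                  interface_exposed=False, compensating_controls=3)
print(triage_score(finding))  # well below the raw 8.8, reflecting actual exposure
```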

    Read more about VMware security

    Hacking contest exposes VMware security: In what has been described as a historical first, hackers in Berlin have been able to demo successful attacks on the ESXi hypervisor.
    No workaround leads to more pain for VMware users: There are patches for the latest batch of security alerts from Broadcom, but VMware users on perpetual licences may not have access.

    This kind of tailored risk management is especially important now, as vendor support for older VMware versions diminishes. Many reported vulnerabilities relate to newer product components or bundled services, not the core virtualization stack. The perception of rising security risk needs to be balanced against the stability and maturity of the versions in question. In other words, not all unsupported deployments are created equal.

    Some VMware environments—particularly older versions like vSphere 5.x or 6.x—are already beyond the range of vendor patching. In these cases, the transition to unsupported status may be more symbolic than substantive. The risk profile has not meaningfully changed.  Others, particularly organisations operating vSphere 7 or 8 without an active support contract, face a more complex challenge. Some critical security patches remain accessible, depending on severity and version, but the margin of certainty is shrinking.  
    These are the cases where enterprises are increasingly turning to alternative support models to bridge the gap—ensuring continuity, maintaining compliance, and retaining access to skilled technical expertise.

    Third-party support is sometimes seen as a temporary fix—a way to buy time while organizations figure out their long-term plans. And it can serve that purpose well. But increasingly, it’s also being recognized as a strategic choice in its own right: a long-term solution for enterprises that want to maintain operational stability with a reliable support partner while retaining control over their virtualization roadmap. What distinguishes third-party support in this context isn’t just cost control; it’s methodology.
    Risk is assessed holistically, identifying which vulnerabilities truly matter, what can be addressed through configuration, and when escalation is genuinely required. This approach recognises that most enterprises aren’t chasing bleeding-edge features. They want to run stable, well-understood environments that don’t change unpredictably. Third-party support helps them do exactly that, without being forced into a rapid, costly migration or a subscription contract that may not align with their business needs. 
    Crucially, it enables organisations to move on their own timeline.
    Much of the conversation around unsupported VMware environments focuses on technical risk. But the longer-term threat may be strategic. The end of perpetual licensing, the sharp rise in subscription pricing, and now the legal enforcement of support boundaries all point to a much bigger problem: a loss of control over infrastructure strategy.
    Vendor-imposed timelines, licensing models, and audit policies are increasingly dictating how organizations use the very software they once owned outright. Third-party support doesn’t eliminate risk—nothing can. But it redistributes and controls it. It gives enterprises more agency over when and how they migrate, how they manage updates, and where they invest. In a landscape shaped by vendor agendas, that independence is increasingly critical. 
    Broadcom’s cease-and-desist letters represent a new phase in the relationship between software vendors and customers—one defined not by collaboration, but by contractual enforcement. And for VMware customers still clinging to the idea of “owning” their infrastructure, it’s a rude awakening: support is no longer optional, and perpetual is no longer forever. Organizations now face three paths: accept the subscription model, attempt a rapid migration to an alternative platform, or find a support model that gives them the stability to decide their future on their own terms. 
    For many, the third option is the only one that balances operational security with strategic flexibility. 
    The question now isn’t whether unsupported infrastructure is risky. The question is whether the greater risk is allowing someone else to dictate what happens next. 
    #what #vmwares #licensing #crackdown #reveals
    What VMware’s licensing crackdown reveals about control and risk 
    Over the past few weeks, VMware customers holding onto their perpetual licenses, which are often unsupported and in limbo, have reportedly begun receiving formal cease-and-desist letters from Broadcom. The message is as blunt as it is unsettling: your support contract has expired, and you are to immediately uninstall any updates, patches, or enhancements released since that expiration date. Not only that, but audits could follow, with the possibility of “enhanced damages” for breach of contract. This is a sharp escalation in an effort to push perpetual license holders toward VMware’s new subscription-only model. For many, it signals the end of an era where critical infrastructure software could be owned, maintained, and supported on long-term, stable terms. Now, even those who bought VMware licenses outright are being told that support access is off the table unless they sign on to the new subscription regime. As a result, enterprises are being forced to make tough decisions about how they manage and support one of the most foundational layers of their IT environments. VMware isn’t just another piece of enterprise software. It’s the plumbing. The foundation. The layer everything else runs on top of, which is precisely why many CIOs flinch at the idea of running unsupported. The potential risk is too great. A vulnerability or failure in your virtual infrastructure isn’t the same as a bug in a CRM. It’s a systemic weakness. It touches everything. This technical risk is, without question, the biggest barrier to any organization considering support options outside of VMware’s official offering. And it’s a valid concern.  But technical risk isn’t black and white. It varies widely depending on version, deployment model, network architecture, and operational maturity. A tightly managed and stable VMware environment running a mature release with minimal exposure doesn’t carry the same risk profile as an open, multi-tenant deployment on a newer build. The prevailing assumption is that support equals security—and that operating unsupported equals exposure. But this relationship is more complex than it appears. In most enterprise environments, security is not determined by whether a patch is available. It’s determined by how well the environment is configured, managed, and monitored. Patches are not applied instantly. Risk assessments, integration testing, and change control processes introduce natural delays. And in many cases, security gaps arise not from missing patches but from misconfigurations: exposed management interfaces, weak credentials, overly permissive access. An unpatched environment, properly maintained and reviewed, can be significantly more secure than a patched one with poor hygiene. Support models that focus on proactive security—through vulnerability analysis, environment-specific impact assessments, and mitigation strategies—offer a different but equally valid form of protection. They don’t rely on patch delivery alone. They consider how a vulnerability behaves in the attack chain, whether it’s exploitable, and what compensating controls are available.  about VMware security Hacking contest exposes VMware security: In what has been described as a historical first, hackers in Berlin have been able to demo successful attacks on the ESXi hypervisor. No workaround leads to more pain for VMware users: There are patches for the latest batch of security alerts from Broadcom, but VMware users on perpetual licences may not have access. 
This kind of tailored risk management is especially important now, as vendor support for older VMware versions diminishes. Many reported vulnerabilities relate to newer product components or bundled services, not the core virtualization stack. The perception of rising security risk needs to be balanced against the stability and maturity of the versions in question. In other words, not all unsupported deployments are created equal. Some VMware environments—particularly older versions like vSphere 5.x or 6.x—are already beyond the range of vendor patching. In these cases, the transition to unsupported status may be more symbolic than substantive. The risk profile has not meaningfully changed.  Others, particularly organisations operating vSphere 7 or 8 without an active support contract, face a more complex challenge. Some critical security patches remain accessible, depending on severity and version, but the margin of certainty is shrinking.   These are the cases where enterprises are increasingly turning to alternative support models to bridge the gap—ensuring continuity, maintaining compliance, and retaining access to skilled technical expertise. Third-party support is sometimes seen as a temporary fix—a way to buy time while organizations figure out their long-term plans. And it can serve that purpose well. But increasingly, it’s also being recognized as a strategic choice in its own right: a long-term solution for enterprises that want to maintain operational stability with a reliable support partner while retaining control over their virtualization roadmap.What distinguishes third-party support in this context isn’t just cost control, it’s methodology.   Risk is assessed holistically, identifying which vulnerabilities truly matter, what can be addressed through configuration, and when escalation is genuinely required. This approach recognises that most enterprises aren’t chasing bleeding-edge features. They want to run stable, well-understood environments that don’t change unpredictably. Third-party support helps them do exactly that, without being forced into a rapid, costly migration or a subscription contract that may not align with their business needs.  Crucially, it enables organisations to move on their own timeline. Much of the conversation around unsupported VMware environments focuses on technical risk. But the longer-term threat may be strategic. The end of perpetual licensing, the sharp rise in subscription pricing, and now the legal enforcement of support boundaries all points to a much bigger problem: a loss of control over infrastructure strategy.  Vendor-imposed timelines, licensing models, and audit policies are increasingly dictating how organizations use the very software they once owned outright. Third-party support doesn’t eliminate risk—nothing can. But it redistributes and controls it. It gives enterprises more agency over when and how they migrate, how they manage updates, and where they invest. In a landscape shaped by vendor agendas, that independence is increasingly critical.  Broadcom’s cease-and-desist letters represent a new phase in the relationship between software vendors and customers—one defined not by collaboration, but by contractual enforcement. And for VMware customers still clinging to the idea of “owning” their infrastructure, it’s a rude awakening: support is no longer optional, and perpetual is no longer forever. 
Organizations now face three paths: accept the subscription model, attempt a rapid migration to an alternative platform, or find a support model that gives them the stability to decide their future on their own terms.  For many, the third option is the only one that balances operational security with strategic flexibility.  The question now isn’t whether unsupported infrastructure is risky. The question is whether the greater risk is allowing someone else to dictate what happens next.  #what #vmwares #licensing #crackdown #reveals
    WWW.COMPUTERWEEKLY.COM
    What VMware’s licensing crackdown reveals about control and risk 
    Over the past few weeks, VMware customers holding onto their perpetual licenses, which are often unsupported and in limbo, have reportedly begun receiving formal cease-and-desist letters from Broadcom. The message is as blunt as it is unsettling: your support contract has expired, and you are to immediately uninstall any updates, patches, or enhancements released since that expiration date. Not only that, but audits could follow, with the possibility of “enhanced damages” for breach of contract. This is a sharp escalation in an effort to push perpetual license holders toward VMware’s new subscription-only model. For many, it signals the end of an era where critical infrastructure software could be owned, maintained, and supported on long-term, stable terms. Now, even those who bought VMware licenses outright are being told that support access is off the table unless they sign on to the new subscription regime. As a result, enterprises are being forced to make tough decisions about how they manage and support one of the most foundational layers of their IT environments. VMware isn’t just another piece of enterprise software. It’s the plumbing. The foundation. The layer everything else runs on top of, which is precisely why many CIOs flinch at the idea of running unsupported. The potential risk is too great. A vulnerability or failure in your virtual infrastructure isn’t the same as a bug in a CRM. It’s a systemic weakness. It touches everything. This technical risk is, without question, the biggest barrier to any organization considering support options outside of VMware’s official offering. And it’s a valid concern.  But technical risk isn’t black and white. It varies widely depending on version, deployment model, network architecture, and operational maturity. A tightly managed and stable VMware environment running a mature release with minimal exposure doesn’t carry the same risk profile as an open, multi-tenant deployment on a newer build. The prevailing assumption is that support equals security—and that operating unsupported equals exposure. But this relationship is more complex than it appears. In most enterprise environments, security is not determined by whether a patch is available. It’s determined by how well the environment is configured, managed, and monitored. Patches are not applied instantly. Risk assessments, integration testing, and change control processes introduce natural delays. And in many cases, security gaps arise not from missing patches but from misconfigurations: exposed management interfaces, weak credentials, overly permissive access. An unpatched environment, properly maintained and reviewed, can be significantly more secure than a patched one with poor hygiene. Support models that focus on proactive security—through vulnerability analysis, environment-specific impact assessments, and mitigation strategies—offer a different but equally valid form of protection. They don’t rely on patch delivery alone. They consider how a vulnerability behaves in the attack chain, whether it’s exploitable, and what compensating controls are available.  Read more about VMware security Hacking contest exposes VMware security: In what has been described as a historical first, hackers in Berlin have been able to demo successful attacks on the ESXi hypervisor. No workaround leads to more pain for VMware users: There are patches for the latest batch of security alerts from Broadcom, but VMware users on perpetual licences may not have access. 
This kind of tailored risk management is especially important now, as vendor support for older VMware versions diminishes. Many reported vulnerabilities relate to newer product components or bundled services, not the core virtualization stack. The perception of rising security risk needs to be balanced against the stability and maturity of the versions in question. In other words, not all unsupported deployments are created equal. Some VMware environments—particularly older versions like vSphere 5.x or 6.x—are already beyond the range of vendor patching. In these cases, the transition to unsupported status may be more symbolic than substantive. The risk profile has not meaningfully changed.  Others, particularly organisations operating vSphere 7 or 8 without an active support contract, face a more complex challenge. Some critical security patches remain accessible, depending on severity and version, but the margin of certainty is shrinking.   These are the cases where enterprises are increasingly turning to alternative support models to bridge the gap—ensuring continuity, maintaining compliance, and retaining access to skilled technical expertise. Third-party support is sometimes seen as a temporary fix—a way to buy time while organizations figure out their long-term plans. And it can serve that purpose well. But increasingly, it’s also being recognized as a strategic choice in its own right: a long-term solution for enterprises that want to maintain operational stability with a reliable support partner while retaining control over their virtualization roadmap.What distinguishes third-party support in this context isn’t just cost control, it’s methodology.   Risk is assessed holistically, identifying which vulnerabilities truly matter, what can be addressed through configuration, and when escalation is genuinely required. This approach recognises that most enterprises aren’t chasing bleeding-edge features. They want to run stable, well-understood environments that don’t change unpredictably. Third-party support helps them do exactly that, without being forced into a rapid, costly migration or a subscription contract that may not align with their business needs.  Crucially, it enables organisations to move on their own timeline. Much of the conversation around unsupported VMware environments focuses on technical risk. But the longer-term threat may be strategic. The end of perpetual licensing, the sharp rise in subscription pricing, and now the legal enforcement of support boundaries all points to a much bigger problem: a loss of control over infrastructure strategy.  Vendor-imposed timelines, licensing models, and audit policies are increasingly dictating how organizations use the very software they once owned outright. Third-party support doesn’t eliminate risk—nothing can. But it redistributes and controls it. It gives enterprises more agency over when and how they migrate, how they manage updates, and where they invest. In a landscape shaped by vendor agendas, that independence is increasingly critical.  Broadcom’s cease-and-desist letters represent a new phase in the relationship between software vendors and customers—one defined not by collaboration, but by contractual enforcement. And for VMware customers still clinging to the idea of “owning” their infrastructure, it’s a rude awakening: support is no longer optional, and perpetual is no longer forever. 
Organizations now face three paths: accept the subscription model, attempt a rapid migration to an alternative platform, or find a support model that gives them the stability to decide their future on their own terms.  For many, the third option is the only one that balances operational security with strategic flexibility.  The question now isn’t whether unsupported infrastructure is risky. The question is whether the greater risk is allowing someone else to dictate what happens next. 
  • Microsoft reveals unexpected way that Windows 11 clean install can boost your PC performance



    Sayan Sen, Neowin (@ssc_combater007) · May 25, 2025 05:10 EDT

    Earlier this year, in March, we covered an interesting Microsoft recommendation for new Windows 11 PCs. The company highlighted how its Smart App Control feature can keep PCs more secure. However, we noted that the feature is only available with clean installations.
    For those wondering, Microsoft debuted Smart App Control (SAC) with the release of Windows 11 version 22H2 in September 2022. And in a new article, Microsoft has shared several advantages of it over traditional antivirus software.
    One of those, according to Microsoft, is the inherent advantage Smart App Control offers in terms of performance over the typical AV application. The tech giant explains how constant background scanning by the latter can bog down devices. Microsoft writes:

    An advantage of Smart App Control is its lighter impact on your PC’s performance. Since it helps block harmful apps before they can run, there’s no need for constant scanning of active files. This means less strain on your system, so you can keep working or gaming without worrying about slowdowns. Traditional antivirus software, on the other hand, can sometimes use more resources as it scans files and processes continuously.

    The company says this is because Smart App Control is a proactive anti-malware solution rather than a reactive one like traditional antivirus.
    Thus, according to Microsoft, the benefit is twofold: users get better performance and a snappier system, and SAC can also neutralize new threats by picking up suspicious behavior from its machine learning and cloud data. It writes:

    Smart App Control takes a proactive approach, blocking suspicious apps before they get the chance to do any harm. Traditional antivirus, however, is more reactive, responding to threats only after they've been detected on your system. This means traditional antivirus is excellent at identifying and removing known threats, but it may not catch new or sophisticated ones as quickly.

    Regardless of what Microsoft says, though, there are occasional reports of SAC itself impacting performance due to bugs, as a Broadcom support article points out. Curiously, Broadcom also highlights that the Redmond giant provided "no specific guidelines on how to address/remediate such scenarios."
    The discussion is quite relevant given that many users still feel older Windows editions like Windows 8.1 and 8 (releases that remain relatively modern in terms of UI/UX and feature set) are ahead of Windows 11 performance-wise.
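    For readers who want to check whether SAC is active on a given machine, here is a minimal Python sketch. It assumes SAC state is exposed through the registry value commonly cited for it (VerifiedAndReputablePolicyState, with 0 = off, 1 = enforced, 2 = evaluation); treat the location and value meanings as assumptions rather than a documented, stable interface.

```python
# Minimal sketch: read Smart App Control (SAC) state on Windows 11.
# Assumption: SAC state is exposed at the registry value below
# (0 = off, 1 = enforced, 2 = evaluation mode). This path is widely
# cited but not a guaranteed, documented API.
import winreg

SAC_KEY = r"SYSTEM\CurrentControlSet\Control\CI\Policy"
SAC_VALUE = "VerifiedAndReputablePolicyState"

def smart_app_control_state():
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, SAC_KEY) as key:
            state, _ = winreg.QueryValueEx(key, SAC_VALUE)
    except OSError:
        return "unavailable (key or value not present on this system)"
    return {0: "off", 1: "enforced", 2: "evaluation"}.get(state, f"unknown ({state})")

if __name__ == "__main__":
    print("Smart App Control:", smart_app_control_state())
```

    Note that once SAC has been turned off it can only be re-enabled by resetting or reinstalling Windows, which is why Microsoft's recommendation targets clean installations.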

    Source: www.neowin.net
  • Made in India iPhones Will Still Be Cheaper in the US, Even With Donald Trump's 25 Percent Tariff: GTRI Report

    Even if the United States were to impose a 25 per cent tariff on iPhones manufactured in India, the total production cost would still be much lower than manufacturing the devices in the U.S., according to a report by the Global Trade Research Initiative (GTRI). This comes amid a statement by U.S. President Donald Trump threatening to impose 25 per cent tariffs on iPhones if Apple decides to make them in India. However, the GTRI report showed that manufacturing in India remains cost-effective despite such duties.

    The report breaks down the current value chain of a $1,000 (roughly Rs. 83,400) iPhone, which involves contributions from over a dozen countries. Apple retains the largest share of the value, about $450 (roughly Rs. 37,530) per device, through its brand, software, and design. U.S. component makers such as Qualcomm and Broadcom add $80 (roughly Rs. 6,672), while Taiwan contributes $150 (roughly Rs. 12,510) through chip manufacturing. South Korea adds $90 (roughly Rs. 7,506) via OLED screens and memory chips, and Japan supplies components worth $85 (roughly Rs. 7,089), mainly through camera systems. Germany, Vietnam, and Malaysia account for another $45 (roughly Rs. 3,753) through smaller parts.

    GTRI stated that China and India, despite being major players in iPhone assembly, earn only around $30 (roughly Rs. 2,502) per device. This is less than 3 per cent of the total retail price of an iPhone.

    The report argues that manufacturing iPhones in India is still economically viable even if a 25 per cent tariff is applied, mainly because of the sharp difference in labour costs between India and the U.S. In India, assembly workers earn approximately $230 (roughly Rs. 19,182) per month, while in U.S. states like California, labour costs could soar to around $2,900 (roughly Rs. 2,41,860) per month due to minimum wage laws, a 13-fold increase. As a result, assembling an iPhone in India costs about $30 (roughly Rs. 2,502), while the same process in the U.S. would cost around $390 (roughly Rs. 32,526). In addition, Apple gets the benefit of a production-linked incentive (PLI) on iPhone manufacturing in India from the government.

    If Apple were to shift production to the U.S., its profit per iPhone could fall drastically from $450 (roughly Rs. 37,530) to just $60 (roughly Rs. 5,004), unless retail prices are significantly increased. The GTRI report highlighted how global value chains and labour cost differences make India a competitive option for manufacturing, even in the face of potential U.S. trade restrictions.
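    A quick back-of-the-envelope check of the figures above shows why the report reaches its conclusion. The report does not spell out what base the tariff would apply to, so the sketch below shows both a narrow base (India's assembly value-add only) and a deliberately pessimistic broad base (the full retail value); both bases are assumptions made purely for illustration.

```python
# Back-of-the-envelope comparison using the GTRI figures quoted above.
# The tariff base is not specified in the report; both bounds below are
# simplifying assumptions for illustration.
RETAIL_PRICE = 1000   # USD, reference device from the report
ASSEMBLY_INDIA = 30   # per-device assembly cost in India
ASSEMBLY_US = 390     # per-device assembly cost in the US
TARIFF = 0.25         # proposed tariff rate

extra_us_cost = ASSEMBLY_US - ASSEMBLY_INDIA   # extra labour cost of US assembly
tariff_narrow = TARIFF * ASSEMBLY_INDIA        # duty on India's assembly value-add only
tariff_broad = TARIFF * RETAIL_PRICE           # worst case: duty on the full retail value

print(f"Extra cost of assembling in the US:  ${extra_us_cost:.0f} per device")
print(f"25% tariff, narrow base:             ${tariff_narrow:.0f} per device")
print(f"25% tariff, broad base (worst case): ${tariff_broad:.0f} per device")
```

    Even under the pessimistic assumption, the duty ($250) is smaller than the extra labour cost of assembling in the US ($360 per device), which is the core of GTRI's argument.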
    Source: www.gadgets360.com
  • Broadcom could face EU antitrust fines over 'punitive' VMware contract terms

    Editor's take: Broadcom aims to convert every valuable customer into a recurring online subscriber. The company has achieved notable financial success with this approach. However, regulators may soon scrutinize its business practices, raising the possibility of costly antitrust fines that could impact its future growth.
    The European Cloud Competition Observatory (ECCO) is a monitoring group founded by CISPE, a non-profit trade association of European cloud providers. Created as part of CISPE's antitrust settlement with Microsoft, ECCO now has its sights set on Broadcom and its conduct following the acquisition of VMware and its entry into the cloud and virtualization market.
    The observatory recently published a new report following an earlier study of Broadcom's abrupt licensing changes. The findings confirmed the ECCO's previous claims: Broadcom continues to impose harsh, unfair contract terms on European infrastructure providers. Many CISPE members reluctantly accepted the terms, forced by the lack of viable alternatives to VMware.
    The situation has worsened as Broadcom increasingly uses litigation to pressure its partners and customers into signing new agreements. Recently leaked memos reveal the company is sending cease-and-desist letters to VMware perpetual license holders. These letters reportedly demand payment for continued support or face legal consequences.

    Representatives from CISPE held one meeting with Broadcom, but ECCO reports it yielded no progress. The organization highlights a recent formal complaint submitted by VOICE, a German IT association, to the European Commission. VOICE called for an antitrust investigation and more decisive action against Broadcom's harmful practices, with ECCO lending its support.
    The European watchdog group claims Broadcom has done nothing to address complaints from European cloud providers.

    "Unlike Microsoft, Broadcom shows no interest in finding solutions or collaborating with European cloud infrastructure providers," CISPE secretary Francisco Mingorance said.
    The company can boast about its new contracts and financial results all it wants, but these punitive conditions will ultimately threaten the viability of the locked-in VMware ecosystem.
    The ECCO welcomed Brussels authorities' formal antitrust investigation and urged Broadcom to take immediate corrective steps. These include restoring fair business practices, introducing transparent pricing, reopening access to partner programs, and protecting customer privacy. While Broadcom is unlikely to comply, a spokesperson said the company seeks a constructive dialogue with CISPE to support European competitiveness.
    Source: www.techspot.com