• Reclaiming Control: Digital Sovereignty in 2025

    Sovereignty has mattered since the invention of the nation state—defined by borders, laws, and taxes that apply within and without. While many have tried to define it, the core idea remains: nations or jurisdictions seek to stay in control, usually to the benefit of those within their borders.
    Digital sovereignty is a relatively new concept, also difficult to define but straightforward to understand. Data and applications don’t understand borders unless they are specified in policy terms, as coded into the infrastructure.
    The World Wide Web had no such restrictions at its inception. Communitarian groups such as the Electronic Frontier Foundation, service providers and hyperscalers, non-profits and businesses all embraced a model that suggested data would look after itself.
    But data won’t look after itself, for several reasons. First, data is massively out of control. We generate more of it all the time, and for at least two or three decades (according to historical surveys I’ve run), most organizations haven’t fully understood their data assets. This creates inefficiency and risk—not least, widespread vulnerability to cyberattack.
    Risk is probability times impact—and right now, the probabilities have shot up. Invasions, tariffs, political tensions, and more have brought new urgency. This time last year, the idea of switching off another country’s IT systems was not on the radar. Now we’re seeing it happen—including the U.S. government blocking access to services overseas.
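    To make the arithmetic concrete, here is a minimal sketch of the probability-times-impact calculation. The scenario names and numbers are invented for illustration only:

        # Minimal sketch of "risk = probability x impact".
        # Scenario names and numbers are invented for illustration.
        scenarios = {
            # name: (annual probability, impact in $M if it occurs)
            "provider suspends service in-region": (0.01, 40.0),
            "foreign government compels disclosure": (0.05, 10.0),
            "ransomware on unclassified data": (0.20, 5.0),
        }

        for name, (probability, impact) in scenarios.items():
            print(f"{name}: expected loss ~ ${probability * impact:.1f}M/year")

        # The point above: geopolitics has raised the probabilities, so risk
        # rises even where the impact is unchanged.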
    Digital sovereignty isn’t just a European concern, though it is often framed as such. In South America for example, I am told that sovereignty is leading conversations with hyperscalers; in African countries, it is being stipulated in supplier agreements. Many jurisdictions are watching, assessing, and reviewing their stance on digital sovereignty.
    As the adage goes: a crisis is a problem with no time left to solve it. Digital sovereignty was a problem in waiting—but now it’s urgent. It’s gone from being an abstract ‘right to sovereignty’ to becoming a clear and present issue, in government thinking, in corporate risk, and in how we architect and operate our computer systems.
    What does the digital sovereignty landscape look like today?
    Much has changed since this time last year. Unknowns remain, but much of what was unclear then is now starting to solidify. Terminology is clearer – for example, we now talk about classification and localisation rather than generic concepts.
    We’re seeing a shift from theory to practice. Governments and organizations are putting policies in place that simply didn’t exist before. For example, some countries are seeing “in-country” as a primary goal, whereas others (the UK included) are adopting a risk-based approach based on trusted locales.
    We’re also seeing a shift in risk priorities. From a risk standpoint, the classic triad of confidentiality, integrity, and availability is at the heart of the digital sovereignty conversation. Historically, the focus has been much more on confidentiality, driven by concerns about the US CLOUD Act: essentially, can foreign governments see my data?
    This year, however, availability is rising in prominence, due to geopolitics and very real concerns about data accessibility in third countries. Integrity is being talked about less from a sovereignty perspective, but it is no less important as a cybercrime target—ransomware and fraud being two clear and present risks.
    Thinking more broadly, digital sovereignty is not just about data, or even intellectual property, but also the brain drain. Countries don’t want all their brightest young technologists leaving university only to end up in California or some other, more attractive country. They want to keep talent at home and innovate locally, to the benefit of their own GDP.
    How Are Cloud Providers Responding?
    Hyperscalers are playing catch-up, still looking for ways to satisfy the letter of the law whilst ignoring (in the French sense) its spirit. It’s not enough for Microsoft or AWS to say they will do everything they can to protect a jurisdiction’s data if they are already legally obliged to do the opposite. Legislation, in this case US legislation, calls the shots—and we all know just how fragile this is right now.
    We see hyperscaler progress where they offer technology to be locally managed by a third party, rather than by themselves. For example, Google’s partnership with Thales, or Microsoft’s with Orange, both in France (Microsoft has similar in Germany). However, these are point solutions, not part of a general standard. Meanwhile, AWS’ recent announcement about creating a local entity doesn’t solve the problem of US over-reach, which remains a core issue.
    Non-hyperscaler providers and software vendors have an increasingly significant play: Oracle and HPE offer solutions that can be deployed and managed locally for example; Broadcom/VMware and Red Hat provide technologies that locally situated, private cloud providers can host. Digital sovereignty is thus a catalyst for a redistribution of “cloud spend” across a broader pool of players.
    What Can Enterprise Organizations Do About It?
    First, see digital sovereignty as a core element of data and application strategy. For a nation, sovereignty means having solid borders, control over IP, GDP, and so on. That’s the goal for corporations as well—control, self-determination, and resilience.
    If sovereignty isn’t seen as an element of strategy, it gets pushed down into the implementation layer, leading to inefficient architectures and duplicated effort. Far better to decide up front what data, applications, and processes need to be treated as sovereign, and to define an architecture to support that.
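    As a rough illustration of deciding up front, the sketch below maps classification tiers to the locales where data may reside. The tier and locale names are assumptions made for this example, not a standard:

        # Hypothetical sketch: declare sovereignty requirements up front so that
        # placement follows from policy rather than ad hoc implementation choices.
        # Classification tiers and locale names are illustrative assumptions.
        POLICY = {
            "sovereign-strict":  {"in-country"},
            "sovereign-trusted": {"in-country", "trusted-locale"},
            "unrestricted":      {"in-country", "trusted-locale", "anywhere"},
        }

        def allowed_locales(classification: str) -> set[str]:
            # Return the locales where data of this classification may reside.
            return POLICY[classification]

        print(allowed_locales("sovereign-strict"))   # {'in-country'}
        print(allowed_locales("unrestricted"))       # all three locales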
    This sets the scene for making informed provisioning decisions. Your organization may have made some big bets on key vendors or hyperscalers, but multi-platform thinking increasingly dominates: multiple public and private cloud providers, with integrated operations and management. Sovereign cloud becomes one element of a well-structured multi-platform architecture.
    It is not cost-neutral to deliver on sovereignty, but the overall business value should be tangible. A sovereignty initiative should bring clear advantages, not just for itself, but through the benefits that come with better control, visibility, and efficiency.
    Knowing where your data is, understanding which data matters, managing it efficiently so you’re not duplicating or fragmenting it across systems—these are valuable outcomes. In addition, ignoring these questions can lead to non-compliance or be outright illegal. Even if we don’t use terms like ‘sovereignty’, organizations need a handle on their information estate.
    Organizations shouldn’t assume everything cloud-based needs to be sovereign; they should build strategies and policies based on data classification, prioritization, and risk. Build that picture and you can solve for the highest-priority items first—the data with the strongest classification and greatest risk. That process alone takes care of 80–90% of the problem space, avoiding making sovereignty yet another problem whilst solving nothing.
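    A simple triage along those lines might look like the following sketch, where the classification weights and risk scores are invented for the example:

        # Illustrative triage: rank data assets by classification weight times
        # assessed risk, then work down from the top. All values are invented.
        assets = [
            # (name, classification weight 1-3, assessed risk 0-1)
            ("customer PII",       3, 0.9),
            ("financial records",  3, 0.7),
            ("product telemetry",  1, 0.4),
            ("public web content", 1, 0.1),
        ]

        for name, weight, risk in sorted(assets, key=lambda a: a[1] * a[2], reverse=True):
            print(f"{name}: priority {weight * risk:.2f}")

        # Working from the top addresses the strongest-classification,
        # highest-risk data first; the long tail can often stay put.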
    Where to start? Look after your own organization first
    Sovereignty and systems thinking go hand in hand: it’s all about scope. In enterprise architecture or business design, the biggest mistake is boiling the ocean—trying to solve everything at once.
    Instead, focus on your own sovereignty. Worry about your own organization, your own jurisdiction. Know where your own borders are. Understand who your customers are, and what their requirements are. For example, if you’re a manufacturer selling into specific countries—what do those countries require? Solve for that, not for everything else. Don’t try to plan for every possible future scenario.
    Focus on what you have, what you’re responsible for, and what you need to address right now. Classify and prioritise your data assets based on real-world risk. Do that, and you’re already more than halfway toward solving digital sovereignty—with all the efficiency, control, and compliance benefits that come with it.
    Digital sovereignty isn’t just regulatory, but strategic. Organizations that act now can reduce risk, improve operational clarity, and prepare for a future based on trust, compliance, and resilience.
    The post Reclaiming Control: Digital Sovereignty in 2025 appeared first on Gigaom.
  • An excerpt from a new book by Sérgio Ferro, published by MACK Books, showcases the architect’s moment of disenchantment

    Last year, MACK Books published Architecture from Below, which anthologized writings by the French Brazilian architect, theorist, and painter Sérgio Ferro. (Douglas Spencer reviewed it for AN.) Now, MACK follows with Design and the Building Site and Complementary Essays, the second in the trilogy of books dedicated to Ferro’s scholarship. The following excerpt of the author’s 2023 preface to the English edition, which preserves its British phrasing, captures Ferro’s realization about the working conditions of construction sites in Brasília. The sentiment is likely relatable even today for young architects as they discover how drawings become buildings. Design and the Building Site and Complementary Essays will be released on May 22.

    If I remember correctly, it was in 1958 or 1959, when Rodrigo and I were second- or third-year architecture students at FAUUSP, that my father, the real estate developer Armando Simone Pereira, commissioned us to design two large office buildings and eleven shops in Brasilia, which was then under construction. Of course, we were not adequately prepared for such an undertaking. Fortunately, Oscar Niemeyer and his team, who were responsible for overseeing the construction of the capital, had drawn up a detailed document determining the essential characteristics of all the private sector buildings. We followed these prescriptions to the letter, which saved us from disaster.
    Nowadays, it is hard to imagine the degree to which the construction of Brasilia inspired enthusiasm and professional pride in the country’s architects. And in the national imagination, the city’s establishment in the supposedly unpopulated hinterland evoked a re-founding of Brazil. Up until that point, the occupation of our immense territory had been reduced to a collection of arborescent communication routes, generally converging upon some river, following it up to the Atlantic Ocean. Through its ports, agricultural or extractive commodities produced by enslaved peoples or their substitutes passed towards the metropolises; goods were exchanged in the metropolises for more elaborate products, which took the opposite route. Our national identity was summed up in a few symbols, such as the anthem or the flag, and this scattering of paths pointing overseas. Brasilia would radically change this situation, or so we believed. It would create a central hub where the internal communication routes could converge, linking together hitherto separate junctions, stimulating trade and economic progress in the country’s interior. It was as if, for the first time, we were taking care of ourselves. At the nucleus of this centripetal movement, architecture would embody the renaissance. And at the navel of the nucleus, the symbolic mandala of this utopia: the cathedral.
    Rodrigo and I got caught up in the euphoria. And perhaps more so than our colleagues, because we were taking part in the adventure with ‘our’ designs. The reality was very different — but we did not know that yet.

    At that time, architects in Brazil were responsible for verifying that the construction was in line with the design. We had already monitored some of our first building sites. But the construction company in charge of them, Osmar Souza e Silva’s CENPLA, specialized in the building sites of modernist architects from the so-called Escola Paulista led by Vilanova Artigas (which we aspired to be a part of, like the pretentious students we were). Osmar was very attentive to his clients and his workers, who formed a supportive and helpful team. He was even more careful with us, because he knew how inexperienced we were. I believe that the CENPLA was particularly important in São Paulo modernism: with its congeniality, it facilitated experimentation, but for the same reason, it deceived novices like us about the reality of other building sites.
    Consequently, Rodrigo and I travelled to Brasilia several times to check that the constructions followed ‘our’ designs and to resolve any issues. From the very first trip, our little bubble burst. Our building sites, like all the others in the future capital, bore no relation to Osmar’s. They were more like a branch of hell. A huge, muddy wasteland, in which a few cranes, pile drivers, tractors, and excavators dotted the mound of scaffolding occupied by thousands of skinny, seemingly exhausted wretches, who were nevertheless driven on by the shouts of master builders and foremen, in turn pressured by the imminence of the fateful inauguration date. Surrounding or huddled underneath the marquees of buildings under construction, entire families, equally skeletal and ragged, were waiting for some accident or death to open up a vacancy. In contact only with the master builders, and under close surveillance so we would not speak to the workers, we were not allowed to see what comrades who had worked on these sites later told us in prison: suicide abounded; escape was known to be futile in the unpopulated surroundings with no viable roads; fatal accidents were often caused by weakness due to chronic diarrhoea, brought on by rotten food that came from far away; outright theft took place in the calculation of wages and expenses in the contractor’s grocery store; camps were surrounded by law enforcement.
    I repeat this anecdote yet again not to invoke the benevolence of potential readers, but rather to point out the conditions that, in my opinion, allowed two students (Flávio Império joined us a little later) still in their professional infancy to quickly adopt positions that were contrary to the usual stance of architects. As the project was more Oscar Niemeyer’s than it was our own, we did not have the same emotional attachment that is understandably engendered between real authors and their designs. We had not yet been imbued with the charm and aura of the métier. And the only building sites we had visited thus far, Osmar’s, were incomparable to those we discovered in Brasilia. In short, our youthfulness and unpreparedness up against an unbearable situation made us react almost immediately to the profession’s satisfied doxa.

    Unprepared and young perhaps, but already with Marx by our side. Rodrigo and I joined the student cell of the Brazilian Communist Party during our first year at university. In itself, this did not help us much: the Party’s Marxism, revised in the interests of the USSR, was pitiful. Even high-level leaders rarely went beyond the first chapter of Capital. But at the end of the 1950s, the effervescence of the years to come was already nascent: […] this extraordinary revival […] the rediscovery of Marxism and the great dialectical texts and traditions in the 1960s: an excitement that identifies a forgotten or repressed moment of the past as the new and subversive, and learns the dialectical grammar of a Hegel or an Adorno, a Marx or a Lukács, like a foreign language that has resources unavailable in our own.
    And what is more: the Chinese and Cuban revolutions, the war in Vietnam, guerrilla warfare of all kinds, national liberation movements, and a rare libertarian disposition in contemporary history, totally averse to fanaticism and respect for ideological apparatuses of (any) state or institution. Going against the grain was almost the norm. We were of course no more than contemporaries of our time. We were soon able to position ourselves from chapters 13, 14, and 15 of Capital, but only because we could constantly cross-reference Marx with our observations from well-contrasted building sites and do our own experimenting. As soon as we identified construction as manufacture, for example, thanks to the willingness and even encouragement of two friends and clients, Boris Fausto and Bernardo Issler, I was able to test both types of manufacture — organic and heterogeneous — on similar-sized projects taking place simultaneously, in order to find out which would be most convenient for the situation in Brazil, particularly in São Paulo. Despite the scientific shortcomings of these tests, they sufficed for us to select organic manufacture. Arquitetura Nova had defined its line of practice, studies, and research.
    There were other sources that were central to our theory and practice. Flávio Império was one of the founders of the Teatro de Arena, undoubtedly the vanguard of popular, militant theatre in Brazil. He won practically every set design award. He brought us his marvelous findings in spatial condensation and malleability, and in the creative diversion of techniques and material—appropriate devices for an underdeveloped country. This is what helped us pave the way to reformulating the reigning design paradigms. 

    We had to do what Flávio had done in the theatre: thoroughly rethink how to be an architect. Upend the perspective. The way we were taught was to start from a desired result; then others would take care of getting there, no matter how. We, on the other hand, set out to go down to the building site and accompany those carrying out the labor itself, those who actually build, the formally subsumed workers in manufacture who are increasingly deprived of the knowledge and know-how presupposed by this kind of subsumption. We should have been fostering the reconstitution of this knowledge and know-how—not so as to fulfil this assumption, but in order to reinvigorate the other side of this assumption according to Marx: the historical rebellion of the manufacture worker, especially the construction worker. We had to rekindle the demand that fueled this rebellion: total self-determination, and not just that of the manual operation as such. Our aim was above all political and ethical. Aesthetics only mattered by way of what it included—ethics. Instead of estética, we wrote est ética [this is ethics]. We wanted to make building sites into nests for the return of revolutionary syndicalism, which we ourselves had yet to discover.
    Sérgio Ferro, born in Brazil in 1938, studied architecture at FAUUSP, São Paulo. In the 1960s, he joined the Brazilian communist party and started, along with Rodrigo Lefevre and Flávio Império, the collective known as Arquitetura Nova. After being arrested by the military dictatorship that took power in Brazil in 1964, he moved to France as an exile. As a painter and a professor at the École Nationale Supérieure d’Architecture de Grenoble, where he founded the Dessin/Chantier laboratory, he engaged in extensive research which resulted in several publications, exhibitions, and awards in Brazil and in France, including the title of Chevalier des Arts et des Lettres in 1992. Following his retirement from teaching, Ferro continues to research, write, and paint.
    #excerpt #new #book #sérgio #ferro
    An excerpt from a new book by Sérgio Ferro, published by MACK Books, showcases the architect’s moment of disenchantment
    Last year, MACK Books published Architecture from Below, which anthologized writings by the French Brazilian architect, theorist, and painter Sérgio Ferro.Now, MACK follows with Design and the Building Site and Complementary Essays, the second in the trilogy of books dedicated to Ferro’s scholarship. The following excerpt of the author’s 2023 preface to the English edition, which preserves its British phrasing, captures Ferro’s realization about the working conditions of construction sites in Brasília. The sentiment is likely relatable even today for young architects as they discover how drawings become buildings. Design and the Building Site and Complementary Essays will be released on May 22. If I remember correctly, it was in 1958 or 1959, when Rodrigo and I were second- or third year architecture students at FAUUSP, that my father, the real estate developer Armando Simone Pereira, commissioned us to design two large office buildings and eleven shops in Brasilia, which was then under construction. Of course, we were not adequately prepared for such an undertaking. Fortunately, Oscar Niemeyer and his team, who were responsible for overseeing the construction of the capital, had drawn up a detailed document determining the essential characteristics of all the private sector buildings. We followed these prescriptions to the letter, which saved us from disaster. Nowadays, it is hard to imagine the degree to which the construction of Brasilia inspired enthusiasm and professional pride in the country’s architects. And in the national imagination, the city’s establishment in the supposedly unpopulated hinterland evoked a re-founding of Brazil. Up until that point, the occupation of our immense territory had been reduced to a collection of arborescent communication routes, generally converging upon some river, following it up to the Atlantic Ocean. Through its ports, agricultural or extractive commodities produced by enslaved peoples or their substitutes passed towards the metropolises; goods were exchanged in the metropolises for more elaborate products, which took the opposite route. Our national identity was summed up in a few symbols, such as the anthem or the flag, and this scattering of paths pointing overseas. Brasilia would radically change this situation, or so we believed. It would create a central hub where the internal communication routes could converge, linking together hithertoseparate junctions, stimulating trade and economic progress in the country’s interior. It was as if, for the first time, we were taking care of ourselves. At the nucleus of this centripetal movement, architecture would embody the renaissance. And at the naval of the nucleus, the symbolic mandala of this utopia: the cathedral. Rodrigo and I got caught up in the euphoria. And perhaps more so than our colleagues, because we were taking part in the adventure with ‘our’ designs. The reality was very different — but we did not know that yet. At that time, architects in Brazil were responsible for verifying that the construction was in line with the design. We had already monitored some of our first building sites. But the construction company in charge of them, Osmar Souza e Silva’s CENPLA, specialized in the building sites of modernist architects from the so-called Escola Paulista led by Vilanova Artigas. Osmar was very attentive to his clients and his workers, who formed a supportive and helpful team. He was even more careful with us, because he knew how inexperienced we were. 
I believe that the CENPLA was particularly important in São Paulo modernism: with its congeniality, it facilitated experimentation, but for the same reason, it deceived novices like us about the reality of other building sites. Consequently, Rodrigo and I travelled to Brasilia several times to check that the constructions followed ‘our’ designs and to resolve any issues. From the very first trip, our little bubble burst. Our building sites, like all the others in the future capital, bore no relation to Osmar’s. They were more like a branch of hell. A huge, muddy wasteland, in which a few cranes, pile drivers, tractors, and excavators dotted the mound of scaffolding occupied by thousands of skinny, seemingly exhausted wretches, who were nevertheless driven on by the shouts of master builders and foremen, in turn pressured by the imminence of the fateful inauguration date. Surrounding or huddled underneath the marquees of buildings under construction, entire families, equally skeletal and ragged, were waiting for some accident or death to open up a vacancy. In contact only with the master builders, and under close surveillance so we would not speak to the workers, we were not allowed to see what comrades who had worked on these sites later told us in prison: suicide abounded; escape was known to be futile in the unpopulated surroundings with no viable roads; fatal accidents were often caused by weakness due to chronic diarrhoea, brought on by rotten food that came from far away; outright theft took place in the calculation of wages and expenses in the contractor’s grocery store; camps were surrounded by law enforcement. I repeat this anecdote yet again not to invoke the benevolence of potential readers, but rather to point out the conditions that, in my opinion, allowed two studentsstill in their professional infancy to quickly adopt positions that were contrary to the usual stance of architects. As the project was more Oscar Niemeyer’s than it was our own, we did not have the same emotional attachment that is understandably engendered between real authors and their designs. We had not yet been imbued with the charm and aura of the métier. And the only building sites we had visited thus far, Osmar’s, were incomparable to those we discovered in Brasilia. In short, our youthfulness and unpreparedness up against an unbearable situation made us react almost immediately to the profession’s satisfied doxa. Unprepared and young perhaps, but already with Marx by our side. Rodrigo and I joined the student cell of the Brazilian Communist Party during our first year at university. In itself, this did not help us much: the Party’s Marxism, revised in the interests of the USSR, was pitiful. Even high-level leaders rarely went beyond the first chapter of Capital. But at the end of the 1950s, the effervescence of the years to come was already nascent: this extraordinary revivalthe rediscovery of Marxism and the great dialectical texts and traditions in the 1960s: an excitement that identifies a forgotten or repressed moment of the past as the new and subversive, and learns the dialectical grammar of a Hegel or an Adorno, a Marx or a Lukács, like a foreign language that has resources unavailable in our own. And what is more: the Chinese and Cuban revolutions, the war in Vietnam, guerrilla warfare of all kinds, national liberation movements, and a rare libertarian disposition in contemporary history, totally averse to fanaticism and respect for ideological apparatuses ofstate or institution. 
Going against the grain was almost the norm. We were of course no more than contemporaries of our time. We were soon able to position ourselves from chapters 13, 14, and 15 of Capital, but only because we could constantly cross-reference Marx with our observations from well-contrasted building sites and do our own experimenting. As soon as we identified construction as manufacture, for example, thanks to the willingness and even encouragement of two friends and clients, Boris Fausto and Bernardo Issler, I was able to test both types of manufacture — organic and heterogeneous — on similar-sized projects taking place simultaneously, in order to find out which would be most convenient for the situation in Brazil, particularly in São Paulo. Despite the scientific shortcomings of these tests, they sufficed for us to select organic manufacture. Arquitetura Nova had defined its line of practice, studies, and research. There were other sources that were central to our theory and practice. Flávio Império was one of the founders of the Teatro de Arena, undoubtedly the vanguard of popular, militant theatre in Brazil. He won practically every set design award. He brought us his marvelous findings in spatial condensation and malleability, and in the creative diversion of techniques and material—appropriate devices for an underdeveloped country. This is what helped us pave the way to reformulating the reigning design paradigms.  We had to do what Flávio had done in the theatre: thoroughly rethink how to be an architect. Upend the perspective. The way we were taught was to start from a desired result; then others would take care of getting there, no matter how. We, on the other hand, set out to go down to the building site and accompany those carrying out the labor itself, those who actually build, the formally subsumed workers in manufacture who are increasingly deprived of the knowledge and know-how presupposed by this kind of subsumption. We should have been fostering the reconstitution of this knowledge and know-how—not so as to fulfil this assumption, but in order to reinvigorate the other side of this assumption according to Marx: the historical rebellion of the manufacture worker, especially the construction worker. We had to rekindle the demand that fueled this rebellion: total self-determination, and not just that of the manual operation as such. Our aim was above all political and ethical. Aesthetics only mattered by way of what it included—ethics. Instead of estética, we wrote est ética. We wanted to make building sites into nests for the return of revolutionary syndicalism, which we ourselves had yet to discover. Sérgio Ferro, born in Brazil in 1938, studied architecture at FAUUSP, São Paulo. In the 1960s, he joined the Brazilian communist party and started, along with Rodrigo Lefevre and Flávio Império, the collective known as Arquitetura Nova. After being arrested by the military dictatorship that took power in Brazil in 1964, he moved to France as an exile. As a painter and a professor at the École Nationale Supérieure d’Architecture de Grenoble, where he founded the Dessin/Chantier laboratory, he engaged in extensive research which resulted in several publications, exhibitions, and awards in Brazil and in France, including the title of Chevalier des Arts et des Lettres in 1992. Following his retirement from teaching, Ferro continues to research, write, and paint. #excerpt #new #book #sérgio #ferro
    An excerpt from a new book by Sérgio Ferro, published by MACK Books, showcases the architect’s moment of disenchantment
    Last year, MACK Books published Architecture from Below, which anthologized writings by the French Brazilian architect, theorist, and painter Sérgio Ferro. (Douglas Spencer reviewed it for AN.) Now, MACK follows with Design and the Building Site and Complementary Essays, the second in the trilogy of books dedicated to Ferro’s scholarship. The following excerpt of the author’s 2023 preface to the English edition, which preserves its British phrasing, captures Ferro’s realization about the working conditions of construction sites in Brasília. The sentiment is likely relatable even today for young architects as they discover how drawings become buildings. Design and the Building Site and Complementary Essays will be released on May 22. If I remember correctly, it was in 1958 or 1959, when Rodrigo and I were second- or third year architecture students at FAUUSP, that my father, the real estate developer Armando Simone Pereira, commissioned us to design two large office buildings and eleven shops in Brasilia, which was then under construction. Of course, we were not adequately prepared for such an undertaking. Fortunately, Oscar Niemeyer and his team, who were responsible for overseeing the construction of the capital, had drawn up a detailed document determining the essential characteristics of all the private sector buildings. We followed these prescriptions to the letter, which saved us from disaster. Nowadays, it is hard to imagine the degree to which the construction of Brasilia inspired enthusiasm and professional pride in the country’s architects. And in the national imagination, the city’s establishment in the supposedly unpopulated hinterland evoked a re-founding of Brazil. Up until that point, the occupation of our immense territory had been reduced to a collection of arborescent communication routes, generally converging upon some river, following it up to the Atlantic Ocean. Through its ports, agricultural or extractive commodities produced by enslaved peoples or their substitutes passed towards the metropolises; goods were exchanged in the metropolises for more elaborate products, which took the opposite route. Our national identity was summed up in a few symbols, such as the anthem or the flag, and this scattering of paths pointing overseas. Brasilia would radically change this situation, or so we believed. It would create a central hub where the internal communication routes could converge, linking together hithertoseparate junctions, stimulating trade and economic progress in the country’s interior. It was as if, for the first time, we were taking care of ourselves. At the nucleus of this centripetal movement, architecture would embody the renaissance. And at the naval of the nucleus, the symbolic mandala of this utopia: the cathedral. Rodrigo and I got caught up in the euphoria. And perhaps more so than our colleagues, because we were taking part in the adventure with ‘our’ designs. The reality was very different — but we did not know that yet. At that time, architects in Brazil were responsible for verifying that the construction was in line with the design. We had already monitored some of our first building sites. But the construction company in charge of them, Osmar Souza e Silva’s CENPLA, specialized in the building sites of modernist architects from the so-called Escola Paulista led by Vilanova Artigas (which we aspired to be a part of, like the pretentious students we were). 
Osmar was very attentive to his clients and his workers, who formed a supportive and helpful team. He was even more careful with us, because he knew how inexperienced we were. I believe that the CENPLA was particularly important in São Paulo modernism: with its congeniality, it facilitated experimentation, but for the same reason, it deceived novices like us about the reality of other building sites.

Consequently, Rodrigo and I travelled to Brasilia several times to check that the constructions followed ‘our’ designs and to resolve any issues. From the very first trip, our little bubble burst. Our building sites, like all the others in the future capital, bore no relation to Osmar’s. They were more like a branch of hell. A huge, muddy wasteland, in which a few cranes, pile drivers, tractors, and excavators dotted the mound of scaffolding occupied by thousands of skinny, seemingly exhausted wretches, who were nevertheless driven on by the shouts of master builders and foremen, in turn pressured by the imminence of the fateful inauguration date. Surrounding or huddled underneath the marquees of buildings under construction, entire families, equally skeletal and ragged, were waiting for some accident or death to open up a vacancy. In contact only with the master builders, and under close surveillance so we would not speak to the workers, we were not allowed to see what comrades who had worked on these sites later told us in prison: suicide abounded; escape was known to be futile in the unpopulated surroundings with no viable roads; fatal accidents were often caused by weakness due to chronic diarrhoea, brought on by rotten food that came from far away; outright theft took place in the calculation of wages and expenses in the contractor’s grocery store; camps were surrounded by law enforcement.

I repeat this anecdote yet again not to invoke the benevolence of potential readers, but rather to point out the conditions that, in my opinion, allowed two students (Flávio Império joined us a little later) still in their professional infancy to quickly adopt positions that were contrary to the usual stance of architects. As the project was more Oscar Niemeyer’s than it was our own, we did not have the same emotional attachment that is understandably engendered between real authors and their designs. We had not yet been imbued with the charm and aura of the métier. And the only building sites we had visited thus far, Osmar’s, were incomparable to those we discovered in Brasilia. In short, our youthfulness and unpreparedness up against an unbearable situation made us react almost immediately to the profession’s satisfied doxa.

Unprepared and young perhaps, but already with Marx by our side. Rodrigo and I joined the student cell of the Brazilian Communist Party during our first year at university. In itself, this did not help us much: the Party’s Marxism, revised in the interests of the USSR, was pitiful. Even high-level leaders rarely went beyond the first chapter of Capital. But at the end of the 1950s, the effervescence of the years to come was already nascent:

[…] this extraordinary revival […] the rediscovery of Marxism and the great dialectical texts and traditions in the 1960s: an excitement that identifies a forgotten or repressed moment of the past as the new and subversive, and learns the dialectical grammar of a Hegel or an Adorno, a Marx or a Lukács, like a foreign language that has resources unavailable in our own.

And what is more: the Chinese and Cuban revolutions, the war in Vietnam, guerrilla warfare of all kinds, national liberation movements, and a rare libertarian disposition in contemporary history, totally averse to fanaticism and respect for ideological apparatuses of (any) state or institution. Going against the grain was almost the norm. We were of course no more than contemporaries of our time.

We were soon able to position ourselves from chapters 13, 14, and 15 of Capital, but only because we could constantly cross-reference Marx with our observations from well-contrasted building sites and do our own experimenting. As soon as we identified construction as manufacture, for example, thanks to the willingness and even encouragement of two friends and clients, Boris Fausto and Bernardo Issler, I was able to test both types of manufacture — organic and heterogeneous — on similar-sized projects taking place simultaneously, in order to find out which would be most convenient for the situation in Brazil, particularly in São Paulo. Despite the scientific shortcomings of these tests, they sufficed for us to select organic manufacture. Arquitetura Nova had defined its line of practice, studies, and research.

There were other sources that were central to our theory and practice. Flávio Império was one of the founders of the Teatro de Arena, undoubtedly the vanguard of popular, militant theatre in Brazil. He won practically every set design award. He brought us his marvelous findings in spatial condensation and malleability, and in the creative diversion of techniques and material—appropriate devices for an underdeveloped country. This is what helped us pave the way to reformulating the reigning design paradigms.

We had to do what Flávio had done in the theatre: thoroughly rethink how to be an architect. Upend the perspective. The way we were taught was to start from a desired result; then others would take care of getting there, no matter how. We, on the other hand, set out to go down to the building site and accompany those carrying out the labor itself, those who actually build, the formally subsumed workers in manufacture who are increasingly deprived of the knowledge and know-how presupposed by this kind of subsumption. We should have been fostering the reconstitution of this knowledge and know-how—not so as to fulfil this assumption, but in order to reinvigorate the other side of this assumption according to Marx: the historical rebellion of the manufacture worker, especially the construction worker. We had to rekindle the demand that fueled this rebellion: total self-determination, and not just that of the manual operation as such. Our aim was above all political and ethical. Aesthetics only mattered by way of what it included—ethics. Instead of estética, we wrote est ética [this is ethics]. We wanted to make building sites into nests for the return of revolutionary syndicalism, which we ourselves had yet to discover.

Sérgio Ferro, born in Brazil in 1938, studied architecture at FAUUSP, São Paulo. In the 1960s, he joined the Brazilian communist party and started, along with Rodrigo Lefevre and Flávio Império, the collective known as Arquitetura Nova. After being arrested by the military dictatorship that took power in Brazil in 1964, he moved to France as an exile. As a painter and a professor at the École Nationale Supérieure d’Architecture de Grenoble, where he founded the Dessin/Chantier laboratory, he engaged in extensive research which resulted in several publications, exhibitions, and awards in Brazil and in France, including the title of Chevalier des Arts et des Lettres in 1992. Following his retirement from teaching, Ferro continues to research, write, and paint.
  • Nier: Automata creators deny characters "were problematic overseas" and blame mistranslated subtitle for censorship rumours

    Nier miss.

    Image credit: Square Enix

    News by Vikki Blake, Contributor
    Published on June 14, 2025

    Nier: Automata producer Yosuke Saito and director Yoko Taro have denied that any of their character designs were restricted for Western audiences.
    As spotted by Automaton, the developers were compelled to comment after a mistranslated Japanese-to-English subtitle intimated that Nier: Automata had been subjected to censorship from Square Enix to meet global standards.

    GODDESS OF VICTORY: NIKKE | Producers' Creative Dialogue Special Livestream. Watch on YouTube
    In the interview above (skip to 28:12 for the segment concerned), Sony executive Shuhei Yoshida asked the developers about their design process.
    "Our concept is always to do something that's 'not like anything else'. What I mean is, if Nier: Replicant had a boy as the main character, Nier: Automata would have a girl protagonist. If Western sci-fi is filled with Marine-like soldiers, we might go in the opposite direction and use Gothic Lolita outfits, for example," Taro said. "We tend to take the contrarian route."
    "There are, of course, certain things that are ethically or morally inappropriate – even if they're just aspects of a character," Saito added, according to the subtitles. "We try to draw a line by establishing rules about what’s acceptable and what’s not.
    "While certain things might be acceptable in Japan, they could become problematic in certain overseas regions, and even characters could become problematic as well. These are the kind of situationwe usually try to avoid creating. As a result, there are actually countries where we couldn't officially release Nier: Automata."
    This immediately caused consternation among fans, but as Automaton points out, the line "could be a little tricky to translate, even for an advanced Japanese speaker".
    When asked directly about the claim, Taro denied it, saying on X/Twitter: "I've never heard of such a thing happening". Saito simply said he thought the things he'd mentioned had been mistranslated, and would clarify this in a future livestream.
    In the same interview, Yoshida called Nier: Automata the "game that changed everything", crediting it with reviving the Japanese games industry on its release. In a recent interview, Yoshida discussed how, during the PS3 era, sales of Japanese games had declined and studios there were increasingly chasing "overseas tastes".
    That changed with Nier: Automata in 2017, released for the PS4. "I think Yoko Taro created it without paying any mind at all to making it sell overseas, but it was a tremendous success," Yoshida said.
  • The State of 3D Printing in the UK: Expert Insights from AMUK’s Joshua Dugdale

    Additive Manufacturing UK (AMUK) held its first Members Forum of 2025 at Siemens’ UK headquarters in South Manchester earlier this year. The event featured presentations from AMUK members and offered attendees a chance to network and share insights.
    Ahead of the day-long meetup, 3D Printing Industry caught up with Joshua Dugdale, Head of AMUK, to learn more about the current state of additive manufacturing and the future of 3D printing in Britain. 
    AMUK is the United Kingdom’s primary 3D printing trade organization. Established in 2014, it operates within the Manufacturing Technologies Association (MTA) cluster. Attendees at this year’s first meetup spanned the UK’s entire 3D printing ecosystem. Highlights included discussions on precious materials from Cookson Industrial, simulation software from Siemens, digital thread solutions from Kaizen PLM, and 3D printing services provided by ARRK.
    With a background in mechanical engineering, Dugdale is “responsible for everything and anything AMUK does as an organization.” According to the Loughborough University alumnus, who is also Head of Technology and Skills at the MTA, AMUK’s core mission is to “create an environment in the UK where additive manufacturing can thrive.” He elaborated on how his organization is working to increase the commercial success of its members within the “struggling” global manufacturing environment.
    Dugdale shared his perspective on the key challenges facing 3D printing in the UK. He pointed to a “tough” operating environment hampered by global financial challenges, which is delaying investments. 
    Despite this, AMUK’s leader remains optimistic about the sector’s long-term potential, highlighting the UK’s success in R&D and annual 3D printing intellectual property (IP) output. Dugdale emphasized the value of 3D printing for UK defense and supply chain resilience, arguing that “defense will lead the way” in 3D printing innovation.
    Looking ahead, Dugdale called on the UK Government to create a unified 3D printing roadmap to replace its “disjointed” approach to policy and funding. He also shared AMUK’s strategy for 2025 and beyond, emphasizing a focus on education, supply chain visibility, and standards. Ultimately, the AMUK figurehead shared a positive outlook on the future of 3D printing in the UK. He envisions a new wave of innovation that will see more British startups and university spinouts emerging over the next five years.
    Siemens’ Manchester HQ hosted the first AMUK Members Forum of 2025. Photo by 3D Printing Industry.
    What is the current state of additive manufacturing in the UK?
    According to Dugdale, the 3D printing industry is experiencing a challenging period, driven largely by global economic pressures. “I wouldn’t describe it as underperforming, I’d describe it as flat,” Dugdale said. “The manufacturing sector as a whole is facing significant challenges, and additive manufacturing is no exception.” He pointed to increased competition, a cautious investment climate, and the reluctance of businesses to adopt new technologies due to the economic uncertainty. 
    Dugdale specifically highlighted the increase in the UK’s National Insurance contribution (NIC) rate for employers, which rose from 13.8% to 15% on April 6, 2025. He noted that many British companies postponed investment decisions ahead of the announcement, reflecting growing caution within the UK manufacturing sector. “With additive manufacturing, people need to be willing to take risks,” added Dugdale. “People are holding off at the moment because the current climate doesn’t favor risk.”
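    To put that rate change in concrete terms, here is a quick illustrative calculation. The 13.8% to 15% rise comes from the interview; the secondary threshold figures (widely reported as cut from £9,100 to £5,000 in the same April 2025 change) and the salary used are assumptions added here for illustration.

        def employer_nic(salary: float, rate: float, threshold: float) -> float:
            """Annual employer National Insurance due on one employee's salary."""
            return max(0.0, salary - threshold) * rate

        salary = 35_000  # illustrative annual salary, not from the article

        before = employer_nic(salary, rate=0.138, threshold=9_100)  # pre-April 2025 (assumed threshold)
        after = employer_nic(salary, rate=0.150, threshold=5_000)   # from 6 April 2025 (assumed threshold)

        print(f"Before:   £{before:,.2f}")           # £3,574.20
        print(f"After:    £{after:,.2f}")            # £4,500.00
        print(f"Increase: £{after - before:,.2f}")   # £925.80 per employee, per year

    On those assumptions, a single £35,000 hire costs roughly £900 more per year, which helps explain why firms paused investment decisions ahead of the change.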
    Dugdale remains optimistic about the sector’s long-term potential, arguing that the UK continues to excel in academia and R&D. However, for Dugdale, commercializing that research is where the country must improve before it can stand out on the world stage. This becomes especially clear when compared to countries in North America and Asia, which receive significantly greater financial support. “We’re never going to compete with the US and China, because they have so much more money behind them,” he explained.
    In a European context, Dugdale believes the UK “is doing quite well.” However, Britain still trails Spain in financial backing and technology adoption. “Spain has a much more mature industry,” Dugdale explained. “Their AM association has been going for 10 years, and it’s clear that their industry is more cohesive and further along. It’s a level of professionalism we can learn from.” While the Iberian country faces similar challenges in standards, supply chain, and visibility, it benefits from a level of cohesion that sets it apart from many other European countries.
    Dugdale pointed to the Formnext trade show as a clear example of this disparity. He expects the Spanish pavilion to span around 200 square meters and feature ten companies at this year’s event, a “massive” difference compared to the UK’s 36 square meters last year. AMUK’s presence could grow to around 70 square meters at Formnext 2025, but this still lags far behind. Dugdale attributes this gap to government support. “They get more funding. This makes it a lot more attractive for companies to come because there’s less risk for them,” he explained.  
    Josh Dugdale speaking at the AMUK Members Forum in Manchester. Photo by 3D Printing Industry.
    3D printing for UK Defense 
    As global security concerns grow, the UK government has intensified efforts to bolster its defense capabilities. In this context, 3D printing is emerging as a key enabler. Earlier this year, the Ministry of Defence (MoD) released its first Defence Advanced Manufacturing Strategy, outlining a plan to “embrace 3D printing,” with additive manufacturing expected to play a pivotal role in the UK’s future military operations.
    Dugdale identified two key advantages of additive manufacturing for defense: supply chain resilience and frontline production. For the former, he stressed the importance of building localized supply chains to reduce lead times and eliminate dependence on overseas shipments. This capability is crucial for ensuring that military platforms, whether on land, at sea, or in the air, remain operational. 
    3D printing near the front lines offers advantages for conducting quick repairs and maintaining warfighting capabilities in the field. “If a tank needs to get back off the battlefield, you can print a widget or bracket that’ll hold for just five miles,” Dugdale explained. “It’s not about perfect engineering; it’s about getting the vehicle home.” 
    The British Army has already adopted containerized 3D printers to test additive manufacturing near the front lines. Last year, British troops deployed metal and polymer 3D printers during Exercise Steadfast Defender, NATO’s largest military exercise since the Cold War. Under the effort, dubbed Project Bokkr, the additive manufacturing capabilities included the XSPEE3D cold spray 3D printer from Australian firm SPEE3D.
    Elsewhere in 2024, the British Army participated in Additive Manufacturing Village 2024, a military showcase organized by the European Defence Agency. During the event, UK personnel 3D printed 133 functional parts, including 20 made from metal. They also developed technical data packs (TDPs) for 70 different 3D printable spare parts. The aim was to equip Ukrainian troops with the capability to 3D print military equipment directly at the point of need.
    Dugdale believes success in the UK defense sector will help drive wider adoption of 3D printing. “Defense will lead the way,” he said, suggesting that military users will build the knowledge base necessary for broader civilian adoption. This could also spur innovation in materials science, an area Dugdale expects to see significant advancements in the coming years.    
    A British Army operator checks a part 3D printed on SPEE3D’s XSPEE3D Cold Spray 3D printer. Photo via the British Army.
    Advocating for a “unified industrial strategy”
    Despite promising growth in defense, Dugdale identified major hurdles that still hinder the widespread adoption of additive manufacturing (AM) in the UK.
    A key challenge lies in the significant knowledge gap surrounding the various types of AM and their unique advantages. This gap, he noted, discourages professionals familiar with traditional manufacturing methods like milling and turning from embracing 3D printing. “FDM is not the same as WAAM,” added Dugdale. “Trying to explain that in a very nice, coherent story is not always easy.”
    Dugdale also raised concerns about the industry’s fragmented nature, especially when it comes to software compatibility and the lack of interoperability between 3D printing systems. “The software is often closed, and different machines don’t always communicate well with each other. That can create fear about locking into the wrong ecosystem too early,” he explained. 
    For Dugdale, these barriers can only be overcome with a clear industrial strategy for additive manufacturing. He believes the UK Government should develop a unified strategy that defines a clear roadmap for development. This, Dugdale argued, would enable industry players to align their efforts and investments. 
    The UK has invested over £500 million in AM-related projects over the past decade. However, Dugdale explained that fragmented funding has limited its impact. Instead, the AMUK Chief argues that the UK Government’s strategy should recognize AM as one of “several key enabling technologies,” alongside machine tooling, metrology, and other critical manufacturing tools. 
    He believes this unified approach could significantly boost the UK’s productivity and fully integrate 3D printing into the wider industrial landscape. “Companies will align themselves with the roadmap, allowing them to grow and mature at the same rate,” Dugdale added. “This will help us to make smarter decisions about how we fund and where we fund.”   
    AMUK’s roadmap and the future of 3D printing in the UK   
    When forecasting 3D printing market performance, Dugdale and his team track five key industries: automotive, aerospace, medical, metal goods, and chemical processes. According to Dugdale, these industries are the primary users of machine tools, which makes them crucial indicators of market health.
    AMUK also relies on 3D printing industry surveys to gauge confidence, helping them to spot trends even when granular data is scarce. By comparing sector performance with survey-based confidence indicators, AMUK builds insights into the future market trajectory. The strong performance of sectors like aerospace and healthcare, which depend heavily on 3D printing, reinforces Dugdale’s confidence in the long-term potential of additive manufacturing.
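    As a purely hypothetical sketch of the cross-referencing described above, the snippet below blends a hard output indicator with a soft survey-confidence reading for each tracked sector. The five sectors come from the article; the figures, weights, and method are invented for illustration and are not AMUK’s actual model.

        # Hypothetical composite indicator: blend sector output data with survey sentiment.
        SECTOR_OUTPUT = {  # illustrative year-on-year output change per sector
            "automotive": -0.02, "aerospace": 0.06, "medical": 0.04,
            "metal goods": -0.01, "chemical processes": 0.01,
        }
        SURVEY_CONFIDENCE = {  # illustrative net balance of firms expecting growth (-1 to 1)
            "automotive": -0.10, "aerospace": 0.35, "medical": 0.20,
            "metal goods": 0.00, "chemical processes": 0.05,
        }

        def composite_signal(output, confidence, w_output=0.6, w_conf=0.4):
            """Weighted blend of hard data and survey sentiment, per sector."""
            return {s: w_output * output[s] + w_conf * confidence[s] for s in output}

        signals = composite_signal(SECTOR_OUTPUT, SURVEY_CONFIDENCE)
        for sector, score in sorted(signals.items(), key=lambda kv: kv[1], reverse=True):
            print(f"{sector:20s} {score:+.3f}")  # aerospace ranks highest on these inputs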
    Looking ahead to the second half of 2025, AMUK plans to focus on three primary challenges: supply chain visibility, skills development, and standards. Dugdale explained that these issues remain central to the maturation of the UK’s AM ecosystem. Education will play a key role in these efforts.
    AMUK is already running several additive manufacturing upskilling initiatives in schools and universities to build the next generation of 3D printing pioneers. These include pilot projects that introduce 3D printing to Key Stage 3 students (aged 11 to 14) and AM university courses that are tailored to industry needs.
    In the longer term, Dugdale suggests AMUK could evolve to focus more on addressing specific industry challenges, such as net-zero emissions or automotive light-weighting. This would involve creating specialized working groups that focus on how 3D printing can address specific pressing issues. 
    Interestingly, Dugdale revealed that AMUK’s success in advancing the UK’s 3D printing industry could eventually lead to the organization being dissolved and reabsorbed into the MTA. This outcome, he explained, would signal that “additive manufacturing has really matured” and is now seen as an integral part of the broader manufacturing ecosystem, rather than a niche technology.
    Ultimately, Dugdale is optimistic for the future of 3D printing in the UK. He acknowledged that AMUK is still “trying to play catch-up for the last 100 years of machine tool technology.” However, additive manufacturing innovations are set to accelerate. “There’s a lot of exciting research happening in universities, and we need to find ways to help these initiatives gain the funding and visibility they need,” Dugdale urged.
    As the technology continues to grow, Dugdale believes additive manufacturing will gradually lose its niche status and become a standard tool for manufacturers. “In ten years, we could see a generation of workers who grew up with 3D printers at home,” he told me. “For them, it will just be another technology to use in the workplace, not something to be amazed by.” 
    With this future in mind, Dugdale’s vision for 3D printing is one of broad adoption, supported by clear strategy and policy, as the technology continues to evolve and integrate into UK industry. 
    Take the 3DPI Reader Survey — shape the future of AM reporting in under 5 minutes.
    Who won the 2024 3D Printing Industry Awards?
    Subscribe to the 3D Printing Industry newsletter to keep up with the latest 3D printing news. You can also follow us on LinkedIn and subscribe to the 3D Printing Industry YouTube channel to access more exclusive content.
  • Trump scraps Biden software security, AI, post-quantum encryption efforts in new executive order


    President Donald Trump signed an executive order (EO) Friday that scratched or revised several of his Democratic predecessors’ major cybersecurity initiatives.
    “Just days before President Trump took office, the Biden Administration attempted to sneak problematic and distracting issues into cybersecurity policy,” the White House said in a fact sheet about Trump’s new directive, referring to projects that Biden launched with his Jan. 15 executive order.
    Trump’s new EO eliminates those projects, which would have required software vendors to prove their compliance with new federal security standards, prioritized research and testing of artificial intelligence for cyber defense and accelerated the rollout of encryption that withstands the future code-cracking powers of quantum computers.
    “President Trump has made it clear that this Administration will do what it takes to make America cyber secure,” the White House said in its fact sheet, “including focusing relentlessly on technical and organizational professionalism to improve the security and resilience of the nation’s information systems and networks.”
    Major cyber regulation shift
    Trump’s elimination of Biden’s software security requirements for federal contractors represents a significant government reversal on cyber regulation. Following years of major cyberattacks linked to insecure software, the Biden administration sought to use federal procurement power to improve the software industry’s practices. That effort began with Biden’s 2021 cyber order and gained strength in 2024, and then Biden officials tried to add teeth to the initiative before leaving office in January. But as it eliminated that project on Friday, the Trump administration castigated Biden’s efforts as “imposing unproven and burdensome software accounting processes that prioritized compliance checklists over genuine security investments.”
    Trump’s order eliminates provisions from Biden’s directive that would have required federal contractors to submit “secure software development attestations,” along with technical data to back up those attestations. Also now eradicated are provisions that would have required the Cybersecurity and Infrastructure Security Agency to verify vendors’ attestations, required the Office of the National Cyber Director to publish the results of those reviews and encouraged ONCD to refer companies whose attestations fail a review to the Justice Department “for action as appropriate.”

    Trump’s order leaves in place a National Institute of Standards and Technology (NIST) collaboration with industry to update NIST’s Secure Software Development Framework (SSDF), but it eliminates parts of Biden’s order that would have incorporated those SSDF updates into security requirements for federal vendors.
    In a related move, Trump eliminated provisions of his predecessor’s order that would have required NIST to “issue guidance identifying minimum cybersecurity practices” (based on a review of globally accepted standards) and required federal contractors to follow those practices.
    AI security cut
    Trump also took an axe to Biden requirements related to AI and its ability to help repel cyberattacks. He scrapped a Biden initiative to test AI’s power to “enhance cyber defense of critical infrastructure in the energy sector,” as well as one that would have directed federal research programs to prioritize topics like the security of AI-powered coding and “methods for designing secure AI systems.” The EO also killed a provision that would have required the Pentagon to “use advanced AI models for cyber defense.”
    On quantum computing, Trump’s directive significantly pares back Biden’s attempts to accelerate the government’s adoption of post-quantum cryptography. Biden told agencies to start using quantum-resistant encryption “as soon as practicable” and to start requiring vendors to use it when technologically possible. Trump eliminated those requirements, leaving only a Biden requirement that CISA maintain “a list of product categories in which products that support post-quantum cryptography … are widely available.”
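    For readers unfamiliar with what post-quantum cryptography looks like in practice, the sketch below performs a key-encapsulation round trip with ML-KEM, one of the algorithms NIST standardized as FIPS 203. The liboqs-python bindings and the algorithm identifier are assumptions drawn from the open-source Open Quantum Safe project; neither executive order names a specific library or algorithm.

        # Minimal sketch assuming the liboqs-python bindings from the Open Quantum
        # Safe project, built against a liboqs release that includes ML-KEM.
        import oqs

        ALG = "ML-KEM-768"  # lattice-based KEM standardized by NIST as FIPS 203

        with oqs.KeyEncapsulation(ALG) as receiver, oqs.KeyEncapsulation(ALG) as sender:
            public_key = receiver.generate_keypair()        # receiver publishes its public key
            ciphertext, sender_key = sender.encap_secret(public_key)  # sender wraps a fresh shared secret
            receiver_key = receiver.decap_secret(ciphertext)          # receiver unwraps the same secret
            assert sender_key == receiver_key  # both sides now hold a quantum-resistant shared key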
    Trump also eliminated instructions for the departments of State and Commerce to encourage key foreign allies and overseas industries to adopt NIST’s PQC algorithms.
    The EO dropped many other provisions of Biden’s January directive, including one requiring agencies to start testing phishing-resistant authentication technologies, one requiring NIST to advise other agencies on internet routing security and one requiring agencies to use strong email encryption. Trump also cut language directing the Office of Management and Budget to advise agencies on addressing risks related to IT vendor concentration.
    In his January order, Biden ordered agencies to explore and encourage the use of digital identity documents to prevent fraud, including in public benefits programs. Trump eliminated those initiatives, calling them “inappropriate.” 
    Trump also tweaked the language of Obama-era sanctions authorities targeting people involved in cyberattacks on the U.S., specifying that the Treasury Department can only sanction foreigners for these activities. The White House said Trump’s change would prevent the power’s “misuse against domestic political opponents.”
    Amid the whirlwind of changes, Trump left one major Biden-era cyber program intact: a Federal Communications Commission project, modeled on the Energy Star program, that will apply government seals of approval to technology products that undergo security testing by federally accredited labs. Trump preserved the language in Biden’s order that requires companies selling internet-of-things devices to the federal government to go through the FCC program by January 2027.
    WWW.CYBERSECURITYDIVE.COM
  • Why tech companies are snubbing the London Stock Exchange

    British fintech Wise said this week it would shift its primary listing from London to New York, joining a growing list of firms snubbing the London Stock Exchange.
    UK chip designer Arm opted for a New York IPO in 2023, while food delivery giant Just Eat Takeaway quit the LSE for Amsterdam in November. 
    Sweden’s Klarna has confirmed plans to go public in New York, following in the footsteps of fellow Stockholm-based tech darling Spotify, which listed on the NYSE in 2018. 
    The draw? Bigger valuations, deeper capital, and more appetite for risk.

    “The US economy continues to perform far better than the EU, and valuations are simply higher for companies that can list there,” Victor Basta, managing partner at Artis Partners, told TNW.   
    The numbers back him up. The NYSE boasts a market cap of around $27 trillion — compared to just $3.5 trillion for the LSE. 
    That scale — and the deep-pocketed investors it attracts — pushed Arm to list across the pond. Wise followed for the same reason, according to CEO Kristo Käärmann. 
    Käärmann said the move would tap “the biggest market opportunity in the world for our products today, and enable better access to the world’s deepest and most liquid capital market.” 
    Beyond sheer growth potential, US investors are also known for taking bigger bets on growth-stage tech companies.  
    “US investors understand the whole ‘revenue-before-profit’ strategy,” Andrey Korchak, a British serial entrepreneur, told TNW. “Meanwhile, in Europe, they often want to see revenue from day one.” 
    That risk aversion, Korchak believes, restricts the growth of startups.
    “Europe just doesn’t have the same density of tech unicorns,” he said. “And when startups here do hit that billion-dollar mark, most still prefer to list in the US.”
    Sean Reddington, co-founder of UK tech firm Thrive, fears that Wise’s New York listing will deepen the problems. 
    “Wise’s move to the US signals a worrying trend,” he said. “It threatens a ‘brain drain’ of capital and talent, making it harder for growth-stage VCs to invest in UK scaleups without a clear US exit plan.”
    He called for urgent government action, including providing “meaningful incentives” for tech firms to list in the UK. 
    “If the ultimate reward of a domestic IPO is diminished, it pushes more companies to consider relocating or listing overseas,” he said.
    Europe’s startup struggles will be a hot topic at TNW Conference, which takes place on June 19-20 in Amsterdam. Tickets for the event are now on sale — use the code TNWXMEDIA2025 at checkout to get 30% off.

    Story by

    Siôn Geschwindt

    Siôn is a freelance science and technology reporter, specialising in climate and energy. From nuclear fusion breakthroughs to electric vehicles, he's happiest sourcing a scoop, investigating the impact of emerging technologies, and even putting them to the test. He has five years of journalism experience and holds a dual degree in media and environmental science from the University of Cape Town, South Africa. When he's not writing, you can probably find Siôn out hiking, surfing, playing the drums or catering to his moderate caffeine addiction. You can contact him at: sion.geschwindt [at] protonmail [dot] com

    THENEXTWEB.COM
  • Manus has kick-started an AI agent boom in China

    Last year, China saw a boom in foundation models, the do-everything large language models that underpin the AI revolution. This year, the focus has shifted to AI agents—systems that are less about responding to users’ queries and more about autonomously accomplishing things for them. 

    There are now a host of Chinese startups building these general-purpose digital tools, which can answer emails, browse the internet to plan vacations, and even design an interactive website. Many of these have emerged in just the last two months, following in the footsteps of Manus—a general AI agent that sparked weeks of social media frenzy for invite codes after its limited-release launch in early March. 

    These emerging AI agents aren’t large language models themselves. Instead, they’re built on top of them, using a workflow-based structure designed to get things done. A lot of these systems also introduce a different way of interacting with AI. Rather than just chatting back and forth with users, they are optimized for managing and executing multistep tasks—booking flights, managing schedules, conducting research—by using external tools and remembering instructions. 
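
    To make that workflow-based structure concrete, here is a minimal sketch of the loop most of these agents share: the model proposes either a tool call or a final answer, the runtime executes the tool, and the result is appended to a running transcript that doubles as memory. Everything below is illustrative only; call_llm is a canned stand-in for a real model API, and both tools are stubs.

        import json

        # Toy tools; a real agent wires these to a browser, calendar, flight APIs, etc.
        def search_web(query: str) -> str:
            return f"(stub) top results for: {query}"

        def send_email(to: str, body: str) -> str:
            return f"(stub) email sent to {to}"

        TOOLS = {"search_web": search_web, "send_email": send_email}

        def call_llm(messages: list[dict]) -> dict:
            # Hypothetical stand-in for a model API call. This stub "plans" one
            # search and then answers; a real agent lets the LLM decide each step.
            if not any(m["role"] == "tool" for m in messages):
                return {"tool": "search_web", "args": {"query": messages[0]["content"]}}
            return {"final_answer": "(stub) summary built from tool results"}

        def run_agent(task: str, max_steps: int = 10) -> str:
            # The transcript acts as working memory: each tool result is appended
            # so later steps can build on earlier ones.
            messages = [{"role": "user", "content": task}]
            for _ in range(max_steps):
                reply = call_llm(messages)
                if "final_answer" in reply:
                    return reply["final_answer"]
                result = TOOLS[reply["tool"]](**reply["args"])
                messages.append({"role": "tool", "content": json.dumps({"result": result})})
            return "step budget exhausted"

        print(run_agent("plan a long weekend in Kyoto"))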

    China could take the lead on building these kinds of agents. The country’s tightly integrated app ecosystems, rapid product cycles, and digitally fluent user base could provide a favorable environment for embedding AI into daily life. 

    For now, its leading AI agent startups are focusing their attention on the global market, because the best Western models don’t operate inside China’s firewalls. But that could change soon: Tech giants like ByteDance and Tencent are preparing their own AI agents that could bake automation directly into their native super-apps, pulling data from their vast ecosystem of programs that dominate many aspects of daily life in the country. 

    As the race to define what a useful AI agent looks like unfolds, a mix of ambitious startups and entrenched tech giants are now testing how these tools might actually work in practice—and for whom.

    Set the standard

    It’s been a whirlwind few months for Manus, which was developed by the Wuhan-based startup Butterfly Effect. The company raised $75 million in a funding round led by the US venture capital firm Benchmark, took the product on an ambitious global roadshow, and hired dozens of new employees. 

    Even before registration opened to the public in May, Manus had become a reference point for what a broad, consumer‑oriented AI agent should accomplish. Rather than handling narrow chores for businesses, this “general” agent is designed to be able to help with everyday tasks like trip planning, stock comparison, or your kid’s school project. 

    Unlike previous AI agents, Manus uses a browser-based sandbox that lets users supervise the agent like an intern, watching in real time as it scrolls through web pages, reads articles, or codes actions. It also proactively asks clarifying questions and supports long-term memory that serves as context for future tasks.

    “Manus represents a promising product experience for AI agents,” says Ang Li, cofounder and CEO of Simular, a startup based in Palo Alto, California, that’s building computer use agents, AI agents that control a virtual computer. “I believe Chinese startups have a huge advantage when it comes to designing consumer products, thanks to cutthroat domestic competition that leads to fast execution and greater attention to product details.”

    In the case of Manus, the competition is moving fast. Two of the buzziest follow‑ups, Genspark and Flowith, are already boasting benchmark scores that match or edge past Manus’s. 

    Genspark, led by former Baidu executives Eric Jing and Kay Zhu, links many small “super agents” through what it calls multi‑component prompting. The agent can switch among several large language models, accepts both images and text, and carries out tasks from making slide decks to placing phone calls. Whereas Manus relies heavily on Browser Use, a popular open-source product that lets agents operate a web browser in a virtual window like a human, Genspark directly integrates with a wide array of tools and APIs. Launched in April, the company says that it already has over 5 million users and over $36 million in yearly revenue.
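
    That multi-model design is easiest to picture as a thin routing layer. The sketch below is a guess at the general shape rather than Genspark’s actual implementation; the backend functions and the routing table are invented for illustration.

        from typing import Callable

        # Hypothetical backends; each would wrap a different provider's API.
        def fast_cheap_model(prompt: str) -> str:
            return f"(stub, cheap model) {prompt[:40]}..."

        def strong_reasoning_model(prompt: str) -> str:
            return f"(stub, strong model) {prompt[:40]}..."

        # Route by task type: quick chat goes to the cheap model, while long
        # structured outputs (slide decks, code) go to the stronger one.
        BACKENDS: dict[str, Callable[[str], str]] = {
            "chat": fast_cheap_model,
            "slides": strong_reasoning_model,
            "code": strong_reasoning_model,
        }

        def route(task_type: str, prompt: str) -> str:
            model = BACKENDS.get(task_type, fast_cheap_model)  # default: cheap
            return model(prompt)

        print(route("slides", "Make a 10-slide deck on EV adoption in Europe"))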

    Flowith, the work of a young team that first grabbed public attention in April 2025 at a developer event hosted by the popular social media app Xiaohongshu, takes a different tack. Marketed as an “infinite agent,” it opens on a blank canvas where each question becomes a node on a branching map. Users can backtrack, take new branches, and store results in personal or sharable “knowledge gardens”—a design that feels more like project management software (think Notion) than a typical chat interface. Every inquiry or task builds its own mind-map-like graph, encouraging a more nonlinear and creative interaction with AI. Flowith’s core agent, NEO, runs in the cloud and can perform scheduled tasks like sending emails and compiling files. The founders want the app to be a “knowledge marketbase” and aim to tap into the social aspect of AI, with the aspiration of becoming “the OnlyFans of AI knowledge creators.”

    What they also share with Manus is the global ambition. Both Genspark and Flowith have stated that their primary focus is the international market.

    A global address

    Startups like Manus, Genspark, and Flowith—though founded by Chinese entrepreneurs—could blend seamlessly into the global tech scene and compete effectively abroad. Founders, investors, and analysts that MIT Technology Review has spoken to believe Chinese companies are moving fast, executing well, and quickly coming up with new products. 

    Money reinforces the pull to launch overseas. Customers there pay more, and there are plenty to go around. “You can price in USD, and with the exchange rate that’s a sevenfold multiplier,” Manus cofounder Xiao Hong quipped on a podcast. “Even if we’re only operating at 10% power because of cultural differences overseas, we’ll still make more than in China.”

    But creating the same functionality in China is a challenge. Major US AI companies including OpenAI and Anthropic have opted out of mainland China because of geopolitical risks and challenges with regulatory compliance. Their absence initially created a black market as users resorted to VPNs and third-party mirrors to access tools like ChatGPT and Claude. That vacuum has since been filled by a new wave of Chinese chatbots—DeepSeek, Doubao, Kimi—but the appetite for foreign models hasn’t gone away. 

    Manus, for example, uses Anthropic’s Claude Sonnet—widely considered the top model for agentic tasks. Manus cofounder Zhang Tao has repeatedly praised Claude’s ability to juggle tools, remember contexts, and hold multi‑round conversations—all crucial for turning chatty software into an effective executive assistant.

    But the company’s use of Sonnet has made its agent functionally unusable inside China without a VPN. If you open Manus from a mainland IP address, you’ll see a notice explaining that the team is “working on integrating Qwen’s model,” a special local version of Manus built on top of Alibaba’s open-source models. 

    An engineer overseeing ByteDance’s work on developing an agent, who spoke to MIT Technology Review anonymously to avoid sanction, said that the absence of Claude Sonnet models “limits everything we do in China.” DeepSeek’s open models, he added, still hallucinate too often and lack training on real‑world workflows. Developers we spoke with rank Alibaba’s Qwen series as the best domestic alternative, yet most say that switching to Qwen knocks performance down a notch.

    Jiaxin Pei, a postdoctoral researcher at Stanford’s Institute for Human‑Centered AI, thinks that gap will close: “Building agentic capabilities in base LLMs has become a key focus for many LLM builders, and once people realize the value of this, it will only be a matter of time.”

    For now, Manus is doubling down on audiences it can already serve. In a written response, the company said its “primary focus is overseas expansion,” noting that new offices in San Francisco, Singapore, and Tokyo have opened in the past month.

    A super‑app approach

    Although the concept of AI agents is still relatively new, the consumer-facing AI app market in China is already crowded with major tech players. DeepSeek remains the most widely used, while ByteDance’s Doubao and Moonshot’s Kimi have also become household names. However, most of these apps are still optimized for chat and entertainment rather than task execution. This gap in the local market has pushed China’s big tech firms to roll out their own user-facing agents, though early versions remain uneven in quality and rough around the edges. 

    ByteDance is testing Coze Space, an AI agent based on its own Doubao model family that lets users toggle between “plan” and “execute” modes, so they can either directly guide the agent’s actions or step back and watch it work autonomously. It connects up to 14 popular apps, including GitHub, Notion, and the company’s own Lark office suite. Early reviews say the tool can feel clunky and has a high failure rate, but it clearly aims to match what Manus offers.
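
    That “plan”/“execute” toggle maps onto a common agent pattern: draft a step list first, then either pause for the user’s approval at each step or run straight through. Below is a minimal, hypothetical sketch of the pattern; the planner is a canned stub, not ByteDance’s Doubao.

        def make_plan(task: str) -> list[str]:
            # Stub planner; a real agent would ask its LLM for this list.
            return [f"research: {task}", f"draft output for: {task}", "review and finalize"]

        def run_step(step: str) -> str:
            return f"(stub) completed: {step}"

        def agent(task: str, mode: str = "plan") -> list[str]:
            results = []
            for step in make_plan(task):
                if mode == "plan":
                    # Plan mode: surface each step and wait for the user's go-ahead.
                    if input(f"Run '{step}'? [y/n] ").strip().lower() != "y":
                        results.append(f"skipped: {step}")
                        continue
                # Execute mode (or an approved step): act autonomously.
                results.append(run_step(step))
            return results

        print(agent("compare three project-management tools", mode="execute"))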

    Meanwhile, Zhipu AI has released a free agent called AutoGLM Rumination, built on its proprietary ChatGLM models. Shanghai‑based Minimax has launched Minimax Agent. Both products look almost identical to Manus and demo basic tasks such as building a simple website, planning a trip, making a small Flash game, or running quick data analysis.

    Despite the limited usability of most general AI agents launched within China, big companies have plans to change that. During a May 15 earnings call, Tencent president Liu Zhiping teased an agent that would weave automation directly into China’s most ubiquitous app, WeChat. 

    Considered the original super-app, WeChat already handles messaging, mobile payments, news, and millions of mini‑programs that act like embedded apps. These programs give Tencent, its developer, access to data from millions of services that pervade everyday life in China, an advantage most competitors can only envy.

    Historically, China’s consumer internet has splintered into competing walled gardens—share a Taobao link in WeChat and it resolves as plaintext, not a preview card. Unlike the more interoperable Western internet, China’s tech giants have long resisted integration with one another, choosing to wage platform war at the expense of a seamless user experience.

    But the use of mini‑programs has given WeChat unprecedented reach across services that once resisted interoperability, from gym bookings to grocery orders. An agent able to roam that ecosystem could bypass the integration headaches dogging independent startups.

    Alibaba, the e-commerce giant behind the Qwen model series, has been a front-runner in China’s AI race but has been slower to release consumer-facing products. Even though Qwen was the most downloaded open-source model on Hugging Face in 2024, it didn’t power a dedicated chatbot app until early 2025. In March, Alibaba rebranded its cloud storage and search app Quark into an all-in-one AI search tool. By June, Quark had introduced DeepResearch—a new mode that marks its most agent-like effort to date. 

    ByteDance and Alibaba did not reply to MIT Technology Review’s request for comments.

    “Historically, Chinese tech products tend to pursue the all-in-one, super-app approach, and the latest Chinese AI agents reflect just that,” says Li of Simular, who previously worked at Google DeepMind on AI-enabled work automation. “In contrast, AI agents in the US are more focused on serving specific verticals.”

    Pei, the researcher at Stanford, says that existing tech giants could have a huge advantage in bringing the vision of general AI agents to life—especially those with built-in integration across services. “The customer-facing AI agent market is still very early, with tons of problems like authentication and liability,” he says. “But companies that already operate across a wide range of services have a natural advantage in deploying agents at scale.”
    WWW.TECHNOLOGYREVIEW.COM
  • Xiaomi Cannot Develop A Future In-House XRING Chipset Using TSMC’s 2nm Process Because Of The U.S. Crackdown On Specialized EDA Tools, Company Will Be Limited To The ‘N3E’ Node

    Omar Sohail • Jun 5, 2025 at 04:28am EDT

    The XRING 01 is a technological milestone not just for Xiaomi but for China as a whole, and one likely to make the U.S. government nervous: like current-generation flagship chipsets, the in-house silicon is mass produced on TSMC’s 3nm ‘N3E’ process. Unfortunately, Xiaomi’s progress might not scale past this threshold, because the Trump administration has banned the export of the specialized EDA tools necessary to design a 2nm SoC.
    Tipster claims that EDA tools are mandatory in designing GAAFET structures, meaning that Xiaomi and its XRING division will be limited to TSMC’s ‘N3E’ node
    Since TSMC’s 2nm technology uses a GAAFET transistor structure, Weibo tipster Digital Chat Station states that it is imperative for Xiaomi to get hold of those EDA (Electronic Design Automation) tools. The Taiwanese semiconductor giant reportedly began accepting orders for 2nm wafers from April 1, with each unit estimated to cost $30,000. Alongside the regular trio of Apple, Qualcomm, and MediaTek, Xiaomi would have counted itself as one of TSMC’s customers. Sadly, with the recent development, the Chinese firm will be limited to the 3nm N3E node, facing a similar fate to Huawei.
    The latest claim also suggests that, to offer the latest and greatest in smartphone chipset technology, Xiaomi will have little choice but to continue relying on Qualcomm and MediaTek, which will unveil the Snapdragon 8 Elite Gen 2 and the Dimensity 9500 later this year. Restricting exports of cutting-edge tooling to China will likely only strengthen its resolve to develop local EDA tools, but whether that software matures fast enough for the Xiaomi XRING 02 to be fabricated on TSMC’s 2nm process remains to be seen.

    Readers should note that there is also the risk that the Trump administration enforces a blanket ban on Xiaomi, preventing it from doing business with TSMC or Samsung in any way, shape, or form. And while China is pursuing custom EUV machinery to eliminate any reliance on overseas suppliers, it may take several years for the country to achieve that autonomy.
    News Source: Digital Chat Station

  • US science is being wrecked, and its leadership is fighting the last war

    Missing the big picture


    Facing an extreme budget, the National Academies hosted an event that ignored it.

    John Timmer



    Jun 4, 2025 6:00 pm


    WASHINGTON, DC—The general outline of the Trump administration's proposed 2026 budget was released a few weeks back, and it included massive cuts for most agencies, including every one that funds scientific research. Late last week, those agencies began releasing details of what the cuts would mean for the actual projects and people they support. And the results are as bad as the initial budget had suggested: one-of-a-kind scientific experiment facilities and hardware retired, massive cuts in supported scientists, and entire areas of research halted.
    And this comes in an environment where previously funded grants are being terminated, funding is being held up for ideological screening, and universities have been subject to arbitrary funding freezes. Collectively, things are heading for damage to US science that will take decades to recover from. It's a radical break from the trajectory science had been on.
    That's the environment that the US's National Academies of Science found itself in yesterday while hosting the State of the Science event in Washington, DC. It was an obvious opportunity for the nation's leading scientific organization to warn the nation of the consequences of the path that the current administration has been traveling. Instead, the event largely ignored the present to worry about a future that may never exist.
    The proposed cuts
    The top-line budget numbers proposed earlier indicated things would be bad: nearly 40 percent taken off the National Institutes of Health's budget, the National Science Foundation down by over half. But now, many of the details of what those cuts mean are becoming apparent.
    NASA's budget includes sharp cuts for planetary science, which would be cut in half and then stay flat for the rest of the decade, with the Mars Sample Return mission canceled. All other science budgets, including Earth Science and Astrophysics, take similar hits; one astronomer posted a graphic showing how many present and future missions would be affected. Active missions that have returned unprecedented data, like Juno and New Horizons, would go, as would two Mars orbiters. As described by Science magazine's news team, "The plans would also kill off nearly every major science mission the agency has not yet begun to build."

    [Chart prepared by astronomer Laura Lopez showing just how many astrophysics missions would be cancelled. Credit: Laura Lopez]

    The National Science Foundation, which funds much of the US's fundamental research, is also set for brutal cuts. Biology, engineering, and education will all be slashed by over 70 percent; computer science, math and physical science, and social and behavioral science will all see cuts of over 60 percent. International programs will take an 80 percent cut. The funding rate of grant proposals is expected to drop from 26 percent to just 7 percent, meaning the vast majority of proposals submitted to the NSF will be a waste of time. The number of people involved in NSF-funded activities will drop from over 300,000 to just 90,000. Almost every program to broaden participation in science will be eliminated.
    As for specifics, they're equally grim. The fleet of research ships will essentially become someone else's problem: "The FY 2026 Budget Request will enable partial support of some ships." We've been able to better pin down the nature and location of gravitational wave events as detectors in Japan and Italy joined the original two LIGO detectors; the NSF will reverse that progress by shutting one of the LIGOs. The NSF's contributions to detectors at the Large Hadron Collider will be cut by over half, and one of the two very large telescopes it was helping fund will be cancelled (say goodbye to the Thirty Meter Telescope). "Access to the telescopes at Kitt Peak and Cerro Tololo will be phased out," and the NSF will transfer the facilities to other organizations.
    The Department of Health and Human Services has been less detailed about the specific cuts its divisions will see, largely focusing on the overall numbers, which are down considerably. The NIH, which is facing a cut of over 40 percent, will be reorganized, with its 19 institutes pared down to just eight. This will result in some odd pairings, such as the dental and eye institutes ending up in the same place; genomics and biomedical imaging will likewise end up under the same roof. Other groups like the Centers for Disease Control and Prevention and the Food and Drug Administration will also face major cuts.

    Issues go well beyond the core science agencies, as well. In the Department of Energy, funding for wind, solar, and renewable grid integration has been zeroed out, essentially ending all programs in this area. Hydrogen and fuel cells face a similar fate. Collectively, these had gotten over $600 billion in 2024's budget. Other areas of science at the DOE, such as high-energy physics, fusion, and biology, receive relatively minor cuts that are largely in line with the ones faced by administration priorities like fossil and nuclear energy.

    Will this happen?
    It goes without saying that this would amount to an abandonment of US scientific leadership at a time when most estimates of China's research spending show it approaching US-like levels of support. Not only would it eliminate many key facilities, instruments, and institutions that have helped make the US a scientific powerhouse, but it would also block the development of newer and additional ones. The harms are so widespread that even topics that the administration claims are priorities would see severe cuts.
    And the damage is likely to last for generations, as support is cut at every stage of the educational pipeline that prepares people for STEM careers. This includes careers in high-tech industries, which may require relocation overseas due to a combination of staffing concerns and heightened immigration controls.
    That said, we've been here before in the first Trump administration, when budgets were proposed with potentially catastrophic implications for US science. But Congress limited the damage and maintained reasonably consistent budgets for most agencies.
    Can we expect that to happen again? So far, the signs are not especially promising. The House has largely adopted the Trump administration's budget priorities, despite the fact that the budget they pass turns its back on decades of supposed concerns about deficit spending. While the Senate has yet to take up the budget, it has also been very pliant during the second Trump administration, approving grossly unqualified cabinet picks such as Robert F. Kennedy Jr.

    All of which would seem to call for the leadership of US science organizations to press the case for the importance of science funding to the US, and highlight the damage that these cuts would cause. But, if yesterday's National Academies event is anything to judge by, the leadership is not especially interested.
    Altered states
    As the nation's premier science organization, and one that performs lots of analyses for the government, the National Academies would seem to be in a position to have its concerns taken seriously by members of Congress. And, given that the present and future of science in the US is being set by policy choices, a meeting entitled the State of the Science would seem like the obvious place to address those concerns.
    If so, it was not obvious to Marcia McNutt, the president of the NAS, who gave the presentation. She made some oblique references to current problems, saying that "We are embarking on a radical new experiment in what conditions promote science leadership, with the US being the treatment group, and China as the control," and acknowledged that "uncertainties over the science budgets for next year, coupled with cancellations of billions of dollars of already hard-won research grants, is causing an exodus of researchers."
    But her primary focus was on the trends that have been operative in science funding and policy leading up to but excluding the second Trump administration. McNutt suggested this was needed to look beyond the next four years. However, that ignores the obvious fact that US science will be fundamentally different if the Trump administration can follow through on its plans and policies; the trends that have been present for the last two decades will be irrelevant.
    She was also remarkably selective about her avoidance of discussing Trump administration priorities. After noting that surveys suggest faculty spend roughly 40 percent of their time handling regulatory requirements, she twice mentioned that the administration's anti-regulatory stance could be a net positive here (once calling it "an opportunity to help"). Yet she neglected to note that many of the abandoned regulations represent a retreat from science-driven policy.

    McNutt also acknowledged the problem of science losing the bipartisan support it has enjoyed, as trust in scientists among US conservatives has been on a downward trend. But she suggested it was scientists' responsibility to fix the problem, even though it's largely the product of one party deciding it can gain partisan advantage by raising doubts about scientific findings in fields like climate change and vaccine safety.
    The panel discussion that came after largely followed McNutt's lead in avoiding any mention of the current threats to science. The lone exception was Heather Wilson, president of the University of Texas at El Paso and a former Republican member of the House of Representatives and Secretary of the Air Force during the first Trump administration. Wilson took direct aim at Trump's cuts to funding for underrepresented groups, arguing, "Talent is evenly distributed, but opportunity is not." After arguing that "the moral authority of science depends on the pursuit of truth," she highlighted the cancellation of grants that had been used to study diseases that are more prevalent in some ethnic groups, saying "that's not woke science—that's genetics."
    Wilson was clearly the exception, however, as the rest of the panel largely avoided direct mention of either the damage already done to US science funding or the impending catastrophe on the horizon. We've asked the National Academies' leadership a number of questions about how it perceives its role at a time when US science is clearly under threat. As of this article's publication, however, we have not received a response.
    At yesterday's event, however, only one person showed a clear sense of what they thought that role should be—Wilson again, whose strongest words were directed at the National Academies themselves, which she said should "do what you've done since Lincoln was president," and stand up for the truth.

  • North America takes the bulk of AI VC investments, despite tough political environment

    Despite what some experts have characterized as an environment increasingly hostile to AI R&D, North America continues to receive the bulk of AI venture dollars, according to data from investment tracker PitchBook.
    Between February and May of this year, VCs poured $69.7 billion into North America-based AI and machine learning startups across 1,528 deals. That’s compared with the $6.4 billion that VC firms invested in European AI ventures across 742 deals over the same period.
    Asia-based startups have fared a bit worse than their European counterparts, according to PitchBook. Between February and May, VCs invested just $3 billion in Asia-based AI startups across 515 deals.
    Under President Donald Trump, the U.S. has dramatically cut funding to scientific grants related to basic AI research, made it more difficult for foreign students specializing in AI to study in the U.S., and threatened to dismantle university-housed AI labs by freezing billions of dollars in federal funds. The administration’s trade policies, meanwhile, including its retaliatory tariffs, have led to a chaotic market unfavorable for risky new AI ventures.
    In a post on X in March, AI pioneer and Nobel Laureate Geoffrey Hinton called for billionaire Elon Musk, who until recently advised Trump’s cost-cutting group, the Department of Government Efficiency, to be expelled from the British Royal Society “because of the huge damage he is doing to scientific institutions in the U.S.”
    One might expect that Europe, which has pledged to become a global leader in AI, would attract more venture capital in light of Trump’s controversial policies in the U.S., which have created uncertainty and confusion for founders, investors, and researchers alike. Moreover, the EU has committed hundreds of billions of euros to support the development of AI within its member countries and already has a number of successful, well-funded AI startups in its ranks (see Mistral, H, and Aleph Alpha, to name a few).
    But that anticipated shift in global investment hasn’t come to pass. There isn’t any sign of a mass VC exodus to the bloc, or of significant upticks in AI funding overseas — at least not yet.


    The same is true for China, which has spawned high-profile AI startups like DeepSeek and Butterfly Effect — the company behind the agentic platform Manus — but where VC activity in the country and the broader Asian region remains relatively austere. (Export controls impacting the ability of certain Asian countries to procure AI chips are almost certainly a factor.) In 2024, North American startups secured 75.6% of all VC AI funding — $106.24 billion. That share has only increased this year. So far in 2025, North American AI investments represent 86.2% ($79.74 billion) of all VC funding for AI globally.
    It paints a somewhat surprising picture. Even amid mounting political and regulatory headwinds under Trump’s second term, the U.S. remains the undisputed center for AI capital, meaning investors, fatigued as they may be by the administration’s unpredictability, are still counting on U.S. innovation to deliver the biggest returns, at least for now.
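    Those two shares imply global totals the article leaves unstated. As a quick back-of-the-envelope check, here is the arithmetic in Python, using only the PitchBook figures quoted above; the derived global totals are our own inference, not PitchBook’s published numbers.

        # Implied global VC funding for AI, derived from the North American
        # dollar figures and market shares quoted above (PitchBook via the
        # article); the global totals below are inferred, not reported.
        na_2024, share_2024 = 106.24e9, 0.756   # full-year 2024
        na_2025, share_2025 = 79.74e9, 0.862    # 2025 so far

        global_2024 = na_2024 / share_2024      # ~= $140.5 billion
        global_2025 = na_2025 / share_2025      # ~= $92.5 billion

        print(f"Implied global AI VC total, 2024:     ${global_2024 / 1e9:.1f}B")
        print(f"Implied global AI VC total, 2025 YTD: ${global_2025 / 1e9:.1f}B")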