• Looks like Anthropic’s CEO, Dario Amodei, has finally cracked the code to success: “No bad person should ever benefit from our success” is just a guideline, not a rule! Who knew? So let’s chase those Gulf State investments, because nothing says ethical AI like a quick dip into the glittering sands of questionable morals. It’s all about making profit, right? Principles are so last season. Can’t wait to see their new slogan: “Innovation with a splash of irony.”

    #Anthropic #EthicsInBusiness #GulfInvestments #AI #TechIrony
    Leaked Memo: Anthropic CEO Says the Company Will Pursue Gulf State Investments After All
    “Unfortunately, I think ‘No bad person should ever benefit from our success’ is a pretty difficult principle to run a business on,” wrote Anthropic CEO Dario Amodei in a note to staff obtained by WIRED.
  • Nvidia CEO slams Anthropic's chief over his claims of AI taking half of jobs and being unsafe — ‘Don’t do it in a dark room and tell me it’s safe’

    Nvidia CEO Jensen Huang disagrees with Anthropic CEO Dario Amodei's prediction that AI will wipe out nearly 50% of white-collar jobs.
    WWW.TOMSHARDWARE.COM
  • Anthropic launches Claude AI models for US national security

    Anthropic has unveiled a custom collection of Claude AI models designed for US national security customers. The announcement represents a potential milestone in the application of AI within classified government environments.

    The ‘Claude Gov’ models have already been deployed by agencies operating at the highest levels of US national security, with access strictly limited to those working within such classified environments.

    Anthropic says these Claude Gov models emerged from extensive collaboration with government customers to address real-world operational requirements. Despite being tailored for national security applications, Anthropic maintains that these models underwent the same rigorous safety testing as other Claude models in their portfolio.

    Specialised AI capabilities for national security

    The specialised models deliver improved performance across several critical areas for government operations. They feature enhanced handling of classified materials, with fewer instances where the AI refuses to engage with sensitive information—a common frustration in secure environments.

    Additional improvements include better comprehension of documents within intelligence and defence contexts, enhanced proficiency in languages crucial to national security operations, and superior interpretation of complex cybersecurity data for intelligence analysis.

    However, this announcement arrives amid ongoing debates about AI regulation in the US. Anthropic CEO Dario Amodei recently expressed concerns about proposed legislation that would grant a decade-long freeze on state regulation of AI.

    Balancing innovation with regulation

    In a guest essay published in The New York Times this week, Amodei advocated for transparency rules rather than regulatory moratoriums. He detailed internal evaluations revealing concerning behaviours in advanced AI models, including an instance where Anthropic’s newest model threatened to expose a user’s private emails unless a shutdown plan was cancelled.

    Amodei compared AI safety testing to wind tunnel trials for aircraft designed to expose defects before public release, emphasising that safety teams must detect and block risks proactively.

    Anthropic has positioned itself as an advocate for responsible AI development. Under its Responsible Scaling Policy, the company already shares details about testing methods, risk-mitigation steps, and release criteria—practices Amodei believes should become standard across the industry.

    He suggests that formalising similar practices industry-wide would enable both the public and legislators to monitor capability improvements and determine whether additional regulatory action becomes necessary.

    Implications of AI in national security

    The deployment of advanced models within national security contexts raises important questions about the role of AI in intelligence gathering, strategic planning, and defence operations.

    Amodei has expressed support for export controls on advanced chips and the military adoption of trusted systems to counter rivals like China, indicating Anthropic’s awareness of the geopolitical implications of AI technology.

    The Claude Gov models could potentially serve numerous applications for national security, from strategic planning and operational support to intelligence analysis and threat assessment—all within the framework of Anthropic’s stated commitment to responsible AI development.

    Regulatory landscape

    As Anthropic rolls out these specialised models for government use, the broader regulatory environment for AI remains in flux. The Senate is currently considering language that would institute a moratorium on state-level AI regulation, with hearings planned before voting on the broader technology measure.

    Amodei has suggested that states could adopt narrow disclosure rules that defer to a future federal framework, with a supremacy clause eventually preempting state measures to preserve uniformity without halting near-term local action.

    This approach would allow for some immediate regulatory protection while working toward a comprehensive national standard.

    As these technologies become more deeply integrated into national security operations, questions of safety, oversight, and appropriate use will remain at the forefront of both policy discussions and public debate.

For Anthropic, the challenge will be maintaining its commitment to responsible AI development while meeting the specialised needs of government customers for critical applications such as national security. See also: Reddit sues Anthropic over AI data scraping

    The post Anthropic launches Claude AI models for US national security appeared first on AI News.
    WWW.ARTIFICIALINTELLIGENCE-NEWS.COM
  • Reddit sues Anthropic, operator of the Claude AI

    Reddit filed suit against Anthropic in a San Francisco court on Wednesday, June 4. The platform accuses the California artificial-intelligence start-up of using its users’ public conversations without authorization to train its generative AI models, among them Claude, a language model and chatbot that competes with ChatGPT. According to the complaint, Reddit alleges that Anthropic trained its language models on “human” messages posted to Reddit. The suit relies in particular on a research paper published in December 2021 by Anthropic’s teams and co-signed by the company’s chief, Dario Amodei; the paper cites certain specific Reddit conversations that could be used to train such models, and mentions conversations held on Wikipedia as well. The complaint alleges that despite Anthropic’s public statements that it had blocked its automated data-collection systems from accessing Reddit, the company’s bots nonetheless continued “to connect to Reddit’s servers more than one hundred thousand times” since July 2024. Anthropic disputes the claims: “We disagree with Reddit’s assertions and will defend ourselves vigorously,” an Anthropic spokesperson told Agence France-Presse. Founded in San Francisco by former OpenAI engineers, Anthropic conspicuously promotes responsible development of generative AI.
    “This case is about Anthropic’s two personas: the public persona that tries to convince consumers it is a fair company that respects boundaries and the law, and the private persona that flouts every rule standing in the way of its attempts to line its pockets,” Reddit charges in the complaint. Reddit is seeking damages and an injunction compelling Anthropic to abide by its terms of use, which since 2024, the year Reddit went public, have prohibited the use of data from Reddit discussions unless an agreement or contract is signed with the platform. Reddit, which in October 2024 reported 97.2 million daily active users, has already concluded licensing deals with other generative-AI giants, including Google and OpenAI. Those deals allow the firms to use Reddit content under terms that protect users’ confidential information and provide financial compensation to the platform. (Le Monde with AFP)
    WWW.LEMONDE.FR
  • Alphabet CEO Sundar Pichai dismisses AI job fears, emphasizes expansion plans

    In a Bloomberg interview Wednesday night in downtown San Francisco, Alphabet CEO Sundar Pichai pushed back against concerns that AI could eventually make half the company’s 180,000-person workforce redundant. Instead, Pichai stressed the company’s commitment to growth through at least next year.
    “I expect we will grow from our current engineering phase even into next year, because it allows us to do more,” Pichai said, adding that AI is making engineers more productive by eliminating tedious tasks and enabling them to focus on more impactful work. Rather than replacing workers, he referred to AI as “an accelerator” that will drive new product development, thereby creating demand for more employees.
    Alphabet has staged numerous layoffs in recent years, though so far, cuts in 2025 appear to be more targeted than in previous years. It reportedly parted ways with fewer than 100 people in Google’s cloud division earlier this year and, more recently, hundreds more in its platforms and devices unit. The cuts in 2023 and 2024 were far more severe, with 12,000 people dropped from the company in 2023 and at least another 1,000 employees laid off last year.
    Looking forward, Pichai pointed to Alphabet’s expanding ventures like Waymo autonomous vehicles, quantum computing initiatives, and YouTube’s explosive growth as evidence of innovation opportunities that continually bubble up. He noted YouTube’s scale in India alone, with 100 million channels and 15,000 channels boasting over one million subscribers.
    At one point, Pichai said trying to think too far ahead is “pointless.” But he also acknowledged the legitimacy of fears about job displacement. Asked about Anthropic CEO Dario Amodei’s recent comments that AI could erode half of entry-level white-collar jobs within five years, he said, “I respect that… I think it’s important to voice those concerns and debate them.”
    As the interview wrapped up, Pichai was asked about the limits of AI, and whether it’s possible that the world might never achieve artificial general intelligence, meaning AI that’s as smart as humans at everything. He paused briefly before answering. “There’s a lot of forward progress ahead with the paths we are on, not only the set of ideas we are working on today, some of the newer ideas we are experimenting with,” he said.
    “I’m very optimistic on seeing a lot of progress. But you know,” he added, “you’ve always had these technology curves where you may hit a temporary plateau. So are we currently on an absolute path to AGI? I don’t think anyone can say for sure.”

    TECHCRUNCH.COM
  • AI and economic pressures reshape tech jobs amid layoffs

    Tech layoffs have continued in 2025. Much of that is being blamed on a combination of a slower economy and the adoption of automation via artificial intelligence.

    Nearly four in 10 Americans, for instance, believe generative AI (genAI) could diminish the number of available jobs as it advances, according to a study released in October by the New York Federal Reserve Bank.

    And the World Economic Forum’s Jobs Initiative study found that close to half (44%) of worker skills will be disrupted in the next five years — and 40% of tasks will be affected by the use of genAI tools and the large language models (LLMs) that underpin them.

    In April, the US tech industry lost 214,000 positions as companies shifted toward AI roles and skills-based hiring amid economic uncertainty. Tech sector companies reduced staffing by a net 7,000 positions in April, an analysis of data released by the US Bureau of Labor Statistics (BLS) showed.

    This year, 137 tech companies have fired 62,114 tech employees, according to Layoffs.fyi. Efforts to reduce headcount at government agencies by the unofficial US Department of Government Efficiency (DOGE) saw an additional 61,296 federal workers fired this year.

    Kye Mitchell, president of tech workforce staffing firm Experis US, believes the IT employment market is undergoing a fundamental transformation rather than experiencing traditional cyclical layoffs. Although Experis is seeing a 13% month-over-month decline in traditional software developer postings, it doesn’t represent “job destruction, it’s market evolution,” Mitchell said.

    “What we’re witnessing is the emergence of strategic technology orchestrators who harness AI to drive unprecedented business value,” she said.

    For example, organizations that once deployed two scrum teams of ten people to develop high-quality software are now achieving superior results with a single team of five AI-empowered developers.

    “This isn’t about cutting jobs; it’s about elevating roles,” Mitchell said.

    Specialized roles in particular are surging. Database architect positions are up 2,312%, statistician roles have increased 382%, and jobs for mathematicians have increased 1,272%. “These aren’t replacements; they’re vital for an AI-driven future,” she said.

    In fact, it’s an IT talent gap, not an employee surplus, that is now challenging organizations — and will continue to do so.

    With 76% of IT employers already struggling to find skilled tech talent, the market fundamentals favor skilled professionals, according to Mitchell. “The question isn’t whether there will be IT jobs — it’s whether we can develop the right skills fast enough to meet demand,” she said.

    For federal tech workers, outdated systems and slow procurement make it hard to attract and keep top tech talent. Agencies expect fast team deployment but operate with rigid, outdated processes, according to Justin Vianello, CEO of technology workforce development firm SkillStorm.

    Long security clearance delays add cost and time, often forcing companies to hire expensive, already-cleared talent. Meanwhile, modern technologists want to use current tools and make an impact — something hard to do with legacy systems and decade-long modernization efforts, he added.

    Many suggest that turning to AI will solve the tech talent shortage, but there is no evidence that AI will lead to a reduction in demand for tech talent, Vianello said. “On the contrary, companies see that the demand for tech talent has increased as they invest in preparing their workforce to properly use AI tools,” he said.

    A shortage of qualified talent is a bigger barrier to hiring than AI automation, he said, because organizations struggle to find candidates with the right certifications, skills, and clearances — especially in cloud, cybersecurity, and AI. Tech workers often lack skills in these areas because technology evolves faster than education and training can keep up, Vianello said. And while AI helps automate routine tasks, it can’t replace the strategic roles filled by skilled professionals.

    Seven out of 10 US organizations are struggling to find skilled workers to fill roles in an ever-evolving digital transformation landscape, and genAI has added to that headache, according to a ManpowerGroup survey released earlier this year.

    Job postings for AI skills surged 2,000% in 2024, but education and training in this area haven’t kept pace, according to Kelly Stratman, global ecosystem relationships enablement leader at Ernst & Young.

    “As formal education and training in AI skills still lag, it results in a shortage of AI talent that can effectively manage these technologies and demands,” she said in an earlier interview. “The AI talent shortage is most prominent among highly technical roles like data scientists/analysts, machine learning engineers, and software developers.”

    Economic uncertainty is creating a cautious hiring environment, but it’s more complex than tariffs alone. Experis data shows employers adopting a “wait and watch” stance as they monitor economic signals, with job openings down 11% year-over-year, according to Mitchell.

    “However, the bigger story is strategic workforce planning in an era of rapid technological change. Companies are being incredibly precise about where they allocate resources. Not because of economic pressure alone, but because the skills landscape is shifting so rapidly,” Mitchell said. “They’re prioritizing mission-critical roles while restructuring others around AI capabilities.”

    Top organizations see AI as a strategic shift, not just cost-cutting. Cutting talent now risks weakening core areas like cybersecurity, according to Mitchell.

    Skillstorm’s Vianello suggests that IT job hunters should begin to upgrade their skills with certifications that matter: AWS, Azure, CISSP, Security+, and AI/ML credentials open doors quickly, he said.

    “Veterans, in particular, have an edge; they bring leadership, discipline, and security clearances. Apprenticeships and fellowships offer a fast track into full-time roles by giving you experience that actually counts. And don’t overlook the intangibles: soft skills and project leadership are what elevate technologists into impact-makers,” Vianello said.

    Skills-based hiring has been on the rise for several years, as organizations seek to fill specific needs for big data analytics, programming (such as Rust), and AI prompt engineering. In fact, demand for genAI courses is surging, surpassing all other tech skills courses spanning fields from data science to cybersecurity, project management, and marketing.

    “AI isn’t replacing jobs — it’s fundamentally redefining how work gets done. The break point where technology truly displaces a position is when roughly 80% of tasks can be fully automated,” Mitchell said. “We’re nowhere near that threshold for most roles. Instead, we’re seeing AI augment skill sets and make professionals more capable, faster, and able to focus on higher-value work.”

    Leaders use AI as a strategic enabler — embedding it to enhance, not compete with, human developers, she said.

    Some industry forecasts predict a 30% productivity boost from AI tools, potentially adding more than $1.5 trillion to global GDP.

    For example, AI tools are expected to perform the lion’s share of coding. Techniques where humans use AI-augmented coding tools, such as “vibe coding,” are set to revolutionize software development by creating source code, generating tests automatically, and freeing up developer time for innovation instead of debugging code. 

    With vibe coding, developers use natural language in a conversational way that prompts the AI model to offer contextual ideas and generate code based on the conversation.
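    The generate-and-verify loop described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not any vendor's actual interface: the `generate_code` function is a canned stand-in for a real LLM API call, so the control flow is runnable. The natural-language request produces code, which is loaded into a sandbox namespace and checked automatically before a human reviews it.

    ```python
    def generate_code(request: str) -> str:
        # Stand-in for an LLM call (assumption: a real vibe-coding tool
        # would send `request` to a model and return its generated code).
        return (
            "def slugify(title):\n"
            "    return '-'.join(title.lower().split())\n"
        )

    def run_generated(request: str, test_input, expected):
        """Generate code from a natural-language request, load it,
        and run an automatically generated check against it."""
        code = generate_code(request)
        namespace = {}
        exec(code, namespace)                 # load the generated function
        result = namespace["slugify"](test_input)
        return result == expected, result

    ok, value = run_generated(
        "Write a slugify(title) helper",      # the "vibe" prompt
        "Hello Vibe Coding",
        "hello-vibe-coding",
    )
    print(ok, value)   # True hello-vibe-coding
    ```

    The point of the sketch is the shape of the loop — prompt, generated code, automated test, human review — rather than the toy function itself.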

    By 2028, 75% of professional developers will be using vibe coding and other genAI-powered coding tools, up from less than 10% in September 2023, according to Gartner Research. And within three years, 80% of enterprises will have integrated AI-augmented testing tools into their software engineering tool chain — a significant increase from approximately 15% early last year, Gartner said.

    A report from MIT Technology Review Insights found that 94% of business leaders now use genAI in software development, with 82% applying it in multiple stages — and 26% in four or more.

    Some industry experts place genAI’s use in creating code much higher. “What we are finding is that we’re three to six months from a world where AI is writing 90% of the code. And then in 12 months, we may be in a world where AI is writing essentially all of the code,” Anthropic CEO Dario Amodei said in a recent report and video interview.

    “The real [AI] transformation is in role evolution. Developers are becoming strategic technology orchestrators,” Mitchell from Experis said. “Data professionals are becoming business problem solvers. The demand isn’t disappearing; it’s becoming more sophisticated and more valuable.

    “In today’s economic climate, having the right tech talent with AI-enhanced capabilities isn’t a nice-to-have, it’s your competitive edge,” she said.
    WWW.COMPUTERWORLD.COM
  • The DeepSeek R1 update proves it's an active threat to OpenAI and Google

    DeepSeek's R1 update, plus the rest of the AI news this week.
    Credit: Thomas Fuller / SOPA Images / LightRocket / Getty Images

    This week, DeepSeek released an updated version of its R1 model on Hugging Face, reigniting the open-source versus closed-source competition. The updated version, called DeepSeek-R1-0528, has 685 billion parameters, an upgrade from January's version, which had 671 billion. Unlike OpenAI and Google's models, which are famously closed-source, DeepSeek's model weights are publicly available. According to the benchmarks, the R1-0528 update has improved reasoning and inference capabilities and is closing the gap with OpenAI's o3 and Google's Gemini 2.5 Pro. DeepSeek also introduced a distilled version of R1-0528 using Alibaba's Qwen3 8B model. This is an example of a lightweight model that is less capable but also requires less computing power. DeepSeek-R1-0528-Qwen3-8B outperforms both Google's latest lightweight model Gemini-2.5-Flash-Thinking-0520 and OpenAI's o3-mini in certain benchmarks. But the bigger deal is that DeepSeek's distilled model can reportedly run on a single GPU, according to TechCrunch.

    To… distill all this information, the Chinese rival is catching up to its U.S. competitors with an open-weight approach that's cheaper and more accessible. Plus, DeepSeek continues to prove that AI models may not require as much computing power as OpenAI, Google, and other AI heavyweights currently use. Suffice it to say, watch this space. That said, DeepSeek's models also have their drawbacks. According to one AI developer (via TechCrunch), the new DeepSeek update is even more censored than its previous version when it comes to criticism of the Chinese government. Of course, a lot more happened in the AI world over the past few days. After last week's parade of AI events from Google, Anthropic, and Microsoft, this week was lighter on product and feature news. That's one reason DeepSeek's R1 update captured the AI world's attention this week. In other AI news, Anthropic finally gets voice mode, AI influencers go viral, Anthropic's CEO warns of mass layoffs, and an AI-generated kangaroo.

    Google's Veo 3 takes the internet by storm

    On virtually every social media platform, users are freaking out about Veo 3, Google's new AI video model. The results are impressive, and we're already seeing short films made entirely with Veo 3. Not bad for a product that came out 11 days ago.

    Not to be outdone by AI video artists, a reporter from The Wall Street Journal made a short film about herself and a robot using Veo 3. Mashable's Tech Editor Timothy Werth recapped Veo's big week and had a simple conclusion: We're so cooked.

    More AI product news: Claude's new voice mode and the beginning of the agentic browser era

    After last week's barrage, this week was lighter on the volume of AI news. But what was announced this week is no less significant.

    Anthropic finally introduced its own voice mode for Claude to compete with ChatGPT, Grok, and Gemini. The feature is currently in beta on mobile for the Claude app and will even be available to free plans with a limit of 20 to 30 voice conversations per day. Anthropic says you can ask Claude to summarize your calendar or read documents out loud. Paying subscribers can connect to Google Workspace for Calendar, Gmail, and Docs access.

    OpenAI is exploring the ability to sign into third-party apps with ChatGPT. We don't know much yet, but the company posted an interest form on its site for developers using Codex, its engineering agent, to add this capability to their own apps. It may not sound like a big deal, but it basically means users could easily link their personalized ChatGPT memories and settings to third-party apps, much like the way it works when you sign into a new app with your Google account.

    Opera announced a new agentic AI browser called Neon. "Much more than a place to view web pages, Neon can browse with you or for you, take action, and help you get things done," the announcement read. That includes a chatbot interface within the browser and the ability to fill in web forms for tasks like booking trips and shopping. The announcement, which included a promo video of a humanoid robot browsing the web, is scant on details, but it says Neon will be a "premium subscription product" and has a waitlist to sign up.

    The browser has suddenly become a new frontier for agentic AI, now that it's capable of automating web search tasks. Perplexity is working on a similar tool called Comet, and The Browser Company pivoted from its Arc browser to a more AI-centric browser called Dia. All of this is happening while Google might be forced to sell off Chrome, which OpenAI has kindly offered to take off its hands.
Dario Amodei's prediction about AI replacing entry-level jobs is already starting to happen

Anthropic CEO Dario Amodei warned in an interview with Axios that AI could "wipe out half of all entry-level white-collar jobs." Amodei's predictions might be spot on: a new study from VC firm SignalFire found that hiring for entry-level jobs is down to 7 percent, from 25 percent the previous year. Some of that is due to changes in the economic climate, but AI is definitely a factor, since firms are opting to automate the less-technical aspects of work that would've been taken on by new hires.

    The latest in AI culture: That AI-generated kangaroo, Judge Judy, and everything else

    Google wants you to know its AI Overviews reach 1.5 billion people a month. They probably don't want you to know AI Overviews still struggles to count, spell, and know what year it is. As Mashable's Tim Marcin put it, would AI Overviews pass concussion protocol?

    The proposal of a 10-year ban on states regulating AI is pretty unpopular, according to a poll from Common Sense Media. The survey found that 57 percent of respondents opposed the moratorium, including half of the Republican respondents. As Mashable's Rebecca Ruiz reported, "the vast majority of respondents, regardless of their political affiliation, agreed that Congress shouldn't ban states from enacting or enforcing their own youth online safety and privacy laws."

    In the private sector, The New York Times signed a licensing deal with Amazon to allow its editorial content to be used for Amazon's AI models. The details are unclear, but from the outside, this seems like a change of tune for the Times, which is currently suing OpenAI for copyright infringement for allegedly using its content to train its models.

    That viral video of an emotional support kangaroo holding a plane ticket and being denied boarding? It's AI-generated, of course. Slightly more obvious, but no less creepy, is another viral trend of using AI to turn public figures like Emmanuel Macron and Judge Judy into babies. These are strange AI-slop-infested times we're living in.

    AI has some positive uses too. This week, we learned about a new humanoid robot from Hugging Face called HopeJr, which could be available for sale later this year for just $3,000. And to end this recap on a high note, the nonprofit Colossal Foundation has developed an AI algorithm to detect the bird calls of the near-extinct tooth-billed pigeon.
Also known as the "little dodo," the tooth-billed pigeon is Samoa's national bird, and scientists are using the bioacoustic algorithm to locate and protect them. Want to get the latest AI news, from new product features to viral trends? Check back next week for another AI news recap, and in the meantime, follow @cecily_mauran and @mashable for more news.

Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.

    Topics
    OpenAI
    DeepSeek

    Cecily Mauran
    Tech Reporter

    Cecily is a tech reporter at Mashable who covers AI, Apple, and emerging tech trends. Before getting her master's degree at Columbia Journalism School, she spent several years working with startups and social impact businesses for Unreasonable Group and B Lab. Before that, she co-founded a startup consulting business for emerging entrepreneurial hubs in South America, Europe, and Asia. You can find her on X at @cecily_mauran.
  • OpenAI wants ChatGPT to be a ‘super assistant’ for every part of your life

    Thanks to the legal discovery process, Google’s antitrust trial with the Department of Justice has provided a fascinating glimpse into the future of ChatGPT. An internal OpenAI strategy document titled “ChatGPT: H1 2025 Strategy” describes the company’s aspiration to build an “AI super assistant that deeply understands you and is your interface to the internet.” Although the document is heavily redacted in parts, it reveals that OpenAI aims for ChatGPT to soon develop into much more than a chatbot.

    “In the first half of next year, we’ll start evolving ChatGPT into a super-assistant: one that knows you, understands what you care about, and helps with any task that a smart, trustworthy, emotionally intelligent person with a computer could do,” reads the document from late 2024. “The timing is right. Models like o2 and o3 are finally smart enough to reliably perform agentic tasks, tools like computer use can boost ChatGPT’s ability to take action, and interaction paradigms like multimodality and generative UI allow both ChatGPT and users to express themselves in the best way for the task.”

    The document goes on to describe a “super assistant” as “an intelligent entity with T-shaped skills” for both widely applicable and niche tasks. “The broad part is all about making life easier: answering a question, finding a home, contacting a lawyer, joining a gym, planning vacations, buying gifts, managing calendars, keeping track of todos, sending emails.” It mentions coding as an early example of a more niche task.

    Even when reading around the redactions, it’s clear that OpenAI sees hardware as essential to its future, and that it wants people to think of ChatGPT as not just a tool, but a companion. This tracks with Sam Altman recently saying that young people are using ChatGPT like a “life advisor.”

    “Today, ChatGPT is in our lives through existing form factors — our website, phone, and desktop apps,” another part of the strategy document reads.
    “But our vision for ChatGPT is to help you with all of your life, no matter where you are. At home, it should help answer questions, play music, and suggest recipes. On the go, it should help you get to places, find the best restaurants, or catch up with friends. At work, it should help you take meeting notes, or prepare for the big presentation. And on solo walks, it should help you reflect and wind down.”

    At the same time, OpenAI finds itself in a wobbly position. Its infrastructure isn’t able to handle ChatGPT’s rising usage, which explains Altman’s focus on building data centers. In a section of the document describing AI chatbot competition, the company writes that “we are leading here, but we can’t rest,” and that “growth and revenue won’t line up forever.” It acknowledges that there are “powerful incumbents who will leverage their distribution to advantage their own products,” and states that OpenAI will advocate for regulation that requires other platforms to allow people to set ChatGPT as the default assistant.

    “We have what we need to win: one of the fastest-growing products of all time, a category-defining brand, a research lead, a compute lead, a world-class research team, and an increasing number of effective people with agency who are motivated to ship,” the OpenAI document states. “We don’t rely on ads, giving us flexibility on what to build. Our culture values speed, bold moves, and self-disruption. Maintaining these advantages is hard work but, if we do, they will last for a while.”

    Elsewhere

    Apple chickens out: For the first time in a decade, Apple won’t have its execs participate in John Gruber’s annual post-WWDC live podcast. Gruber recently wrote the viral “something is rotten in the state of Cupertino” essay, which was widely discussed in Apple circles. Although he hasn’t publicly connected that critical piece to the company backing out of his podcast, it’s easy to see the throughline.
It says a lot about the state of Apple when its leaders don’t even want to participate in what has historically been a friendly forum.

Elon was high: As Elon Musk attempts to reframe the public’s view of him by doing interviews about SpaceX, The New York Times reports that last year, he was taking so much ketamine that it “was affecting his bladder.” He also reportedly “traveled with a daily medication box that held about 20 pills, including ones with the markings of the stimulant Adderall.” Both Musk and the White House have had multiple opportunities to directly refute this report, and they have not. Now, Musk is at least partially stepping away from DOGE along with key lieutenants like Steve Davis. DOGE may be a failure based on Musk’s own stated hopes for spending cuts, but his closeness to Trump has certainly helped rescue X from financial ruin and grown SpaceX’s business. Now, the more difficult work begins: saving Tesla.

Overheard

“The way we do ranking is sacrosanct to us.” - Google CEO Sundar Pichai on Decoder, explaining why the company’s search results won’t be changed for President Trump or anyone else.

“Compared to previous technology changes, I’m a little bit more worried about the labor impact… Yes, people will adapt, but they may not adapt fast enough.” - Anthropic CEO Dario Amodei on CNN, raising the alarm about the technology he is developing.

“Meta is a very different company than it was nine years ago when they fired me.” - Anduril founder Palmer Luckey, telling Ashlee Vance why he is linking up with Mark Zuckerberg to make headsets for the military.

Personnel log

The flattening of Meta’s AI organization has taken effect, with VP Ahmad Al-Dahle no longer overseeing the entire group. Now, he co-leads “AGI Foundations” with Amir Frenkel, VP of engineering, while Connor Hayes runs all AI products.
All three men now report to Meta CPO Chris Cox, who has diplomatically framed the changes as a way to “give each org more ownership.”

Xbox co-founder J Allard is leading a new ‘breakthrough’ devices group called ZeroOne. One of the devices will be smart home-related, according to job listings.

C.J. Mahoney, a former Trump administration official, is being promoted to general counsel at Microsoft, which has also hired Lisa Monaco from the last Biden administration to lead global policy.

Reed Hastings is joining the board of Anthropic “because I believe in their approach to AI development, and to help humanity progress.”

Sebastian Barrios, previously SVP at Mercado Libre, is joining Roblox as SVP of engineering for several areas, including ads, game discovery, and the company’s virtual currency work.

Fidji Simo’s replacement at Instacart will be chief business officer Chris Rogers, who will become the company’s next CEO on August 15th after she officially joins OpenAI.

Link list

More to click on: If you haven’t already, don’t forget to subscribe to The Verge, which includes unlimited access to Command Line and all of our reporting.

As always, I welcome your feedback, especially if you have thoughts on this issue or a story idea to share. You can respond here or ping me securely on Signal.

Thanks for subscribing.
    #openai #wants #chatgpt #super #assistant
    WWW.THEVERGE.COM
    OpenAI wants ChatGPT to be a ‘super assistant’ for every part of your life
Thanks to the legal discovery process, Google’s antitrust trial with the Department of Justice has provided a fascinating glimpse into the future of ChatGPT. An internal OpenAI strategy document titled “ChatGPT: H1 2025 Strategy” describes the company’s aspiration to build an “AI super assistant that deeply understands you and is your interface to the internet.” Although the document is heavily redacted in parts, it reveals that OpenAI aims for ChatGPT to soon develop into much more than a chatbot.

“In the first half of next year, we’ll start evolving ChatGPT into a super-assistant: one that knows you, understands what you care about, and helps with any task that a smart, trustworthy, emotionally intelligent person with a computer could do,” reads the document from late 2024. “The timing is right. Models like o2 and o3 are finally smart enough to reliably perform agentic tasks, tools like computer use can boost ChatGPT’s ability to take action, and interaction paradigms like multimodality and generative UI allow both ChatGPT and users to express themselves in the best way for the task.”

The document goes on to describe a “super assistant” as “an intelligent entity with T-shaped skills” for both widely applicable and niche tasks. “The broad part is all about making life easier: answering a question, finding a home, contacting a lawyer, joining a gym, planning vacations, buying gifts, managing calendars, keeping track of todos, sending emails.” It mentions coding as an early example of a more niche task.

Even when reading around the redactions, it’s clear that OpenAI sees hardware as essential to its future, and that it wants people to think of ChatGPT as not just a tool, but a companion. This tracks with Sam Altman recently saying that young people are using ChatGPT like a “life advisor.”

“Today, ChatGPT is in our lives through existing form factors — our website, phone, and desktop apps,” another part of the strategy document reads.
“But our vision for ChatGPT is to help you with all of your life, no matter where you are. At home, it should help answer questions, play music, and suggest recipes. On the go, it should help you get to places, find the best restaurants, or catch up with friends. At work, it should help you take meeting notes, or prepare for the big presentation. And on solo walks, it should help you reflect and wind down.”

At the same time, OpenAI finds itself in a wobbly position. Its infrastructure isn’t able to handle ChatGPT’s rising usage, which explains Altman’s focus on building data centers. In a section of the document describing AI chatbot competition, the company writes that “we are leading here, but we can’t rest,” and that “growth and revenue won’t line up forever.” It acknowledges that there are “powerful incumbents who will leverage their distribution to advantage their own products,” and states that OpenAI will advocate for regulation that requires other platforms to allow people to set ChatGPT as the default assistant. (Coincidentally, Apple is rumored to soon let iOS users also select Google’s Gemini for Siri queries. Meta AI just hit one billion users as well, thanks mostly to its many hooks in Instagram, WhatsApp, and Facebook.)

“We have what we need to win: one of the fastest-growing products of all time, a category-defining brand, a research lead (reasoning, multimodal), a compute lead, a world-class research team, and an increasing number of effective people with agency who are motivated to ship,” the OpenAI document states. “We don’t rely on ads, giving us flexibility on what to build. Our culture values speed, bold moves, and self-disruption. Maintaining these advantages is hard work but, if we do, they will last for a while.”

Elsewhere

Apple chickens out: For the first time in a decade, Apple won’t have its execs participate in John Gruber’s annual post-WWDC live podcast.
Gruber recently wrote the viral “something is rotten in the state of Cupertino” essay, which was widely discussed in Apple circles. Although he hasn’t publicly connected that critical piece to the company backing out of his podcast, it’s easy to see the throughline. It says a lot about the state of Apple when its leaders don’t even want to participate in what has historically been a friendly forum.

Elon was high: As Elon Musk attempts to reframe the public’s view of him by doing interviews about SpaceX, The New York Times reports that last year, he was taking so much ketamine that it “was affecting his bladder.” He also reportedly “traveled with a daily medication box that held about 20 pills, including ones with the markings of the stimulant Adderall.” Both Musk and the White House have had multiple opportunities to directly refute this report, and they have not. Now, Musk is at least partially stepping away from DOGE along with key lieutenants like Steve Davis. DOGE may be a failure based on Musk’s own stated hopes for spending cuts, but his closeness to Trump has certainly helped rescue X from financial ruin and grown SpaceX’s business. Now, the more difficult work begins: saving Tesla.

Overheard

“The way we do ranking is sacrosanct to us.” - Google CEO Sundar Pichai on Decoder, explaining why the company’s search results won’t be changed for President Trump or anyone else.

“Compared to previous technology changes, I’m a little bit more worried about the labor impact… Yes, people will adapt, but they may not adapt fast enough.” - Anthropic CEO Dario Amodei on CNN raising the alarm about the technology he is developing.

“Meta is a very different company than it was nine years ago when they fired me.” - Anduril founder Palmer Luckey telling Ashlee Vance why he is linking up with Mark Zuckerberg to make headsets for the military.

Personnel log

The flattening of Meta’s AI organization has taken effect, with VP Ahmad Al-Dahle no longer overseeing the entire group.
Now, he co-leads “AGI Foundations” with Amir Frenkel, VP of engineering, while Connor Hayes runs all AI products. All three men now report to Meta CPO Chris Cox, who has diplomatically framed the changes as a way to “give each org more ownership.”

Xbox co-founder J Allard is leading a new ‘breakthrough’ devices group at Amazon called ZeroOne. One of the devices will be smart home-related, according to job listings.

C.J. Mahoney, a former Trump administration official, is being promoted to general counsel at Microsoft, which has also hired Lisa Monaco from the last Biden administration to lead global policy.

Reed Hastings is joining the board of Anthropic “because I believe in their approach to AI development, and to help humanity progress.” (He’s joining Anthropic’s corporate board, not the supervising board of its public benefit trust that can hire and fire corporate directors.)

Sebastian Barrios, previously SVP at Mercado Libre, is joining Roblox as SVP of engineering for several areas, including ads, game discovery, and the company’s virtual currency work.

Fidji Simo’s replacement at Instacart will be chief business officer Chris Rogers, who will become the company’s next CEO on August 15th after she officially joins OpenAI.

Link list

More to click on: If you haven’t already, don’t forget to subscribe to The Verge, which includes unlimited access to Command Line and all of our reporting. As always, I welcome your feedback, especially if you have thoughts on this issue or a story idea to share. You can respond here or ping me securely on Signal. Thanks for subscribing.
  • The Real Life Tech Execs That Inspired Jesse Armstrong’s Mountainhead

Jesse Armstrong loves to pull fictional stories out of reality. His universally acclaimed TV show Succession, for instance, was inspired by real-life media dynasties like the Murdochs and the Hearsts. Similarly, his newest film Mountainhead centers upon characters that share key traits with the tech world’s most powerful leaders: Elon Musk, Mark Zuckerberg, Sam Altman, and others.

Mountainhead, which releases on HBO on May 31 at 8 p.m. ET, portrays four top tech executives who retreat to a Utah hideaway as the AI deepfake tools newly released by one of their companies wreak havoc across the world. As the believable deepfakes inflame hatred on social media and spark real-world violence, the comfortably appointed quartet mulls a global governmental takeover, intergalactic conquest, and immortality, before interpersonal conflict derails their plans.

Armstrong tells TIME in a Zoom interview that he first became interested in writing a story about tech titans after reading books like Michael Lewis’ Going Infinite (about Sam Bankman-Fried) and Ashlee Vance’s Elon Musk: Tesla, SpaceX, and the Quest for a Fantastic Future, as well as journalistic profiles of Peter Thiel, Marc Andreessen, and others. He then built the story around the interplay between four character archetypes—the father, the dynamo, the usurper, and the hanger-on—and conducted extensive research so that his fictional executives reflected real ones. His characters, he says, aren’t one-to-one matches, but “Frankenstein monsters with limbs sewn together.”

These characters are deeply flawed and destructive, to say the least. Armstrong says he did not intend for the film to be a wholly negative depiction of tech leaders and AI development. “I do try to take myself out of it, but obviously my sense of what this tech does and could do infuses the piece. Maybe I do have some anxieties,” he says. Armstrong contends that the film is more so channeling fears that AI leaders themselves have warned about.
“If somebody who knows the technology better than anyone in the world thinks there's a 1/5th chance that it's going to wipe out humanity—and they're some of the optimists—I think that's legitimately quite unnerving,” he says. Here’s how each of the characters in Mountainhead resembles real-world tech leaders. This article contains spoilers.

Venis (Cory Michael Smith) is the dynamo.
Cory Michael Smith in Mountainhead. Macall Polay—HBO

Venis is Armstrong’s “dynamo”: the richest man in the world, who has gained his wealth from his social media platform Traam and its 4 billion users. Venis is ambitious, juvenile, and self-centered, even questioning whether other people are as real as he and his friends are. Venis’ first obvious comp is Elon Musk, the richest man in the real world. Like Musk, Venis is obsessed with going to outer space and with using his enormous war chest to build hyperscale data centers to create powerful anti-woke AI systems. Venis also has a strange relationship with his child, essentially using it as a prop to help him through his own emotional turmoil. Throughout the movie, others caution Venis to shut down his deepfake AI tools, which have led to military conflict and the desecration of holy sites across the world. Venis rebuffs them and says that people just need to adapt to technological changes and focus on the cool art being made. This argument is similar to those made by Sam Altman, who has argued that OpenAI needs to unveil ChatGPT and other cutting-edge tools as fast as possible in order to show the public the power of the technology. Like Mark Zuckerberg, Venis presides over a massively popular social media platform that some have accused of ignoring harms in favor of growth.
Just as Amnesty International accused Meta of having “substantially contributed” to human rights violations perpetrated against Myanmar’s Rohingya ethnic group, Venis complains of the UN being “up his ass for starting a race war.”

Randall (Steve Carell) is the father.
Steve Carell in Mountainhead. Macall Polay—HBO

The group’s eldest member is Randall, an investor and technologist who resembles Marc Andreessen and Peter Thiel in his lofty philosophizing and quest for immortality. Like Andreessen, Randall is a staunch accelerationist who believes that U.S. companies need to develop AI as fast as possible in order to both prevent the Chinese from controlling the technology, and to ostensibly ignite a new American utopia in which productivity, happiness, and health flourish. Randall’s power comes from the fact that he was Venis’ first investor, just as Thiel was an early investor in Facebook. While Andreessen pens manifestos about technological advancement, Randall paints his mission in grandiose, historical terms, using anti-democratic, sci-fi-inflected language that resembles that of the philosopher Curtis Yarvin, who has been funded and promoted by Thiel over his career. Randall’s justification of murder through utilitarian and Kantian lenses calls to mind Sam Bankman-Fried’s extensive philosophizing, which included a declaration that he would roll the dice on killing everyone on earth if there was a 51% chance he would create a second earth. Bankman-Fried’s approach—in embracing risk and harm in order to reap massive rewards—led him to be convicted of massive financial fraud. Randall is also obsessed with longevity, just like Thiel, who has railed for years against the “inevitability of death” and yearns for “super-duper medical treatments” that would render him immortal.

Jeff (Ramy Youssef) is the usurper.
Ramy Youssef in Mountainhead. Macall Polay—HBO

Jeff is a technologist who often serves as the movie’s conscience, slinging criticisms about the other characters.
But he’s also deeply embedded within their world, and he needs their resources, particularly Venis’ access to computing power, to thrive. In the end, Jeff sells out his values for his own survival and well-being. AI skeptics have lobbed similar criticisms at the leaders of the main AI labs, including Altman—who started OpenAI as a nonprofit before attempting to restructure the company—as well as Demis Hassabis and Dario Amodei. Hassabis is the CEO of Google DeepMind and a winner of the 2024 Nobel Prize in Chemistry: a rare scientist surrounded by businessmen and technologists. In order to try to achieve his AI dreams of curing disease and halting global warming, Hassabis enlisted with Google, inking a contract in 2014 that prohibited Google from using his technology for military applications. But that clause has since disappeared, and the AI systems developed under Hassabis are being sold, via Google, to militaries like Israel’s. Another parallel can be drawn between Jeff and Amodei, an AI researcher who defected from OpenAI after becoming worried that the company was cutting back its safety measures, and then formed his own company, Anthropic. Amodei has urged governments to create AI guardrails and has warned about the potentially catastrophic effects of the AI industry’s race dynamics. But some have criticized Anthropic for operating similarly to OpenAI, prioritizing scale in a way that exacerbates competitive pressures.

Souper (Jason Schwartzman) is the hanger-on.
Jason Schwartzman in Mountainhead. Macall Polay—HBO

Every quartet needs its Turtle or its Ringo: a clear fourth wheel to serve as a punching bag for the rest of the group’s alpha males. Mountainhead’s hanger-on is Souper, thus named because he has soup kitchen money compared to the rest. In order to prove his worth, he’s fixated on getting funding for a meditation startup that he hopes will eventually become an “everything app.” No tech exec would want to be compared to Souper, who has a clear inferiority complex.
But plenty of tech leaders have emphasized the importance of meditation and mindfulness—including Twitter co-founder and Square CEO Jack Dorsey, who often goes on meditation retreats. Armstrong, in his interview, declined to answer specific questions about his characters’ inspirations, but conceded that some of the speculations were in the right ballpark. “For people who know the area well, it's a little bit of a fun house mirror in that you see something and are convinced that it's them,” he says. “I think all of those people featured in my research. There's bits of Andreessen and David Sacks and some of those philosopher types. It’s a good parlor game to choose your Frankenstein limbs.”
    TIME.COM
    The Real Life Tech Execs That Inspired Jesse Armstrong’s Mountainhead
    Jesse Armstrong loves to pull fictional stories out of reality. His universally acclaimed TV show Succession, for instance, was inspired by real-life media dynasties like the Murdochs and the Hearsts. Similarly, his newest film Mountainhead centers upon characters that share key traits with the tech world’s most powerful leaders: Elon Musk, Mark Zuckerberg, Sam Altman, and others.Mountainhead, which releases on HBO on May 31 at 8 p.m. ET, portrays four top tech executives who retreat to a Utah hideaway as the AI deepfake tools newly released by one of their companies wreak havoc across the world. As the believable deepfakes inflame hatred on social media and real-world violence, the comfortably-appointed quartet mulls a global governmental takeover, intergalactic conquest and immortality, before interpersonal conflict derails their plans.Armstrong tells TIME in a Zoom interview that he first became interested in writing a story about tech titans after reading books like Michael Lewis’ Going Infinite (about Sam Bankman-Fried) and Ashlee Vance’s Elon Musk: Tesla, SpaceX, and the Quest for a Fantastic Future, as well as journalistic profiles of Peter Thiel, Marc Andreessen, and others. He then built the story around the interplay between four character archetypes—the father, the dynamo, the usurper, and the hanger-on—and conducted extensive research so that his fictional executives reflected real ones. His characters, he says, aren’t one-to-one matches, but “Frankenstein monsters with limbs sewn together.” These characters are deeply flawed and destructive, to say the least. Armstrong says he did not intend for the film to be a wholly negative depiction of tech leaders and AI development. “I do try to take myself out of it, but obviously my sense of what this tech does and could do infuses the piece. Maybe I do have some anxieties,” he says. Armstrong contends that the film is more so channeling fears that AI leaders themselves have warned about. 
    “If somebody who knows the technology better than anyone in the world thinks there’s a one-in-five chance that it’s going to wipe out humanity—and they’re some of the optimists—I think that’s legitimately quite unnerving,” he says.
    Here’s how each of the characters in Mountainhead resembles real-world tech leaders. This article contains spoilers.
    Venis (Cory Michael Smith) is the dynamo.
    Cory Michael Smith in Mountainhead. Macall Polay—HBO
    Venis is Armstrong’s “dynamo”: the richest man in the world, who has gained his wealth from his social media platform Traam and its 4 billion users. Venis is ambitious, juvenile, and self-centered, even questioning whether other people are as real as he and his friends. Venis’ first obvious comp is Elon Musk, the richest man in the real world. Like Musk, Venis is obsessed with going to outer space and with using his enormous war chest to build hyperscale data centers to create powerful anti-woke AI systems. Venis also has a strange relationship with his child, essentially using it as a prop to help him through his own emotional turmoil.
    Throughout the movie, others caution Venis to shut down his deepfake AI tools, which have led to military conflict and the desecration of holy sites across the world. Venis rebuffs them and says that people just need to adapt to technological changes and focus on the cool art being made. This argument is similar to those made by Sam Altman, who has argued that OpenAI needs to unveil ChatGPT and other cutting-edge tools as fast as possible in order to show the public the power of the technology. Like Mark Zuckerberg, Venis presides over a massively popular social media platform that some have accused of ignoring harms in favor of growth.
    Just as Amnesty International accused Meta of having “substantially contributed” to human rights violations perpetrated against Myanmar’s Rohingya ethnic group, Venis complains of the UN being “up his ass for starting a race war.”
    Randall (Steve Carell) is the father.
    Steve Carell in Mountainhead. Macall Polay—HBO
    The group’s eldest member is Randall, an investor and technologist who resembles Marc Andreessen and Peter Thiel in his lofty philosophizing and quest for immortality. Like Andreessen, Randall is a staunch accelerationist who believes that U.S. companies need to develop AI as fast as possible, both to prevent the Chinese from controlling the technology and to ostensibly ignite a new American utopia in which productivity, happiness, and health flourish. Randall’s power comes from the fact that he was Venis’ first investor, just as Thiel was an early investor in Facebook.
    While Andreessen pens manifestos about technological advancement, Randall paints his mission in grandiose, historical terms, using anti-democratic, sci-fi-inflected language that resembles that of the philosopher Curtis Yarvin, who has been funded and promoted by Thiel over his career. Randall’s justification of murder through utilitarian and Kantian lenses calls to mind Sam Bankman-Fried’s extensive philosophizing, which included a declaration that he would roll the dice on killing everyone on earth if there was a 51% chance of creating a second earth. Bankman-Fried’s approach—embracing risk and harm in order to reap massive rewards—led to his conviction for massive financial fraud. Randall is also obsessed with longevity, just like Thiel, who has railed for years against the “inevitability of death” and yearns for “super-duper medical treatments” that would render him immortal.
    Jeff (Ramy Youssef) is the usurper.
    Ramy Youssef in Mountainhead. Macall Polay—HBO
    Jeff is a technologist who often serves as the movie’s conscience, slinging criticisms about the other characters.
    But he’s also deeply embedded within their world, and he needs their resources, particularly Venis’ access to computing power, to thrive. In the end, Jeff sells out his values for his own survival and well-being. AI skeptics have lobbed similar criticisms at the leaders of the main AI labs, including Altman—who started OpenAI as a nonprofit before attempting to restructure the company—as well as Demis Hassabis and Dario Amodei.
    Hassabis, the CEO of Google DeepMind and a winner of the 2024 Nobel Prize in Chemistry, is a rare scientist surrounded by businessmen and technologists. In order to pursue his AI dreams of curing disease and halting global warming, Hassabis enlisted with Google, inking a contract in 2014 that prohibited Google from using his technology for military applications. But that clause has since disappeared, and the AI systems developed under Hassabis are being sold, via Google, to militaries like Israel’s.
    Another parallel can be drawn between Jeff and Amodei, an AI researcher who defected from OpenAI after becoming worried that the company was cutting back its safety measures, and then formed his own company, Anthropic. Amodei has urged governments to create AI guardrails and has warned about the potentially catastrophic effects of the AI industry’s race dynamics. But some have criticized Anthropic for operating similarly to OpenAI, prioritizing scale in a way that exacerbates competitive pressures.
    Souper (Jason Schwartzman) is the hanger-on.
    Jason Schwartzman in Mountainhead. Macall Polay—HBO
    Every quartet needs its Turtle or its Ringo: a clear fourth wheel to serve as a punching bag for the rest of the group’s alpha males. Mountainhead’s hanger-on is Souper, thus named because he has soup-kitchen money compared to the rest (hundreds of millions as opposed to billions of dollars).
In order to prove his worth, he’s fixated on getting funding for a meditation startup that he hopes will eventually become an “everything app.” No tech exec would want to be compared to Souper, who has a clear inferiority complex. But plenty of tech leaders have emphasized the importance of meditation and mindfulness—including Twitter co-founder and Square CEO Jack Dorsey, who often goes on meditation retreats. Armstrong, in his interview, declined to answer specific questions about his characters’ inspirations, but conceded that some of the speculations were in the right ballpark. “For people who know the area well, it's a little bit of a fun house mirror in that you see something and are convinced that it's them,” he says. “I think all of those people featured in my research. There's bits of Andreessen and David Sacks and some of those philosopher types. It’s a good parlor game to choose your Frankenstein limbs.”
  • AI could erase half of all entry-level white-collar jobs within five years, warns Anthropic CEO

    What just happened? Hearing people warn about the danger that generative AI presents to the global job market is concerning enough, but it's especially worrying when these ominous predictions come from those behind the technology. Dario Amodei, CEO of Anthropic, believes that AI could wipe out about half of all entry-level white-collar jobs in the next five years, pushing unemployment as high as 20%.
    Amodei made his comments during an interview with Axios. He said that AI companies and the government needed to stop "sugar-coating" the potential mass elimination of jobs across technology, finance, law, consulting and other white-collar professions, with entry-level jobs most at risk.

    Amodei said he was making this warning public in the hope that the government and other AI giants such as OpenAI will start preparing ways to protect the nation from a situation that could get out of hand.
    "Most of them are unaware that this is about to happen," Amodei said. "It sounds crazy, and people just don't believe it."

    The CEO's comments are backed up by reports on the state of the job market. The US IT job market declined for the second year in a row in 2024. There was also a report from SignalFire that found Big Tech's hiring of new graduates is down by over 50% compared to pre-pandemic levels in 2019. Startups, meanwhile, have seen their hiring of new grads fall by over 30% during the same period.
    We're also seeing huge layoffs across multiple tech companies, a large part of which can be attributed to AI replacing workers' duties.
    The one bit of good news for workers is that some firms, including Klarna and Duolingo, are finding that the subpar performance of these bots, along with the public's negative feelings toward their use, is forcing them to start hiring humans again.
    Amodei's Anthropic AI firm is playing its own part in all this, of course. The company's latest Claude 4 AI model can code at a proficiency level close to that of humans – it's also very good at lying and blackmail.
    "We, as the producers of this technology, have a duty and an obligation to be honest about what is coming," Amodei said. "I don't think this is on people's radar."
    The AI arms race in this billion-dollar industry is resulting in LLMs improving all the time. And with the US in a battle to stay ahead of China, regulation is rarely high on the government's agenda.
    AI companies tend to claim that the technology will augment jobs, helping people become more productive. That might be true right now, but it won't be long before the systems are able to replace the people they are helping.
    Amodei says the first step in addressing the problem is to make people more aware of what jobs are vulnerable to AI replacement. Helping workers better understand how AI can augment their jobs could also mitigate job losses, as would more government action. Or there's always OpenAI CEO Sam Altman's solution: universal basic income, though that will come with plenty of issues of its own.
    Masthead: kate.sade
    WWW.TECHSPOT.COM