• OpenAI calls DeepSeek state-controlled, calls for bans on PRC-produced models
    techcrunch.com
    In a new policy proposal, OpenAI describes Chinese AI lab DeepSeek as state-subsidized and state-controlled, and recommends that the U.S. government consider banning models from the outfit and similar People's Republic of China (PRC)-supported operations.
    The proposal, a submission for the Trump Administration's AI Action Plan initiative, claims that DeepSeek's models, including its R1 reasoning model, are insecure because DeepSeek faces requirements under Chinese law to comply with demands for user data. Banning the use of PRC-produced models in all countries considered Tier 1 under the Biden Administration's export rules would prevent privacy and security risks, OpenAI says, including the risk of IP theft.
    It's unclear whether OpenAI's references to models are meant to refer to DeepSeek's API, the lab's open models, or both. DeepSeek's open models don't contain mechanisms that would allow the Chinese government to siphon user data; companies including Microsoft, Perplexity, and Amazon host them on their infrastructure.
    OpenAI has previously accused DeepSeek, which rose to prominence earlier this year, of distilling knowledge from OpenAI's models against its terms of service. But OpenAI's new allegations that DeepSeek is supported by the PRC and under its command are an escalation of the company's campaign against the Chinese lab.
    There isn't a clear link between the Chinese government and DeepSeek, a spin-off from a quantitative hedge fund called High-Flyer. However, the PRC has taken an increased interest in DeepSeek in recent months. Several weeks ago, DeepSeek founder Liang Wenfeng met with Chinese leader Xi Jinping.
  • Google wants Gemini to get to know you better
    techcrunch.com
    In the AI chatbot wars, Google thinks the key to retaining users is serving up content they can't get elsewhere, like answers shaped by their internet habits.
    On Thursday, the company announced Gemini with personalization, a new experimental capability for its Gemini chatbot apps that lets Gemini draw on other Google apps and services to deliver customized responses. Gemini with personalization can tap a user's activities and preferences across Google's product ecosystem to deliver tailored answers to queries, according to Gemini product director Dave Citron.
    "These updates are all designed to make Gemini feel less like a tool and more like a natural extension of you, anticipating your needs with truly personalized assistance," Citron wrote in a blog post provided to TechCrunch. Early testers have found Gemini with personalization helpful for brainstorming and getting personalized recommendations.
    Gemini with personalization, which will integrate with Google Search before expanding to additional Google services like Google Photos and YouTube in the months to come, arrives as chatbot makers including OpenAI attempt to differentiate their virtual assistants with unique and compelling functionality. OpenAI recently rolled out the ability for ChatGPT on macOS to directly edit code in supported apps, while Amazon is preparing to launch an agentic reimagining of Alexa.
    Citron said Gemini with personalization is powered by Google's experimental Gemini 2.0 Flash Thinking Experimental AI model, a so-called reasoning model that can determine whether personal data from a Google service, like a user's Search history, is likely to enhance an answer. Narrow questions informed by likes and dislikes, like "Where should I go on vacation this summer?" and "What would you suggest I learn as a new hobby?", will benefit the most, Citron continued. For example, you can ask Gemini for restaurant recommendations and it will reference your recent food-related searches, he said, or ask for travel advice and Gemini will respond based on destinations you've previously searched.
    If this all sounds like a privacy nightmare, well, it could be. It's not tough to imagine a scenario in which Gemini inadvertently airs someone's sensitive info. That's probably why Google is making Gemini with personalization opt-in and excluding users under the age of 18. Gemini will ask for permission before connecting to Google Search history and other apps, Citron said, and show which data sources were used to customize the bot's responses.
    "When you're using the personalization experiment, Gemini displays a clear banner with a link to easily disconnect your Search history," Citron said. "Gemini will only access your Search history when you've selected Gemini with personalization, when you've given Gemini permission to connect to your Search history, and when you have Web & App Activity on."
    Gemini with personalization will roll out to Gemini users on the web (except for Google Workspace and Google for Education customers) starting Thursday in the app's model drop-down menu and gradually come to mobile after that. It'll be available in over 40 languages in the majority of countries, Citron said, excluding the European Economic Area, Switzerland, and the U.K. Citron indicated that the feature may not be free forever. "Future usage limits may apply," he wrote in the blog post. "We'll continue to gather user feedback on the most useful applications of this capability."
    New models, connectors, and more
    As added incentives to stick with Gemini, Google announced updated models, research capabilities, and app connectors for the platform.
    Subscribers to Gemini Advanced, Google's $20-per-month premium subscription, can now use a standalone version of 2.0 Flash Thinking Experimental that supports file attachments; integrations with apps like Google Calendar, Notes, and Tasks; and a 1-million-token context window. Context window refers to the text that the model can consider at any given time; 1 million tokens is equivalent to around 750,000 words.
    Google said that this latest version of 2.0 Flash Thinking Experimental is faster and more efficient than the model it is replacing, and can better handle prompts that involve multiple apps, like "Look up an easy cookie recipe on YouTube, add the ingredients to my shopping list, and find me grocery stores that are still open nearby."
    Perhaps in response to pressure from OpenAI and its newly launched tools for in-depth research, Google is also enhancing Deep Research, its Gemini feature that searches across the web to compile reports on a subject. Deep Research now exposes its thinking steps and uses 2.0 Flash Thinking Experimental as the default model, which should result in higher-quality reports that are more detailed and insightful, Google said. Deep Research is now free to try for all Gemini users, and Google has increased usage limits for Gemini Advanced customers.
    Free Gemini users are also getting Gems, Google's topic-focused customizable chatbots within Gemini, which previously required a Gemini Advanced subscription. And in the coming weeks, all Gemini users will be able to interact with Google Photos to, for example, look up photos from a recent trip, Google said.
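    Citron's three access conditions amount to a simple conjunction of opt-in flags. The sketch below only illustrates that gating logic as described in the announcement; the class, field, and function names are hypothetical and are not Google's API.

    ```python
    from dataclasses import dataclass

    @dataclass
    class PersonalizationSettings:
        personalization_selected: bool    # user picked Gemini with personalization in the model menu
        history_permission_granted: bool  # user allowed Gemini to connect to Search history
        web_and_app_activity_on: bool     # account-level Web & App Activity setting

    def may_use_search_history(settings: PersonalizationSettings, age: int) -> bool:
        """Return True only when every condition described in the announcement holds."""
        if age < 18:  # the feature excludes users under the age of 18
            return False
        return (settings.personalization_selected
                and settings.history_permission_granted
                and settings.web_and_app_activity_on)

    # Example: permission granted but Web & App Activity off, so Search history is not used
    print(may_use_search_history(PersonalizationSettings(True, True, False), age=30))  # False
    ```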
  • SuperBlack ransomware may have ties to LockBit
    www.computerweekly.com
    An emergent ransomware gang that has been exploiting two vulnerabilities in Fortinet firewall appliances may have links to current or former members of the notorious LockBit operation, according to intelligence published this week by Forescout Research's Vedere Labs unit.
    Forescout is attributing SuperBlack to a threat actor tracked as Mora_001, which exhibits a distinct operational signature blending opportunistic attacks with ties to the LockBit ecosystem, according to researcher Sai Molige.
    "Mora_001's relationship to the broader LockBit ransomware operation underscores the increased complexity of the modern ransomware landscape, where specialised teams collaborate to leverage complementary capabilities," wrote Molige and the research team.
    Mora_001/SuperBlack's modus operandi to date has been to focus attention on CVE-2025-24472 and CVE-2024-55591, a pair of authentication bypass flaws discovered in Fortinet's FortiOS and FortiProxy, for initial access. These vulnerabilities enable an unauthenticated actor to gain heightened admin rights on devices running FortiOS with exposed management interfaces. A proof-of-concept exploit released on 27 January 2025 was exploited within 96 hours, said Forescout.
    Once in their target network, the gang moved laterally and prioritised targets such as authentication, database and file servers, domain controllers, and other elements of their victims' network infrastructure. They then exfiltrated data and, after doing so, initiated encryption, in a fairly standard ransomware attack.
    In linking Mora_001/SuperBlack to LockBit, famously disrupted in a UK-led multinational operation just over 12 months ago, Forescout's analysts said they observed a number of post-exploitation behaviours consistent with LockBit's playbook. These included identical usernames on victim networks, overlapping IP addresses used for access and command and control (C2), similar configuration backup behaviours, and rapid ransomware deployment, often after just 48 hours under favourable conditions.
    Mora_001/SuperBlack also leveraged the leaked LockBit builder, removing LockBit branding from its ransom notes and deploying its own exfiltration tool. The most concrete evidence was to be found in the gang's ransom note, which includes a TOX ID used by LockBit for negotiations. Forescout said this suggested Mora_001 is either an operational affiliate of LockBit, or an associate group that shares communications channels with the gang.
    "The post-exploitation patterns observed enabled us to define a unique operational signature that sets Mora_001 apart from other ransomware operators, including LockBit affiliates," wrote the team. "This consistent operational framework suggests a distinct threat actor with a structured playbook, rather than multiple operators following a generalised LockBit methodology."
    In analysing the timeline of Mora_001/SuperBlack intrusions, as well as overlapping indicators and operational patterns, Forescout said it could now confidently attribute future intrusions to the gang, independently of what its exact relationship to LockBit may be.
    Following the National Crime Agency (NCA)-led Operation Cronos, which disrupted LockBit in February 2024, the ransomware landscape saw significant fragmentation and an increase in the number of operational gangs, suggesting that a number of members of the LockBit collective scattered under pressure and set up or joined new operations. Although these suggestions are merely theories, the discovery of Mora_001/SuperBlack lends a certain weight to them, and as the year progresses, the legacy of LockBit looks set to remain for some time to come.
    More information on Mora_001/SuperBlack, including tactics, techniques and procedures (TTPs), detection opportunities, and indicators of compromise (IoCs), can be obtained from Forescout.
    Read more about ransomware
    This key member of the Black Basta ransomware gang is wanted by the US justice system. He narrowly escaped extradition at the end of June 2024, with the help of highly-placed contacts in Moscow.
    Several factors, including the impact of law enforcement operations disrupting cyber criminal gangs and better preparedness among users, may be behind a significant drop in the total value of ransomware payments.
    The criminal ransomware fraternity was hard at work over the festive period, with attack volumes rising and a new threat actor emerging on the scene.
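    Forescout's attribution rests on overlap between observed post-exploitation indicators and a tracked operational signature. Purely as an illustration of that idea (the profile data, field names, and scoring below are invented placeholders, not Forescout's methodology or real IoCs), indicator overlap could be scored like this:

    ```python
    # Toy indicator-overlap score between an observed intrusion and a tracked actor profile.
    ACTOR_PROFILE = {
        "usernames": {"svc_backup1", "admin_tmp"},            # placeholder values, not real IoCs
        "c2_ips": {"203.0.113.10", "198.51.100.7"},           # RFC 5737 documentation addresses
        "behaviours": {"config_backup", "rapid_encryption"},
    }

    def overlap_score(observed: dict[str, set[str]], profile: dict[str, set[str]]) -> float:
        """Fraction of profile indicators that were also seen in the observed intrusion."""
        total = sum(len(v) for v in profile.values())
        matched = sum(len(v & observed.get(k, set())) for k, v in profile.items())
        return matched / total if total else 0.0

    observed = {
        "usernames": {"svc_backup1"},
        "c2_ips": {"203.0.113.10"},
        "behaviours": {"config_backup", "rapid_encryption"},
    }
    print(f"Overlap with tracked signature: {overlap_score(observed, ACTOR_PROFILE):.0%}")  # 67%
    ```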
  • HMRC looks to upgrade SOC with advanced SIEM tech
    www.computerweekly.com
    His Majesty's Revenue and Customs (HMRC) is firming up plans to procure more security information and event management (SIEM) services as it seeks to enhance its existing Security Operations Centre (SOC) capabilities, according to a request for information (RFI) published this week.
    As the UK's tax authority, HMRC is tasked with upholding the integrity of the country's financial systems and ensuring public trust. It serves a broad public sector customer base of more than five million businesses and 45 million individuals, and manages over £800bn every financial year. As such, it faces significant and sophisticated cyber security threats on a day-to-day basis.
    "This RFI seeks solution and service related information that would be capable of enhancing HMRC's SOC through the deployment of advanced technological tools and expertise," the department said in a tender notice. "Ideal partners will demonstrate a clear technological roadmap aligned with HMRC's strategic needs, show a commitment to effective communication, and provide flexible and scalable solutions. A strong focus on long-term collaboration is essential to meet our cyber security objectives, as outlined in the RFI documents, effectively safeguarding against the continuously changing global geopolitical and economic landscape."
    At their core, SIEM systems such as the one proposed for HMRC are data aggregation services that draw information from various sources, identify anomalies that could indicate cyber threats, and take action such as generating alerts for SOC teams or activating other countermeasures. More advanced SIEM capabilities incorporate elements of user and entity behaviour analytics (UEBA) and security orchestration, automation and response (SOAR).
    In recent weeks, both the Public Accounts Committee (PAC) and National Audit Office (NAO) have gone on record to say that departments across the British government appear to be woefully unprepared for a catastrophic cyber attack, largely as a result of over-reliance on legacy IT systems, a long-acknowledged issue in government. Earlier this week, the PAC heard witness statements from government IT leaders who discussed how civil servants across Westminster lack visibility into their IT systems and the extent to which they are vulnerable to cyber attacks.
    The NAO report, published at the end of January 2025, found that 58 critical government IT systems had significant gaps in cyber resilience, and that the state of resilience of a further 228 legacy IT systems was essentially unknown. Besides this lack of understanding, the NAO identified a lack of coordination within government that risks jeopardising a joined-up approach to cyber security at Westminster, including confusion over departmental roles and responsibilities, such as those of the National Cyber Security Centre (NCSC). It also warned of a serious skills gap, with roughly a third of open cyber security roles in government either vacant or filled by temporary contractors.
    Its findings were based on a series of interviews with Cabinet Office officials who have been tasked with implementing the current Government Cyber Security Strategy: 2022-2030, as well as staffers from the NCSC, the Central Digital and Data Office (CDDO), and other civil servants working around cyber security. The NAO also sought input from the British Library, which fell victim to a significant ransomware attack in the autumn of 2023.
    HMRC's contract is currently set to begin on 1 December and will run for three years, to 30 November 2028. The closing date for the RFI is midday on Friday 27 March. The department has not yet put a value on the contract.
    Read more about government cyber security
    The Commons Public Accounts Committee heard government IT leaders respond to recent National Audit Office findings that the government's cyber resilience is under par.
    The National Audit Office has found UK government cyber resilience wanting, weakened by legacy IT and skills shortages, and facing mounting threats.
    A recent National Audit Office report on failed government IT projects provides the channel with a roadmap to help drive success, but only if the public sector listens.
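    The tender's description of a SIEM (draw in events from many sources, flag anomalies, alert the SOC or trigger countermeasures) reduces to a simple correlate-and-alert loop. The sketch below illustrates only that loop; the threshold, event fields, and rule are assumptions made for demonstration and say nothing about HMRC's actual requirements.

    ```python
    # Minimal, illustrative SIEM-style correlation: aggregate events, detect an anomaly, raise alerts.
    from collections import Counter

    FAILED_LOGIN_THRESHOLD = 20  # assumed value, purely for demonstration

    def correlate(events: list[dict]) -> list[str]:
        """Aggregate log events from multiple sources and return simple anomaly alerts."""
        failed_by_user = Counter(e["user"] for e in events if e.get("type") == "auth_failure")
        return [
            f"Possible brute force: {count} failed logins for {user}"
            for user, count in failed_by_user.items()
            if count >= FAILED_LOGIN_THRESHOLD
        ]

    # Synthetic events standing in for logs pulled from firewalls, servers and applications
    events = [{"type": "auth_failure", "user": "alice"}] * 25 + [{"type": "login", "user": "bob"}]
    for alert in correlate(events):
        print(alert)  # a real SOC pipeline would open a ticket or trigger a SOAR playbook here
    ```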
  • Worried about DeepSeek? Turns out, Gemini and other US AIs collect more user data
    www.zdnet.com
    It's an AI privacy showdown. How much data does your favorite chatbot collect?
  • Navigating AI-powered cyber threats in 2025: 4 expert security tips for businesses
    www.zdnet.com
    Cybercriminals are weaponizing artificial intelligence (AI) across every attack phase. Large language models (LLMs) craft hyper-personalized phishing emails by scraping targets' social media profiles and professional networks. Generative adversarial networks (GANs) produce deepfake audio and video to bypass multi-factor authentication. Automated tools like WormGPT enable script kiddies to launch polymorphic malware that evolves to evade signature-based detection.
    These cyber attacks aren't speculative, either. Organizations that fail to adapt their security strategies risk being overrun by an onslaught of hyper-intelligent cyber threats -- in 2025 and beyond.
    Also: Want to win in the age of AI? You can either build it or build your business with it
    To better understand how AI impacts enterprise security, I spoke with Bradon Rogers, an SVP at Intel Security and enterprise cybersecurity veteran, about this new era of digital security, early threat detection, and how you can prepare your team for AI-enabled attacks. But first, some background on what to expect.
    Why AI cyber security threats are different
    AI provides malicious actors with sophisticated tools that make cyber attacks more precise, persuasive, and challenging to detect. For example, modern generative AI systems can analyze vast datasets of personal information, corporate communications, and social media activity to craft hyper-targeted phishing campaigns that convincingly mimic trusted contacts and legitimate organizations. This capability, combined with automated malware that adapts to defensive measures in real time, has dramatically increased both the scale and success rate of attacks.
    Deepfake technology enables attackers to generate compelling video and audio content, facilitating everything from executive impersonation fraud to large-scale disinformation campaigns. Recent incidents include a $25 million theft from a Hong Kong-based company via deepfake video conferencing and numerous cases of AI-generated voice clips being used to deceive employees and family members into transferring funds to criminals.
    Also: Most AI voice cloning tools aren't safe from scammers, Consumer Reports finds
    AI-enabled automated cyber attacks have led to "set-and-forget" attack systems that continuously probe for vulnerabilities, adapt to defensive measures, and exploit weaknesses without human intervention. One example is the 2024 breach of major cloud service provider AWS, in which AI-powered malware systematically mapped network architecture, identified potential vulnerabilities, and executed a complex attack chain that compromised thousands of customer accounts.
    These incidents highlight how AI isn't just augmenting existing cyber threats but creating entirely new categories of security risks. Here are Rogers' suggestions for how to tackle the challenge.
    1. Implement zero-trust architecture
    The traditional security perimeter is no longer sufficient in the face of AI-enhanced threats. A zero-trust architecture operates on a "never trust, always verify" principle, ensuring that every user, device, and application is authenticated and authorized before gaining access to resources. This approach minimizes the risk of unauthorized access, even if an attacker manages to breach the network.
    "Enterprises must verify every user, device, and application -- including AI -- before they access critical data or functions," underscores Rogers, noting that this approach is an organization's "best course of action." By continuously verifying identities and enforcing strict access controls, businesses can reduce the attack surface and limit potential damage from compromised accounts.
    Also: This new AI benchmark measures how much models lie
    While AI poses challenges, it also offers powerful tools for defense. AI-driven security solutions can analyze vast amounts of data in real time, identifying anomalies and potential threats that traditional methods might miss. These systems can adapt to emerging attack patterns, providing a dynamic defense against AI-powered cyberattacks.
    Rogers adds that AI -- like cyber defense systems -- should never be treated as a built-in feature. "Now is the time for CISOs and security leaders to build systems with AI from the ground up," he says. By integrating AI into their security infrastructure, organizations can enhance their ability to detect and respond to incidents swiftly, reducing the window of opportunity for attackers.
    2. Educate and train employees on AI-driven threats
    Organizations can reduce the risk of internal vulnerabilities by fostering a culture of security awareness and providing clear guidelines on using AI tools. Humans are complex, so simple solutions are often the best.
    "It's not just about mitigating external attacks. It's also providing guardrails for employees who are using AI for their own 'cheat code for productivity,'" Rogers says.
    Also: DuckDuckGo's AI beats Perplexity in one big way - and it's free to use
    Human error remains a significant vulnerability in cybersecurity. As AI-generated phishing and social engineering attacks become more convincing, educating employees about these evolving threats is even more crucial. Regular training sessions can help staff recognize suspicious activities, such as unexpected emails or requests that deviate from routine procedures.
    3. Monitor and regulate employee AI use
    The accessibility of AI technologies has led to widespread adoption across various business functions. However, unsanctioned or unmonitored use of AI -- often called "shadow AI" -- can introduce significant security risks. Employees may inadvertently use AI applications that lack proper security measures, leading to potential data leaks or compliance issues.
    "We can't have corporate data flowing freely all over the place into unsanctioned AI environments, so a balance must be struck," Rogers explains. Implementing policies that govern AI tools, conducting regular audits, and ensuring that all AI applications comply with the organization's security standards are essential to mitigating these risks.
    4. Collaborate with AI and cybersecurity experts
    The complexity of AI-driven threats necessitates collaboration with experts specializing in AI and cybersecurity. Partnering with external firms can give organizations access to the latest threat intelligence, advanced defensive technologies, and specialized skills that may not be available in-house.
    Also: How Cisco, LangChain, and Galileo aim to contain 'a Cambrian explosion of AI agents'
    AI-powered attacks require sophisticated countermeasures that traditional security tools often lack. AI-enhanced threat detection platforms, secure browsers, and zero-trust access controls analyze user behavior, detect anomalies, and prevent malicious actors from gaining unauthorized access. Rogers highlights that these innovative enterprise solutions "are a missing link in the zero-trust security framework. [These tools] provide deep, granular security controls that seamlessly protect any app or resource across public and private networks."
    These tools leverage machine learning to continuously monitor network activity, flag suspicious patterns, and automate incident response, reducing the risk of AI-generated attacks infiltrating corporate systems.
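    Rogers' "never trust, always verify" principle boils down to evaluating identity, device, and application on every request rather than trusting the network. The sketch below is a minimal illustration of that per-request check; the field names and policy rules are assumptions made for the example, not any vendor's product API.

    ```python
    from dataclasses import dataclass

    @dataclass
    class AccessRequest:
        user_authenticated: bool
        mfa_verified: bool
        device_compliant: bool      # e.g. managed, patched, disk-encrypted
        app_authorized: bool        # includes AI tools acting on the user's behalf
        resource_sensitivity: str   # "low", "medium", or "high"

    def authorize(req: AccessRequest) -> bool:
        """Grant access only when identity, device, and application all verify on this request."""
        if not (req.user_authenticated and req.device_compliant and req.app_authorized):
            return False
        if req.resource_sensitivity == "high" and not req.mfa_verified:
            return False  # step-up verification for critical data or functions
        return True

    # An unverified application is denied even though the user and device check out
    print(authorize(AccessRequest(True, True, True, False, "high")))  # False
    ```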
  • Police Warn iPhone, Android Users: Delete And Report These Messages
    www.forbes.com
    A viral threat is now sweeping across America, from state to state, as malicious SMS texts demand money for unpaid bills. The FBI has warned iPhone and Android users to delete all such texts received. Now a new and even more malicious SMS threat has prompted the police to warn smartphone users to delete and report messages.
    The Hampden County Sheriff's Office in Massachusetts took to Facebook to out the latest scam, which threatens recipients that the police have "attempted to serve them a notice of action pertaining to an investigation against you," and that failure to respond "will result in further legal action being initiated."
    The police warn recipients of these alarming messages that "law enforcement will never call, text, or email to demand money, resolve a warrant, or conduct official business. If you receive a message like this, do not engage - delete it and report it."
    "Be warned - this is not the police." - Hampden County Sheriff's Office, Massachusetts
    This is not an isolated incident, and there have been regular reports of scammers impersonating state and local law enforcement, and even federal agencies, demanding payment to avoid arrest and prosecution. All such messages are scams.
    In this latest Massachusetts attack, the links included in the messages lead to websites that are designed to install malware on devices. While anyone receiving such a text should dismiss it immediately, the scammers make detection more difficult by spoofing agency numbers to present a legitimate caller ID, likely using internet-based services. The police urge citizens to "please share this warning with your family and friends, especially those who may be more vulnerable to scams."
    While law enforcement scams are becoming more common, the most widespread threat remains fake unpaid-toll and undelivered-package texts. The news this week that 10,000 new web domains have been registered to fuel such scams highlights the scale this has now reached.
    "Stay safe!" says the Hampden County Sheriff's Office.
  • EA College Football 26 Cover Seems To Have Leaked And Fans Reacted
    www.forbes.com
    The cover for the Deluxe Edition of EA College Football 26 may have leaked online, and it has caused quite the reaction among fans.
  • The Oppenheimer Moment That Looms Over Today's AI Leaders
    time.com
    This year, hundreds of billions of dollars will be spent to scale AI systems in pursuit of superhuman capabilities. CEOs of leading AI companies, such as OpenAI's Sam Altman and xAI's Elon Musk, expect that within the next four years, their systems will be smart enough to do most cognitive work (think any job that can be done with just a laptop) as effectively as or better than humans.
    Such an advance, leaders agree, would fundamentally transform society. Google CEO Sundar Pichai has repeatedly described AI as the most profound technology humanity is working on. Demis Hassabis, who leads Google's AI research lab Google DeepMind, argues AI's social impact will be more like that of fire or electricity than the introduction of mobile phones or the Internet.
    In February, in the wake of an international AI Summit in Paris, Anthropic CEO Dario Amodei restated his belief that by 2030 AI systems will be best thought of as akin to "an entirely new state populated by highly intelligent people." In the same month, Musk, speaking on the Joe Rogan Experience podcast, said, "I think we're trending toward having something that's smarter than the smartest human in the next few years." He continued: "There's a level beyond that which is smarter than all humans combined, which frankly is around 2029 or 2030."
    If these predictions are even partly correct, the world could soon radically change. But there is no consensus on how this transformation will or should be handled. With exceedingly advanced AI models released on a monthly basis, and the Trump administration seemingly uninterested in regulating the technology, the decisions of private-sector leaders matter more than ever. But they differ in their assessments of which risks are most salient, and what's at stake if things go wrong. Here's how:
    Existential risk or unmissable opportunity?
    "I always thought AI was going to be way smarter than humans and an existential risk, and that's turning out to be true," Musk said in February, noting he thinks there is a 20% chance of human annihilation by AI. While estimates vary, the idea that advanced AI systems could destroy humanity traces back to the origin of many of the labs developing the technology today. In 2015, Altman called the development of superhuman machine intelligence "probably the greatest threat to the continued existence of humanity." Alongside Hassabis and Amodei, he signed a statement in May 2023 declaring that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
    "It strikes me as odd that some leaders think that AI can be so brilliant that it will solve the world's problems, using solutions we didn't think of, but not so brilliant that it can't escape whatever control constraints we think of," says Margaret Mitchell, Chief Ethics Scientist at Hugging Face. She notes that discourse sometimes conflates AI that supplements people with AI that supplants them. "You can't have the benefits of both and the drawbacks of neither," she says.
    For Mitchell, risk increases as humans cede control to increasingly autonomous agents. "Because we can't fully control or predict the behaviour of AI agents, we run a massive risk of AI agents that act without consent to, for example, drain bank accounts, impersonate us saying and doing horrific things, or bomb specific populations," she explains.
    "Most people think of this as just another technology, and not as a new species, which is the way you should think about it," says Professor Max Tegmark, co-founder and president of the Future of Life Institute. He explains that the default outcome when building machines at this level is losing control over them, which could lead to unpredictable and potentially catastrophic outcomes.
    But despite the apprehensions, other leaders avoid the language of superintelligence and existential risk, focusing instead on the positive upside. "I think when history looks back it will see this as the beginning of a golden age of innovation," Pichai said at the Paris Summit in February. "The biggest risk could be missing out."
    Similarly, asked in mid-2023 whether he thinks we're on a path to creating superintelligence, Microsoft CEO Satya Nadella said he was much more focused on the benefits to all of us. "I am haunted by the fact that the industrial revolution didn't touch the parts of the world where I grew up until much later. So I am looking for the thing that may be even bigger than the industrial revolution, and really doing what the industrial revolution did for the West, for everyone in the world. So I'm not at all worried about AGI [artificial general intelligence] showing up, or showing up fast," he said.
    A race between countries and companies
    Even among those that do believe AI poses an existential risk, there is a widespread belief that any slowdown in America's AI development will allow foreign adversaries, particularly China, to pull ahead in the race to create transformative AI. Future AI systems could be capable of creating novel weapons of mass destruction, or covertly hacking a country's nuclear arsenal, effectively flipping the global balance of power overnight.
    "My feeling is that almost every decision I make is balanced on the edge of a knife," Amodei said earlier this month, explaining that building too fast risks humanity losing control, whereas if we don't build fast enough, then the authoritarian countries could win.
    These dynamics play out not just between countries, but between companies. As Helen Toner, a director at Georgetown's Center for Security and Emerging Technology, explains, there's often a disconnect between the idealism in public statements and the hard-nosed business logic that drives companies' decisions. Toner points to competition over release dates as a clear example of this. "There have been multiple instances of AI teams being forced to cut corners and skip steps in order to beat a competitor to launch day," she says.
    Read More: How China Is Advancing in AI Despite U.S. Chip Restrictions
    For Meta CEO Mark Zuckerberg, ensuring advanced AI systems are not controlled by a single entity is key to safety. "I kind of liked the theory that it's only God if only one company or government controls it," he said in January. "The best way to make sure it doesn't get out of control is to make it so that it's pretty equally distributed," he claimed, pointing to the importance of open-source models.
    Parameters for control
    While almost every company developing advanced AI models has its own internal policies and procedures around safety (and most have made voluntary commitments to the U.S.
government regarding issues of trust, safety, and allowing third parties to evaluate their models), none of this is backed by the force of law. Tegmark is optimistic that if the U.S. national security establishment accepts the seriousness of the threat, safety standards will follow. Safety standard number one, he says, will be requiring companies to demonstrate how they plan to keep their models under control.
    Some CEOs are feeling the weight of their power. "There's a huge amount of responsibility, probably too much, on the people leading this technology," Hassabis said in February. The Google DeepMind leader has previously advocated for the creation of new institutions, akin to the European Organization for Nuclear Research (CERN) or the International Energy Agency, to bring together governments to monitor AI developments. "Society needs to think about what kind of governing bodies are needed," he said.
    This is easier said than done. "While creating binding international agreements has always been challenging, it's more unrealistic than ever," says Toner. On the domestic front, Tegmark points out that right now, there are more safety standards for sandwich shops than for AI companies in America.
    Nadella, discussing AGI and superintelligence on a podcast in February, emphasized his view that legal infrastructure will be the biggest rate limiter to the power of future systems, potentially preventing their deployment. "Before it is a real problem, the real problem will be in the courts," he said.
    An 'Oppenheimer moment'
    Mitchell says that AI's corporate leaders bring different levels of their own human concerns and thoughts to these discussions. Tegmark fears, however, that some of these leaders are falling prey to wishful thinking by believing they're going to be able to control superintelligence, and that many are now facing their own "Oppenheimer moment." He points to a poignant scene in that film where scientists watch their creation being taken away by military authorities. "That's the moment where the builders of the technology realize they're losing control over their creation," he says. "Some of the CEOs are beginning to feel that right now."
  • Roomba maker iRobot doubts it can survive the next 12 months
    www.techspot.com
    The big picture: iRobot, best known for its Roomba brand of robotic vacuum cleaners and floor mops, was a pioneer in the early days of home automation. For a while, the Roomba name itself became synonymous with robotic vacuums, and many erroneously thought it was actually the company's name. Now, iRobot has warned investors that it might not survive the next 12 months as it continues to explore the possibility of a sale or a strategic partnership.
    iRobot's success spawned a whole new industry, and it didn't take long for competitors to bring rival cleaning bots to market, often at lower price points or with more features. In the midst of a shifting landscape, iRobot in the summer of 2022 agreed to sell its business to Amazon as part of an all-cash deal valued at $1.7 billion, or $61 per share. At the time, the deal represented a 22 percent premium over iRobot's share price.
    The good times, however, would not last. Roughly a year and a half later, Amazon terminated what would have been its fourth-largest acquisition at that time. It was the first time the Bezos-founded e-commerce giant had failed to complete a purchase, and it all came down to regulation. In short, Amazon did not believe it had a path to gaining approval from European Union regulators. As we've seen in the past, it's often better to cut your losses than to try and fight a regulatory battle.
    It's mostly been a downhill journey ever since. Last year, iRobot laid off more than half of its workforce as part of a restructuring that also involved lowering sales and marketing expenses through consolidation, and decreasing inventory and cash outflow.
    Revenue for the fourth quarter of 2024 reached $172.9 million, down 44 percent from $307.5 million during the same period a year ago. Full-year revenue dipped to $681.8 million, down from $890.6 million in 2023. Shares in iRobot, which once topped out at around $133 in 2021, are currently trading for just $4.06.