• Meta's Push Into Defense Tech Reflects Cultural Shift, CTO Says

    Meta CTO Andrew Bosworth said that the "tides have turned" in Silicon Valley and made it more palatable for the tech industry to support the US military's efforts. From a report: There's long existed a "silent majority" who wanted to pursue defense projects, Bosworth said during an interview at the Bloomberg Tech summit in San Francisco on Wednesday. "There's a much stronger patriotic underpinning than I think people give Silicon Valley credit for," he said. Silicon Valley was founded on military development and "there's really a long history here that we are kind of hoping to return to, but it is not even day one," Bosworth added. He described Silicon Valley's new openness to work with the US military as a "return to grace."

    Read more of this story at Slashdot.
  • Meta's chief AI scientist says all countries should contribute data to a shared open-source AI model

    Yann LeCun, Meta's chief AI scientist, talks about AI regulation.

    FABRICE COFFRINI / AFP via Getty Images

    May 31, 2025


    Yann LeCun, Meta's chief AI scientist, has some ideas on open-source regulation.
    LeCun thinks open-source AI should be an international resource.
    Countries must ensure they are not "impeding open source platforms," he said.

    AI has surged to the top of the diplomatic agenda in the past couple of years. One of the leading topics of discussion among researchers, tech executives, and policymakers is how open-source models — which are free for anyone to use and modify — should be governed.

    At the AI Action Summit in Paris earlier this year, Meta's chief AI scientist, Yann LeCun, said he'd like to see a world in which "we'll train our open-source platforms in a distributed fashion with data centers spread across the world." Each would have access to its own data sources, which it may keep confidential, but "they will contribute to a common model that will essentially constitute a repository of all human knowledge," he said.

    This repository would be larger than what any one entity, whether a country or company, could handle. India, for example, may not give away a body of knowledge comprising all the languages and dialects spoken there to a tech company. However, "they would be happy to contribute to training a big model, if they can, that is open source," he said. To achieve that vision, though, "countries have to be really careful with regulations and legislation." Countries shouldn't impede open source, he said, but favor it.

    Even for closed systems, OpenAI CEO Sam Altman has said international regulation is critical. "I think there will come a time in the not-so-distant future, like we're not talking decades and decades from now, where frontier AI systems are capable of causing significant global harm," Altman said on the All-In podcast last year. Altman believes those systems will have a "negative impact way beyond the realm of one country" and said he wanted to see them regulated by "an international agency looking at the most powerful systems and ensuring reasonable safety testing."

  • Meta's 'Behemoth' Llama 4 model might still be months away

    Last month, Meta hosted LlamaCon, its first ever generative AI conference. But while the event delivered some notable improvements for developers, it also felt a bit underwhelming considering how important AI is to the company. Now, we know a bit more about why, thanks to a new report in The Wall Street Journal.
    According to the report, Meta had originally intended to release its "Behemoth" Llama 4 model at the April developer event, but later delayed its release to June. Now, it's apparently been pushed back again, potentially until "fall or later." Meta engineers are reportedly "struggling to significantly improve the capabilities" of the model that Mark Zuckerberg has called “the highest performing base model in the world.”
    Meta has already released two smaller Llama 4 models, Scout and Maverick, and has also teased a fourth lightweight model that's apparently nicknamed "Little Llama." Meanwhile, the "Behemoth" model will have 288 billion active parameters and "outperforms GPT-4.5, Claude Sonnet 3.7, and Gemini 2.0 Pro on several STEM benchmarks," the company said last month.
    Meta has never given a firm timeline for when to expect the model. The company said last month that it was "still training." And while Behemoth got a few nods during the LlamaCon keynote, there were no updates on when it might actually be ready. That's probably because it could still be several months away. Inside Meta, there are apparently questions "about whether improvements over prior versions are significant enough to justify public release."
    Meta didn't immediately respond to a request for comment. As the report notes, it wouldn't be the first company to run into snags as it races to release new models and outpace competitors. But the delay is still notable given Meta's lofty ambitions when it comes to AI. Zuckerberg has made AI a top priority, with Meta planning to spend as much as $72 billion on its AI infrastructure this year. This article originally appeared on Engadget at https://www.engadget.com/ai/metas-behemoth-llama-4-model-might-still-be-months-away-221240585.html?src=rss
  • Meta's smart glasses will soon provide detailed information regarding visual stimuli

    The Ray-Ban Meta glasses are getting an upgrade to better help the blind and low vision community. The AI assistant will now provide "detailed responses" about what's in front of users. Meta says it'll kick in "when people ask about their environment." To get started, users just have to opt in via the Device Settings section in the Meta AI app.
    The company shared a video of the tool in action in which a blind user asked Meta AI to describe a grassy area in a park. It quickly hopped into action and correctly pointed out a path, trees and a body of water in the distance. The AI assistant was also shown describing the contents of a kitchen. 

    I could see this being a fun add-on even for those without any visual impairment. In any event, it begins rolling out to all users in the US and Canada in the coming weeks. Meta plans on expanding to additional markets in the near future.
    It's Global Accessibility Awareness Day, so that's not the only accessibility-minded tool that Meta announced today. There's the nifty Call a Volunteer, a tool that automatically connects blind or low vision people to a "network of sighted volunteers in real-time" to help complete everyday tasks. The volunteers come from the Be My Eyes foundation and the platform launches later this month in 18 countries.
    The company recently announced a more refined system for live captions in all of its extended reality products, like the Quest line of VR headsets. This converts spoken words into text in real time, so users can "read content as it's being delivered." The feature is already available for Quest headsets and within Meta Horizon Worlds. This article originally appeared on Engadget at https://www.engadget.com/ai/metas-smart-glasses-will-soon-provide-detailed-information-regarding-visual-stimuli-153046605.html?src=rss