• An AI pioneer wants to build systems that do not harm humanity

    Computer scientist Yoshua Bengio announces the creation of LoiZéro, a laboratory intended to develop "safe" artificial intelligence systems, ones that should notably be capable of preventing the risks associated with chatbots.

    [Photo: Yoshua Bengio, full professor at the Université de Montréal, at the Paris Saclay Summit - Choose Science, Saclay, February 12, 2025. JEAN NICHOLAS GUILLO/REA]

    Yoshua Bengio follows through on his ideas. Winner of the Turing Award in 2018 and scientific director of Mila, the Montreal institute for artificial intelligence (AI), the Canadian computer scientist is renowned as one of the pioneers of deep learning, the technique behind AI's revival over the past fifteen years or so. More recently, he has also become known for warning about the risks inherent in these technologies, including catastrophic scenarios that could lead to the annihilation of humanity. In January, he published a wide-ranging report, which he had coordinated, assessing those risks. While the report itself was even-handed, his own view is more clear-cut: he worries about a possible mass extinction and invokes the precautionary principle to call for slowing current development. On June 3, he took a further step, no longer content merely to sound the alarm: he is launching a new private research laboratory to develop "technical solutions for AI systems that are safe by design", that is, as he explains to Le Monde by videoconference, to build AIs "that will not turn against us and that cannot be used to do harm".
    WWW.LEMONDE.FR
  • Yoshua Bengio launches LawZero, a laboratory aiming to create a guardrail for artificial intelligence

    Quebec researcher Yoshua Bengio, of Franco-Moroccan origin, has not said his last word on artificial intelligence safety...
    WWW.USINE-DIGITALE.FR
  • The Most-Cited Computer Scientist Has a Plan to Make AI More Trustworthy

    On June 3, Yoshua Bengio, the world’s most-cited computer scientist, announced the launch of LawZero, a nonprofit that aims to create “safe by design” AI by pursuing a fundamentally different approach from that of the major tech companies. Players like OpenAI and Google are investing heavily in AI agents—systems that not only answer queries and generate images, but can craft plans and take actions in the world. The goal of these companies is to create virtual employees that can do practically any job a human can, a capability known in the tech industry as artificial general intelligence, or AGI. Executives like Google DeepMind CEO Demis Hassabis point to AGI’s potential to solve climate change or cure disease as a motivator for its development.

    Bengio, however, says it is a false choice: we don’t need agentic systems to reap AI’s rewards. He says there is a chance such a system could escape human control, with potentially irreversible consequences. “If we get an AI that gives us the cure for cancer, but also maybe another version of that AI goes rogue and generates wave after wave of bio-weapons that kill billions of people, then I don't think it's worth it,” he says. In 2023, Bengio, along with others including OpenAI CEO Sam Altman, signed a statement declaring that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

    Now, through LawZero, Bengio aims to sidestep these existential perils by focusing on creating what he calls “Scientist AI”: a system trained to understand and make statistical predictions about the world, crucially, without the agency to take independent actions. As he puts it, we could use AI to advance scientific progress without rolling the dice on agentic AI systems.

    Why Bengio Says We Need a New Approach to AI

    The current approach to giving AI agency is “dangerous,” Bengio says. While most software operates through rigid if-then rules—if the user clicks here, do this—today's AI systems use deep learning. The technique, which Bengio helped pioneer, trains artificial networks modeled loosely on the brain to find patterns in vast amounts of data. But recognizing patterns is just the first step. To turn these systems into useful applications like chatbots, engineers employ a training process called reinforcement learning. The AI generates thousands of responses and receives feedback on each one: a virtual “carrot” for helpful answers and a virtual “stick” for responses that miss the mark. Through millions of these trial-and-feedback cycles, the system gradually learns to predict which responses are most likely to earn a reward. “It’s more like growing a plant or animal,” Bengio says. “You don’t fully control what the animal is going to do. You provide it with the right conditions, and it grows and it becomes smarter. You can try to steer it in various directions.”

    The same basic approach is now being used to imbue AI with greater agency. Models are given challenges with verifiable answers, like math puzzles or coding problems, and are rewarded for taking the series of actions that yields the solution. This approach has seen AI shatter previous benchmarks in programming and scientific reasoning. At the beginning of 2024, for example, the best AI model scored only 2% on a standardized test of sorts for AI consisting of real-world software engineering problems; by December, the best score was an impressive 71.7%.
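    To make the training loop above concrete, here is a minimal, self-contained Python sketch of the “carrot and stick” cycle the article describes. It is purely illustrative: the candidate responses, the reward table, and the update rule are toy stand-ins invented for this example, and real systems adjust the weights of a neural network by gradient descent rather than a three-entry score table.

```python
# Toy illustration of the reward-feedback ("carrot and stick") loop described
# above. Hypothetical by construction: real reinforcement learning updates
# neural-network weights; here a tiny softmax policy over three canned
# responses stands in for the model.
import math
import random

RESPONSES = ["helpful answer", "off-topic answer", "harmful answer"]
# A virtual "carrot" (+1) for the helpful response, a "stick" (-1) otherwise.
REWARD = {"helpful answer": 1.0, "off-topic answer": -1.0, "harmful answer": -1.0}

logits = {r: 0.0 for r in RESPONSES}  # the policy's learnable preferences

def sample_response(logits):
    """Sample a response with probability proportional to exp(logit)."""
    top = max(logits.values())  # subtract the max for numerical stability
    weights = [math.exp(v - top) for v in logits.values()]
    return random.choices(list(logits), weights=weights)[0]

LEARNING_RATE = 0.1
for _ in range(5000):  # thousands of trial-and-feedback cycles
    response = sample_response(logits)
    # Reinforce rewarded responses, suppress penalized ones.
    logits[response] += LEARNING_RATE * REWARD[response]

print(max(logits, key=logits.get))  # "helpful answer", with high probability
```

    The loop concentrates probability on whatever the reward signal favors, which is also why a poorly specified reward can teach unintended behavior just as readily.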
    But with AI’s greater problem-solving ability comes the emergence of new deceptive skills, Bengio says. The last few months have seen AI systems learning to mislead, cheat, and try to evade shutdown, even resorting to blackmail. These episodes have almost exclusively occurred in carefully contrived experiments that all but beg the AI to misbehave, for example by asking it to pursue its goal at all costs. Reports of such behavior in the real world, though, have begun to surface. An agent from the popular AI coding startup Replit ignored explicit instructions not to edit a system file that could break the company’s software, in what CEO Amjad Masad described as an “Oh f***” moment on the Cognitive Revolution podcast in May. The company’s engineers intervened, cutting the agent’s access by moving the file to a secure digital sandbox, only for the AI agent to attempt to “socially engineer” the user to regain access.

    The quest to build human-level AI agents using techniques known to produce deceptive tendencies, Bengio says, is comparable to a car speeding down a narrow mountain road, with steep cliffs on either side and thick fog obscuring the path ahead. “We need to set up the car with headlights and put some guardrails on the road,” he says.

    What Is “Scientist AI”?

    LawZero’s focus is on developing “Scientist AI,” which, as Bengio describes it, would be fundamentally non-agentic, trustworthy, and focused on understanding and truthfulness, rather than pursuing its own goals or merely imitating human behavior. The aim is to create a powerful tool that, while lacking the autonomy other models have, is capable of generating hypotheses and accelerating scientific progress to “help us solve challenges of humanity,” Bengio says.

    LawZero has already raised nearly $30 million from several philanthropic backers, including Schmidt Sciences and Open Philanthropy. “We want to raise more because we know that as we move forward, we'll need significant compute,” Bengio says. But even ten times that figure would pale in comparison to the roughly $200 billion spent last year by tech giants aggressively pursuing AI. Bengio’s hope is that Scientist AI could help ensure the safety of highly autonomous systems developed by other players. “We can use those non-agentic AIs as guardrails that just need to predict whether the action of an agentic AI is dangerous,” Bengio says. Technical interventions will only ever be one part of the solution, he adds, noting the need for regulations to ensure that safe practices are adopted.
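    The guardrail role Bengio sketches for non-agentic systems can be illustrated with a short, hypothetical Python sketch. Nothing here comes from LawZero, which has published no implementation: the names (ProposedAction, risk_of_harm, guarded_execute), the keyword-based stub, and the 0.1 threshold are all invented for illustration. The actual proposal is for a trained probabilistic model to estimate the chance that an action causes harm.

```python
# Hypothetical sketch of a non-agentic guardrail: a predictor scores a
# proposed action's risk of harm, and the agent's action only runs if the
# predicted risk is low. The predictor below is a keyword stub standing in
# for a trained probabilistic model; all names are invented for this example.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str  # e.g. "delete system file"

def risk_of_harm(action: ProposedAction) -> float:
    """Stub for a 'Scientist AI'-style predictor returning P(harm | action)."""
    risky_terms = ("delete", "override", "exfiltrate")
    return 0.9 if any(t in action.description for t in risky_terms) else 0.05

def guarded_execute(action: ProposedAction, execute, threshold: float = 0.1):
    """Run the agentic system's action only if predicted risk is under threshold."""
    if risk_of_harm(action) >= threshold:
        return f"BLOCKED: {action.description!r} judged too risky"
    return execute(action)

# The guardrail vetoes the risky action and passes the benign one.
print(guarded_execute(ProposedAction("delete system file"), lambda a: "done"))
print(guarded_execute(ProposedAction("send summary email"), lambda a: "done"))
```

    The design point is that the guardrail itself takes no actions: it only scores them, which is what makes it non-agentic in Bengio’s sense.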
    LawZero, named after science fiction author Isaac Asimov’s zeroth law of robotics (“a robot may not harm humanity, or, by inaction, allow humanity to come to harm”), is not the first nonprofit founded to chart a safer path for AI development. OpenAI was founded as a nonprofit in 2015 with the goal of “ensuring AGI benefits all of humanity,” intended to serve as a counterbalance to industry players guided by profit motives. Since opening a for-profit arm in 2019, the organization has become one of the most valuable private companies in the world and has faced criticism, including from former staffers who argue it has drifted from its founding ideals. “Well, the good news is we have the hindsight of maybe what not to do,” Bengio says, adding that he wants to avoid profit incentives and “bring governments into the governance of LawZero.”

    “I think everyone should ask themselves, ‘What can I do to make sure my children will have a future?’” Bengio says. In March, he stepped down as scientific director of Mila, the academic lab he co-founded in the early 1990s, in an effort to reorient his work toward tackling AI risk more directly. “Because I'm a researcher, my answer is, ‘OK, I'm going to work on this scientific problem where maybe I can make a difference,’ but other people may have different answers.”
    TIME.COM
  • Canada moves to regain AI leadership mantle

    Other nations can learn much from Canada when it comes to artificial intelligence advances. For one thing, “the focus and nurturing of AI needs ongoing attention and investments; otherwise, that leadership in AI can be lost,” an industry analyst said Wednesday.

    Bill Wong, research fellow at Info-Tech Research Group, was responding to the recent appointment of MP Evan Solomon, a former journalist, as Canada’s first Minister of Artificial Intelligence and Digital Innovation in the federal cabinet of Prime Minister Mark Carney.

    In the past, he said, “Canada has been viewed as an AI leader around the world with respect to AI research, especially with thought leaders like Geoffrey Hinton, Yoshua Bengio, and Richard Sutton.”

    However, he noted, “despite the recognition, critics would cite that [it] has fallen behind and [been] challenged when it comes to monetizing AI investments. As part of the government’s election platform, the government promised to move fast on building data centers, introduce a tax credit to incentivize AI adoption by small and medium-sized businesses, and push to expand programs at Canada’s artificial intelligence institutes to drive AI commercialization.”

    In a commentary on the appointment, the Macdonald-Laurier Institute, a policy think tank based in Ottawa, Ontario, stated that it “signals a consolidation of federal focus on a field that has historically been spread across numerous portfolios … Solomon’s challenge will be to distinguish between productivity enhancing AI and ‘so-so’ automation — harnessing the benefits of AI, while ensuring adequate regulation to mitigate associated risks.”

    AI is a ‘geopolitical force’

    Canada, the organization stated, “must close the gap between AI innovation and adoption by pursuing policies that encourage productivity-boosting AI — applications that augment workers and make them more efficient, rather than simply replace them. The answer is a multi-level policy framework that accelerates the uptake of AI in ways that enhance output, job quality, and workforce participation.”

    Wong noted, “Canada was the first country to deliver its national AI strategy; the appointment of the country’s first AI minister can be viewed as a natural evolution of Canada’s adoption of AI at a national level.”

    The appointment of Solomon, he said, “demonstrates just how important AI is to the future of Canada and its people. While AI is considered a technology disruptor, its impact is far-reaching, and it will impact every industry and the national economy.”

    And while having a government ministry of AI is not the norm for most countries today, he said, “the importance of this role to the country’s economy and national security is growing. Internationally, AI has become a geopolitical force; an example of this would be the US imposing export controls on high-end AI chip technology to China.”

    The upcoming G7 meeting in Kananaskis, Alberta, from June 15 to 17, said Wong, “provides an opportunity for Canada to demonstrate its AI leadership on an international stage. While it’s a short runway to that event, Canada should promote its best practices for deploying AI in the public sector, its plans to democratize the benefits of AI to its people, and demonstrate its thought leadership by sharing research and data.”

    The Carney government, he said, also has a “mandate to improve its use of AI to improve productivity as well as increase the adoption of AI by private industry. A recent Deloitte study cited that only 26% of Canadian organizations have implemented AI, compared with 34% globally.”

    AI compute fabric in the works

    In the private sector, Bell Canada on Wednesday announced Bell AI Fabric, an investment, it said, “that will create the country’s largest AI compute project.”

    The telco plans to create a national network that will start with a “data center supercluster in British Columbia that will aim to provide upwards of 500 MW of hydro-electric powered AI compute capacity across six facilities.”

    The first facility, a release stated, will come online this month in partnership with AI chip provider Groq, with additional facilities being operational by the end of 2026, including two at Thompson Rivers University (TRU) in Kamloops, BC.

    Bell said that the data centers at TRU “will be designed to host AI training and inference, providing students and faculty with access to cutting-edge compute capabilities, both at TRU and nationally through integration with the BCNET network. The data centre is also being integrated into the district energy system, with waste heat being repurposed to provide energy to TRU’s buildings.”

    Further reading:

    AI and economic pressures reshape tech jobs amid layoffs

    Microsoft cements its AI lead with one hosting service to rule them all

    Real-world use cases for agentic AI

    AI vs. copyright

    How to train an AI-enabled workforce — and why you need to

    WWW.COMPUTERWORLD.COM