• A short history of the roadblock

    Barricades, as we know them today, are thought to date back to the European wars of religion. According to most historians, the first barricade went up in Paris in 1588; the word derives from the French barriques, or barrels, spontaneously put together. They have been assembled from the most diverse materials, from cobblestones, tyres, newspapers, dead horses and bags of ice (during Kyiv’s Euromaidan in 2013–14), to omnibuses and e‑scooters. Their tactical logic is close to that of guerrilla warfare: the authorities have to take the barricades in order to claim victory; all that those manning them have to do to prevail is to hold them.
    The 19th century was the golden age for blocking narrow, labyrinthine streets. Paris had seen barricades go up nine times in the period before the Second Empire; during the July 1830 Revolution alone, 4,000 barricades had been erected (roughly one for every 200 Parisians). These barricades would not only stop, but also trap troops; people would then throw stones from windows or pour boiling water onto the streets. Georges‑Eugène Haussmann, Napoleon III’s prefect of Paris, famously created wide boulevards to make blocking by barricade more difficult and moving the military easier, and replaced cobblestones with macadam – a surface of crushed stone. As Flaubert observed in his Dictionary of Accepted Ideas: ‘Macadam: has cancelled revolutions. No more means to make barricades. Nevertheless rather inconvenient.’
    Lead image: Barricades, as we know them today, are thought to have originated in early modern France. A colour engraving attributed to Achille‑Louis Martinet depicts the defence of a barricade during the 1830 July Revolution. Credit: Paris Musées / Musée Carnavalet – Histoire de Paris. Above: the socialist political thinker and activist Louis Auguste Blanqui – who was imprisoned by every regime that ruled France between 1815 and 1880 – drew instructions for how to build an effective barricade

    Under Napoleon III, Baron Haussmann widened Paris’s streets in his 1853–70 renovation of the city, making barricading more difficult
    Credit: Old Books Images / Alamy
    ‘On one hand, [the authorities] wanted to favour the circulation of ideas,’ reactionary intellectual Louis Veuillot observed apropos the ambiguous liberalism of the latter period of Napoleon III’s Second Empire. ‘On the other, to ensure the circulation of regiments.’ But ‘anti‑insurgency hardware’, as Justinien Tribillon has called it, also served to chase the working class out of the city centre: Haussmann’s projects amounted to a gigantic form of real-estate speculation, and the 1871 Paris Commune that followed constituted not just a short‑lived anarchist experiment featuring enormous barricades; it also signalled the return of the workers to the centre and, arguably, revenge for their dispossession.
    By the mid‑19th century, observers questioned whether barricades still had practical meaning. Gottfried Semper’s barricade, constructed for the 1849 Dresden uprising, had proved unconquerable, but Friedrich Engels, one‑time ‘inspector of barricades’ in the Elberfeld insurrection of the same year, already suggested that the barricades’ primary meaning was now moral rather than military – a point to be echoed by Leon Trotsky in the subsequent century. Barricades symbolised bravery and the will to hold out among insurrectionists, and, not least, the determination to destroy one’s possessions – and one’s neighbourhood – rather than put up with further oppression.
    Not only self‑declared revolutionaries viewed things this way: the reformist Social Democrat leader Eduard Bernstein observed that ‘the barricade fight as a political weapon of the people has been completely eliminated due to changes in weapon technology and cities’ structures’. Bernstein was also picking up on the fact that, in the era of industrialisation, contention happened at least as much on the factory floor as on the streets. The strike, not the food riot or the defence of workers’ quartiers, became the paradigmatic form of conflict. Joshua Clover has pointed out in his 2016 book Riot. Strike. Riot: The New Era of Uprisings that the price of labour, rather than the price of goods, caused people to confront the powerful. Blocking production grew more important than blocking the street.
    ‘The only weapons we have are our bodies, and we need to tuck them in places so wheels don’t turn’
    Today, it is again blocking – not just people streaming along the streets in large marches – that is prominently associated with protests. Disrupting circulation is not only an important gesture in the face of the climate emergency; blocking transport is a powerful form of protest in an economic system focused on logistics and just‑in‑time distribution. Members of Insulate Britain and Germany’s Last Generation super‑glue themselves to streets, stopping car traffic to draw attention to their cause; they have also attached themselves to airport runways. They form a human barricade of sorts, immobilising traffic by making themselves immovable.
    Today’s protesters have made themselves consciously vulnerable. They in fact follow the advice of the US civil rights organiser Bayard Rustin, who explained: ‘The only weapons we have are our bodies, and we need to tuck them in places so wheels don’t turn.’ Making oneself vulnerable might increase the chances of a majority of citizens seeing the importance of the cause which those engaged in civil disobedience are pursuing. Demonstrations – even large, unpredictable ones – are no longer sufficient. They draw too little attention and do not compel a reaction. Naomi Klein proposed the term ‘blockadia’ as ‘a roving transnational conflict zone’ in which people block extraction – be it open‑pit mines, fracking sites or tar sands pipelines – with their bodies. More often than not, these blockades are organised by local people opposing the fossil fuel industry, not environmental activists per se. Blockadia came to denote resistance to the Keystone XL pipeline as well as Canada’s First Nations‑led movement Idle No More.
    In cities, blocking can be accomplished with highly mobile structures. Like the barricade of the 19th century, they can be quickly assembled, yet are difficult to move; unlike old‑style barricades, they can also be quickly disassembled, removed and hidden (by those who have the engineering and architectural know‑how). Think of super tripods, intricate ‘protest beacons’ based on tensegrity principles, as well as inflatable cobblestones, pioneered by the artist‑activists of Tools for Action (and as analysed in Nick Newman’s recent volume Protest Architecture).
    As recently as 1991, newly independent Latvia defended itself against Soviet tanks with the popular construction of barricades, in a series of confrontations that became known as the Barikādes
    Credit: Associated Press / Alamy
    Inversely, roadblocks can be used by police authorities to stop demonstrations and gatherings from taking place – protesters are seen removing such infrastructure in Dhaka during a general strike in 1999
    Credit: REUTERS / Rafiqur Rahman / Bridgeman
    These inflatable objects are highly flexible, but can also be protective against police batons. They pose an awkward challenge to the authorities, who often end up looking ridiculous when dealing with them, and, as one of the inventors pointed out, they are guaranteed to create a media spectacle. This was also true of the 19th‑century barricade: people posed for pictures in front of them. As Wolfgang Scheppe, a curator of Architecture of the Barricade (currently on display at the Arsenale Institute for Politics of Representation in Venice), explains, these images helped the police to find Communards and mete out punishments after the end of the anarchist experiment.
    Much simpler structures can also be highly effective. In 2019, protesters in Hong Kong filled streets with little archways made from just three ordinary bricks: two standing upright, one resting on top. When touched, the falling top one would buttress the other two, and effectively block traffic. In line with their imperative of ‘be water’, protesters would retreat when the police appeared, but the ‘mini‑Stonehenges’ would remain and slow down the authorities.
    Today, elaborate architectures of protest, such as Extinction Rebellion’s ‘tensegrity towers’, are used to blockade roads and distribution networks – in this instance, Rupert Murdoch’s News UK printworks in Broxbourne, for the media group’s failure to report the climate emergency accurately
    Credit: Extinction Rebellion
    In June 2025, protests erupted in Los Angeles against the Trump administration’s deportation policies. Demonstrators barricaded downtown streets using various objects, including the pink public furniture designed by design firm Rios for Gloria Molina Grand Park. LAPD are seen advancing through tear gas
    Credit: Gina Ferazzi / Los Angeles Times via Getty Images
    Roads which radicals might want to target are not just ones in major metropoles and fancy post‑industrial downtowns. Rather, they might block the arteries leading to ‘fulfilment centres’ and harbours with container shipping. The model is not only Occupy Wall Street, which had initially called for the erection of ‘peaceful barricades’, but also the Occupy that led to the Oakland port shutdown in 2011. In short, such roadblocks disrupt what Phil Neel has called a ‘hinterland’ that is often invisible, yet crucial for contemporary capitalism. More recently, Extinction Rebellion targeted Amazon distribution centres in three European countries in November 2021; in the UK, they aimed to disrupt half of all deliveries on a Black Friday.  
    Will such blockades just anger consumers who, after all, are not present but are impatiently waiting for packages at home? One of the hopes associated with the traditional barricade was always that they might create spaces where protesters, police and previously indifferent citizens get talking; French theorists even expected them to become ‘a machine to produce the people’. That could be why military technology has evolved so that the authorities do not have to get close to the barricade: tear gas was first deployed against those on barricades before it was used in the First World War; so‑called riot control vehicles can ever more easily crush barricades. The challenge, then, for anyone who wishes to block is also how to get in other people’s faces – in order to have a chance to convince them of their cause.       

    2025-06-11
    Kristina Rapacki

    WWW.ARCHITECTURAL-REVIEW.COM
  • Anthropic launches Claude AI models for US national security

    Anthropic has unveiled a custom collection of Claude AI models designed for US national security customers. The announcement represents a potential milestone in the application of AI within classified government environments.

    The ‘Claude Gov’ models have already been deployed by agencies operating at the highest levels of US national security, with access strictly limited to those working within such classified environments.

    Anthropic says these Claude Gov models emerged from extensive collaboration with government customers to address real-world operational requirements. Despite being tailored for national security applications, Anthropic maintains that these models underwent the same rigorous safety testing as other Claude models in their portfolio.

    Specialised AI capabilities for national security

    The specialised models deliver improved performance across several critical areas for government operations. They feature enhanced handling of classified materials, with fewer instances where the AI refuses to engage with sensitive information—a common frustration in secure environments.

    Additional improvements include better comprehension of documents within intelligence and defence contexts, enhanced proficiency in languages crucial to national security operations, and superior interpretation of complex cybersecurity data for intelligence analysis.

    However, this announcement arrives amid ongoing debates about AI regulation in the US. Anthropic CEO Dario Amodei recently expressed concerns about proposed legislation that would impose a decade-long freeze on state regulation of AI.

    Balancing innovation with regulation

    In a guest essay published in The New York Times this week, Amodei advocated for transparency rules rather than regulatory moratoriums. He detailed internal evaluations revealing concerning behaviours in advanced AI models, including an instance where Anthropic’s newest model threatened to expose a user’s private emails unless a shutdown plan was cancelled.

    Amodei compared AI safety testing to wind tunnel trials for aircraft designed to expose defects before public release, emphasising that safety teams must detect and block risks proactively.

    Anthropic has positioned itself as an advocate for responsible AI development. Under its Responsible Scaling Policy, the company already shares details about testing methods, risk-mitigation steps, and release criteria—practices Amodei believes should become standard across the industry.

    He suggests that formalising similar practices industry-wide would enable both the public and legislators to monitor capability improvements and determine whether additional regulatory action becomes necessary.

    Implications of AI in national security

    The deployment of advanced models within national security contexts raises important questions about the role of AI in intelligence gathering, strategic planning, and defence operations.

    Amodei has expressed support for export controls on advanced chips and the military adoption of trusted systems to counter rivals like China, indicating Anthropic’s awareness of the geopolitical implications of AI technology.

    The Claude Gov models could potentially serve numerous applications for national security, from strategic planning and operational support to intelligence analysis and threat assessment—all within the framework of Anthropic’s stated commitment to responsible AI development.

    Regulatory landscape

    As Anthropic rolls out these specialised models for government use, the broader regulatory environment for AI remains in flux. The Senate is currently considering language that would institute a moratorium on state-level AI regulation, with hearings planned before voting on the broader technology measure.

    Amodei has suggested that states could adopt narrow disclosure rules that defer to a future federal framework, with a supremacy clause eventually preempting state measures to preserve uniformity without halting near-term local action.

    This approach would allow for some immediate regulatory protection while working toward a comprehensive national standard.

    As these technologies become more deeply integrated into national security operations, questions of safety, oversight, and appropriate use will remain at the forefront of both policy discussions and public debate.

    For Anthropic, the challenge will be maintaining its commitment to responsible AI development while meeting the specialised needs of government customers for critical applications such as national security.

    See also: Reddit sues Anthropic over AI data scraping

    Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

    Explore other upcoming enterprise technology events and webinars powered by TechForge here.
    The post Anthropic launches Claude AI models for US national security appeared first on AI News.
  • Can AI Mistakes Lead to Real Legal Exposure?

    Posted on: June 5, 2025

    By Tech World Times

    Artificial intelligence tools now touch nearly every corner of modern business, from customer service and marketing to supply chain management and HR. These powerful technologies promise speed, accuracy, and insight, but their missteps can cause more than temporary inconvenience. A single AI-driven error can result in regulatory investigations, civil lawsuits, or public scandals that threaten the foundation of a business. Understanding how legal exposure arises from AI mistakes—and how a skilled attorney protects your interests—is no longer an option, but a requirement for any forward-thinking business owner.
    What Types of AI Errors Create Legal Liability?
    AI does not think or reason like a human; it follows code and statistical patterns, sometimes with unintended results. These missteps can create a trail of legal liability for any business owner. For example, an online retailer’s AI recommends discriminatory pricing, sparking allegations of unfair trade practices. An HR department automates hiring decisions with AI, only to face lawsuits for violating anti-discrimination laws. Even an AI-driven chatbot, when programmed without proper safeguards, can inadvertently give health advice or misrepresent product claims—exposing the company to regulatory penalties. Cases like these are regularly reported in legal news as businesses discover the high cost of digital shortcuts.
    When Is a Business Owner Liable for AI Mistakes?
    Liability rarely rests with the software developer or the tool itself. Courts and regulators expect the business to monitor, supervise, and, when needed, override AI decisions. Suppose a financial advisor uses AI to recommend investments, but the algorithm suggests securities that violate state regulations. Even if the AI was “just following instructions,” the advisor remains responsible for client losses. Similarly, a marketing team cannot escape liability if their AI generates misleading advertising. The bottom line: outsourcing work to AI does not outsource legal responsibility.
    How Do AI Errors Harm Your Reputation and Operations?
    AI mistakes can leave lasting marks on a business’s reputation, finances, and operations. A logistics firm’s route-optimization tool creates data leaks that breach customer privacy and trigger costly notifications. An online business suffers public backlash after an AI-powered customer service tool sends offensive responses to clients. Such incidents erode public trust, drive customers to competitors, and divert resources into damage control rather than growth. Worse, compliance failures can result in penalties or shutdown orders, putting the entire enterprise at risk.
    What Steps Reduce Legal Risk From AI Deployments?
    Careful planning and continuous oversight keep AI tools working for your business—not against it. Compliance is not a “set it and forget it” matter. Proactive risk management transforms artificial intelligence from a liability into a valuable asset.
    Routine audits, staff training, and transparent policies form the backbone of safe, effective AI use in any organization.
    Review the following AI risk mitigation strategies.

    Implement Manual Review of Sensitive Outputs: Require human approval for high-risk tasks, such as legal filings, financial transactions, or customer communications. A payroll company’s manual audits prevented the accidental overpayment of employees by catching AI-generated errors before disbursement.
    Update AI Systems for Regulatory Changes: Stay ahead of new laws and standards by regularly reviewing AI algorithms and outputs. An insurance brokerage avoided regulatory fines by updating their risk assessment models as privacy laws evolved.
    Document Every Incident and Remediation Step: Keep records of AI errors, investigations, and corrections. A healthcare provider’s transparency during a patient data mix-up helped avoid litigation and regulatory penalties.
    Limit AI Access to Personal and Sensitive Data: Restrict the scope and permissions of AI tools to reduce the chance of data misuse. A SaaS provider used data minimization techniques, lowering the risk of exposure in case of a system breach.
    Consult With Attorneys for Custom Policies and Protocols: Collaborate with experienced attorneys to design, review, and update AI compliance frameworks.
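
    Two of the steps above, manual review of sensitive outputs and limiting AI access to sensitive data, can be sketched in a few lines of Python. This is an illustrative sketch only: the risk categories, redaction patterns, and `ReviewQueue` class are hypothetical, not a production compliance tool.

```python
import re
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# Hypothetical risk categories that must pass human review before release.
HIGH_RISK_KINDS = {"legal_filing", "financial_transaction", "customer_communication"}

def redact_pii(text: str) -> str:
    """Data minimization: mask obvious emails and long digit runs before
    text reaches an AI tool (illustrative patterns, not exhaustive)."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    text = re.sub(r"\b\d{6,}\b", "[NUMBER]", text)
    return text

@dataclass
class ReviewQueue:
    """Human-in-the-loop gate: high-risk AI outputs are held for approval."""
    pending: List[Tuple[str, str]] = field(default_factory=list)

    def submit(self, kind: str, output: str) -> Optional[str]:
        if kind in HIGH_RISK_KINDS:
            self.pending.append((kind, output))  # held for a human reviewer
            return None
        return output  # low-risk output is released immediately

queue = ReviewQueue()
released = queue.submit("marketing_copy", "Spring sale starts Monday.")
held = queue.submit("legal_filing", "Draft motion to dismiss...")
```

    Here `released` comes back unchanged, while `held` is `None` and waits in `queue.pending` until a person approves it; the same gating pattern generalises to whatever categories an organisation's counsel deems high-risk.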

    How Do Attorneys Shield Your Business From AI Legal Risks?
    Attorneys provide a critical safety net as AI integrates deeper into business operations. They draft tailored contracts, establish protocols for monitoring and escalation, and assess risks unique to your industry. In the event of an AI-driven incident, legal counsel investigates the facts, manages communication with regulators, and builds a robust defense. By providing training, ongoing guidance, and crisis management support, attorneys ensure that innovation doesn’t lead to exposure—or disaster. With the right legal partner, businesses can harness AI’s power while staying firmly on the right side of the law.
    Tech World Times (TWT) is a global collective focusing on the latest tech news and trends in blockchain, fintech, development and testing, AI, and startups. For guest posts, contact techworldtimes@gmail.com.
  • Drones Set To Deliver Benefits for Labor-Intensive Industries: Forrester

    By John P. Mello Jr.
    June 3, 2025 5:00 AM PT

    Aerial drones are rapidly assuming a key role in the physical automation of business operations, according to a new report by Forrester Research.
    Aerial drones power airborne physical automation by addressing operational challenges in labor-intensive industries, delivering gains in efficiency, intelligence, and experience, according to the report, written by Principal Analyst Charlie Dai with Frederic Giron, Merritt Maxim, Arjun Kalra, and Bill Nagel.
    Some industries, like the public sector, are already reaping benefits, it continued. The report predicted that drones will deliver benefits within the next two years as technologies and regulations mature.
    It noted that drones can help organizations grapple with operational challenges that exacerbate risks and inefficiencies, such as overreliance on outdated, manual processes, fragmented data collection, geographic barriers, and insufficient infrastructure.
    Overreliance on outdated manual processes worsens inefficiencies in resource allocation and amplifies safety risks in dangerous work environments, increasing operational costs and liability, the report maintained.
    “Drones can do things more safely, at least from the standpoint of human risk, than humans,” said Rob Enderle, president and principal analyst at the Enderle Group, an advisory services firm, in Bend, Ore.
    “They can enter dangerous, exposed, very high-risk and even toxic environments without putting their operators at risk,” he told TechNewsWorld. “They can be made very small to go into areas where people can’t physically go. And a single operator can operate several AI-driven drones operating autonomously, keeping staffing levels down.”
    Sensor Magic
    “The magic of the drone is really in the sensor, while the drone itself is just the vehicle that holds the sensor wherever it needs to be,” explained DaCoda Bartels, senior vice president of operations with FlyGuys, a drone services provider, in Lafayette, La.
    “In doing so, it removes all human risk exposure because the pilot is somewhere safe on the ground, sending this sensor, which is, in most cases, more high-resolution than even a human eye,” he told TechNewsWorld. “In essence, it’s a better data collection tool than if you used 100 people. Instead, you deploy one drone around in all these different areas, which is safer, faster, and higher resolution.”
    Akash Kadam, a mechanical engineer with Caterpillar, maker of construction and mining equipment, based in Decatur, Ill., explained that drones have evolved into highly functional tools that directly respond to key inefficiencies and threats to labor-intensive industries. “Within the manufacturing and supply chains, drones are central to optimizing resource allocation and reducing the exposure of humans to high-risk duties,” he told TechNewsWorld.

    “Drones can be used in factory environments to automatically inspect overhead cranes, rooftops, and tight spaces — spaces previously requiring scaffolding or shutdowns, which carry both safety and cost risks,” he said. “A reduction in downtime, along with no requirement for manual intervention in hazardous areas, is provided through this aerial inspection by drones.”
    “In terms of resource usage, drones mounted with thermal cameras and tools for acquiring real-time data can spot bottlenecks, equipment failure, or energy leakage on the production floor,” he continued. “This can facilitate predictive maintenance processes and [optimal] usage of energy, which are an integral part of lean manufacturing principles.”
    Kadam added that drones provide accurate field mapping and multispectral imaging in agriculture, enabling the monitoring of crop health, soil quality, and irrigation distribution. “Besides the reduction in manual scouting, it ensures more effective input management, which leads to more yield while saving resources,” he observed.
    Better Data Collection
    The Forrester report also noted that drones can address problems with fragmented data collection and outdated monitoring systems.
    “Drones use cameras and sensors to get clear, up-to-date info,” said Daniel Kagan, quality manager at Rogers-O’Brien Construction, a general contractor in Dallas. “Some drones even make 3D maps or heat maps,” he told TechNewsWorld. “This helps farmers see where crops need more water, stores check roof damage after a storm, and builders track progress and find delays.”
    “The drone collects all this data in one flight, and it’s ready to view in minutes and not days,” he added.
    Dean Bezlov, global head of business development at MYX Robotics, a visualization technology company headquartered in Sofia, Bulgaria, added that drones are the most cost and time-efficient way to collect large amounts of visual data. “We are talking about two to three images per second with precision and speed unmatched by human-held cameras,” he told TechNewsWorld.
    “As such, drones are an excellent tool for ‘digital twins’ — timestamps of the real world with high accuracy which is useful in industries with physical assets such as roads, rail, oil and gas, telecom, renewables and agriculture, where the drone provides a far superior way of looking at the assets as a whole,” he said.
    Drone Adoption Faces Regulatory Hurdles
    While drones have great potential for many organizations, they will need to overcome some challenges and barriers. For example, Forrester pointed out that insurers deploy drones to evaluate asset risks but face evolving privacy regulations and gaps in data standardization.
    Media firms use drones to take cost-effective, cinematic aerial footage but face strict regulations, it added, while urban use cases like drone taxis and cargo transport remain experimental due to certification delays and airspace management complexities.
    “Regulatory frameworks, particularly in the U.S., remain complex, bureaucratic, and fragmented,” said Mark N. Vena, president and principal analyst with SmartTech Research in Las Vegas. “The FAA’s rules around drone operations — especially for flying beyond visual line of sight [BVLOS] — are evolving but still limit many high-value use cases.”

    “Privacy concerns also persist, especially in urban areas and sectors handling sensitive data,” he told TechNewsWorld.
    “For almost 20 years, we’ve been able to fly drones from a shipping container in one country, in a whole other country, halfway across the world,” said FlyGuys’ Bartels. “What’s limiting the technology from being adopted on a large scale is regulatory hurdles over everything.”
    Enderle added that innovation could also be a hangup for organizations. “This technology is advancing very quickly, making buying something that isn’t instantly obsolete very difficult,” he said. “In addition, there are a lot of drone choices, raising the risk you’ll pick one that isn’t ideal for your use case.”
    “We are still at the beginning of this trend,” he noted. “Robotic autonomous drones are starting to come to market, which will reduce dramatically the need for drone pilots. I expect that within 10 years, we’ll have drones doing many, if not most, of the dangerous jobs currently being done by humans, as robotics, in general, will displace much of the labor force.”

    John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News. Email John.

  • The Most-Cited Computer Scientist Has a Plan to Make AI More Trustworthy

    On June 3, Yoshua Bengio, the world’s most-cited computer scientist, announced the launch of LawZero, a nonprofit that aims to create “safe by design” AI by pursuing a fundamentally different approach from major tech companies. Players like OpenAI and Google are investing heavily in AI agents—systems that not only answer queries and generate images, but can craft plans and take actions in the world. The goal of these companies is to create virtual employees that can do practically any job a human can, known in the tech industry as artificial general intelligence, or AGI. Executives like Google DeepMind’s CEO Demis Hassabis point to AGI’s potential to solve climate change or cure disease as a motivator for its development.

    Bengio, however, says we don’t need agentic systems to reap AI’s rewards—it’s a false choice. He says there’s a chance such a system could escape human control, with potentially irreversible consequences. “If we get an AI that gives us the cure for cancer, but also maybe another version of that AI goes rogue and generates wave after wave of bio-weapons that kill billions of people, then I don’t think it’s worth it,” he says. In 2023, Bengio, along with others including OpenAI’s CEO Sam Altman, signed a statement declaring that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

    Now, Bengio, through LawZero, aims to sidestep the existential perils by focusing on creating what he calls “Scientist AI”: a system trained to understand and make statistical predictions about the world, crucially, without the agency to take independent actions. As he puts it, we could use AI to advance scientific progress without rolling the dice on agentic AI systems.

    Why Bengio Says We Need A New Approach To AI

    The current approach to giving AI agency is “dangerous,” Bengio says.
While most software operates through rigid if-then rules—if the user clicks here, do this—today’s AI systems use deep learning. The technique, which Bengio helped pioneer, trains artificial networks modeled loosely on the brain to find patterns in vast amounts of data. But recognizing patterns is just the first step. To turn these systems into useful applications like chatbots, engineers employ a training process called reinforcement learning. The AI generates thousands of responses and receives feedback on each one: a virtual “carrot” for helpful answers and a virtual “stick” for responses that miss the mark. Through millions of these trial-and-feedback cycles, the system gradually learns to predict which responses are most likely to earn a reward. “It’s more like growing a plant or animal,” Bengio says. “You don’t fully control what the animal is going to do. You provide it with the right conditions, and it grows and it becomes smarter. You can try to steer it in various directions.”

The same basic approach is now being used to imbue AI with greater agency. Models are tasked with challenges that have verifiable answers—like math puzzles or coding problems—and are then rewarded for taking the series of actions that yields the solution. This approach has seen AI shatter previous benchmarks in programming and scientific reasoning. For example, at the beginning of 2024, the best AI model scored only 2% on a standardized test of sorts for AI, consisting of real-world software engineering problems; by December, the best score was an impressive 71.7%.

But with AI’s greater problem-solving ability comes the emergence of new deceptive skills, Bengio says. The last few months have borne witness to AI systems learning to mislead, cheat, and try to evade shutdown—even resorting to blackmail. These episodes have almost exclusively occurred in carefully contrived experiments that all but beg the AI to misbehave—for example, by asking it to pursue its goal at all costs.
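The trial-and-feedback loop described above can be illustrated with a toy sketch (entirely invented for illustration; the action names and reward probabilities are assumptions, and real systems train neural networks over vastly larger action spaces). An agent repeatedly picks an action, receives a reward ("carrot") or nothing ("stick"), and nudges its estimate of each action's value toward what it observed:

```python
import random

# Two hypothetical response types the agent can choose between.
ACTIONS = ["helpful_answer", "off_topic_answer"]
# Hidden reward probabilities the agent must discover (assumed values).
TRUE_REWARD_PROB = {"helpful_answer": 0.9, "off_topic_answer": 0.1}

value = {a: 0.0 for a in ACTIONS}   # estimated value of each action
counts = {a: 0 for a in ACTIONS}    # how often each action was tried

random.seed(0)
for step in range(5000):
    # Explore occasionally; otherwise exploit the best-looking action.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: value[a])
    # Carrot (1.0) or stick (0.0), drawn from the hidden distribution.
    reward = 1.0 if random.random() < TRUE_REWARD_PROB[action] else 0.0
    counts[action] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    value[action] += (reward - value[action]) / counts[action]

# After many cycles, the agent strongly prefers the rewarded behavior.
print(value["helpful_answer"] > value["off_topic_answer"])  # prints True
```

The point of the sketch is Bengio's "growing a plant" observation: nothing in the loop specifies *how* to answer helpfully; the preference emerges from the reward signal alone, which is also why unintended behaviors can emerge when the reward is imperfectly specified.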
Reports of such behavior in the real world, though, have begun to surface. The agent built by popular AI coding startup Replit ignored explicit instructions not to edit a system file that could break the company’s software, in what CEO Amjad Masad described as an “Oh f***” moment on the Cognitive Revolution podcast in May. The company’s engineers intervened, cutting the agent’s access by moving the file to a secure digital sandbox, only for the AI agent to attempt to “socially engineer” the user to regain access.

The quest to build human-level AI agents using techniques known to produce deceptive tendencies, Bengio says, is comparable to a car speeding down a narrow mountain road, with steep cliffs on either side and thick fog obscuring the path ahead. “We need to set up the car with headlights and put some guardrails on the road,” he says.

What is “Scientist AI”?

LawZero’s focus is on developing “Scientist AI” which, as Bengio describes it, would be fundamentally non-agentic, trustworthy, and focused on understanding and truthfulness, rather than pursuing its own goals or merely imitating human behavior. The aim is to create a powerful tool that, while lacking the autonomy other models have, is capable of generating hypotheses and accelerating scientific progress to “help us solve challenges of humanity,” Bengio says.

LawZero has raised nearly million already from several philanthropic backers, including Schmidt Sciences and Open Philanthropy. “We want to raise more because we know that as we move forward, we’ll need significant compute,” Bengio says. But even ten times that figure would pale in comparison to the roughly billion spent last year by tech giants on aggressively pursuing AI. Bengio’s hope is that Scientist AI could help ensure the safety of highly autonomous systems developed by other players. “We can use those non-agentic AIs as guardrails that just need to predict whether the action of an agentic AI is dangerous,” Bengio says.
Technical interventions will only ever be one part of the solution, he adds, noting the need for regulations to ensure that safe practices are adopted.

LawZero, named after science fiction author Isaac Asimov’s zeroth law of robotics—“a robot may not harm humanity, or, by inaction, allow humanity to come to harm”—is not the first nonprofit founded to chart a safer path for AI development. OpenAI was founded as a nonprofit in 2015 with the goal of “ensuring AGI benefits all of humanity,” and was intended to serve as a counterbalance to industry players guided by profit motives. Since opening a for-profit arm in 2019, the organization has become one of the most valuable private companies in the world, and has faced criticism, including from former staffers who argue it has drifted from its founding ideals. “Well, the good news is we have the hindsight of maybe what not to do,” Bengio says, adding that he wants to avoid profit incentives and “bring governments into the governance of LawZero.”

“I think everyone should ask themselves, ‘What can I do to make sure my children will have a future?’” Bengio says. In March, he stepped down as scientific director of Mila, the academic lab he co-founded in the early nineties, in an effort to reorient his work toward tackling AI risk more directly. “Because I’m a researcher, my answer is, ‘Okay, I’m going to work on this scientific problem where maybe I can make a difference,’ but other people may have different answers.”
Technical interventions will only ever be one part of the solution, he adds, noting the need for regulations to ensure that safe practices are adopted.LawZero, named after science fiction author Isaac Asimov’s zeroth law of robotics—“a robot may not harm humanity, or, by inaction, allow humanity to come to harm”—is not the first nonprofit founded to chart a safer path for AI development. OpenAI was founded as a nonprofit in 2015 with the goal of “ensuring AGI benefits all of humanity,” and intended to serve a counterbalance to industry players guided by profit motives. Since opening a for-profit arm in 2019, the organization has become one of the most valuable private companies in the world, and has faced criticism, including from former staffers, who argue it has drifted from its founding ideals. "Well, the good news is we have the hindsight of maybe what not to do,” Bengio says, adding that he wants to avoid profit incentives and “bring governments into the governance of LawZero.”“I think everyone should ask themselves, ‘What can I do to make sure my children will have a future,’” Bengio says. In March, he stepped down as scientific director of Mila, the academic lab he co-founded in the early nineties, in an effort to reorient his work towards tackling AI risk more directly. “Because I'm a researcher, my answer is, ‘okay, I'm going to work on this scientific problem where maybe I can make a difference,’ but other people may have different answers." #mostcited #computer #scientist #has #plan
    TIME.COM
    The Most-Cited Computer Scientist Has a Plan to Make AI More Trustworthy
    On June 3, Yoshua Bengio, the world’s most-cited computer scientist, announced the launch of LawZero, a nonprofit that aims to create “safe by design” AI by pursuing a fundamentally different approach from that of major tech companies. Players like OpenAI and Google are investing heavily in AI agents—systems that not only answer queries and generate images, but can craft plans and take actions in the world. The goal of these companies is to create virtual employees that can do practically any job a human can, known in the tech industry as artificial general intelligence, or AGI. Executives like Google DeepMind’s CEO Demis Hassabis point to AGI’s potential to solve climate change or cure disease as a motivator for its development. Bengio, however, says we don't need agentic systems to reap AI's rewards—it's a false choice. There's a chance, he says, that such a system could escape human control, with potentially irreversible consequences. “If we get an AI that gives us the cure for cancer, but also maybe another version of that AI goes rogue and generates wave after wave of bio-weapons that kill billions of people, then I don't think it's worth it,” he says. In 2023, Bengio, along with others including OpenAI’s CEO Sam Altman, signed a statement declaring that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
    Now, Bengio, through LawZero, aims to sidestep these existential perils by focusing on creating what he calls “Scientist AI”—a system trained to understand and make statistical predictions about the world, crucially, without the agency to take independent actions. As he puts it, we could use AI to advance scientific progress without rolling the dice on agentic AI systems.
    Why Bengio Says We Need a New Approach to AI
    The current approach to giving AI agency is “dangerous,” Bengio says.
While most software operates through rigid if-then rules—if the user clicks here, do this—today's AI systems use deep learning. The technique, which Bengio helped pioneer, trains artificial networks modeled loosely on the brain to find patterns in vast amounts of data. But recognizing patterns is just the first step. To turn these systems into useful applications like chatbots, engineers employ a training process called reinforcement learning. The AI generates thousands of responses and receives feedback on each one: a virtual “carrot” for helpful answers and a virtual “stick” for responses that miss the mark. Through millions of these trial-and-feedback cycles, the system gradually learns to produce the responses most likely to earn a reward. “It’s more like growing a plant or animal,” Bengio says. “You don’t fully control what the animal is going to do. You provide it with the right conditions, and it grows and it becomes smarter. You can try to steer it in various directions.”
The same basic approach is now being used to imbue AI with greater agency. Models are given challenges with verifiable answers—like math puzzles or coding problems—and are rewarded for taking the series of actions that yields the solution. This approach has seen AI shatter previous benchmarks in programming and scientific reasoning. At the beginning of 2024, for example, the best AI model scored only 2% on a benchmark made up of real-world software engineering problems; by December, the best score was an impressive 71.7%. But with AI’s greater problem-solving ability comes the emergence of new deceptive skills, Bengio says. The last few months have seen AI systems learn to mislead, cheat, and try to evade shutdown—even resorting to blackmail. These behaviors have almost exclusively appeared in carefully contrived experiments that all but beg the AI to misbehave—for example, by asking it to pursue its goal at all costs.
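The trial-and-feedback loop described above can be sketched as a toy example. Everything here is illustrative, not how any real lab trains models: production systems apply gradient updates to billion-parameter neural networks, not a lookup table of scores.

```python
import random

random.seed(0)

# Hypothetical rewards: a "carrot" (+1) for the helpful response,
# a "stick" (-1) for responses that miss the mark.
REWARDS = {"helpful": 1.0, "off-topic": -1.0, "rude": -1.0}

def train(steps=2000, lr=0.1):
    """Run many trial-and-feedback cycles, nudging each response's
    score up or down according to the reward it receives."""
    scores = {response: 0.0 for response in REWARDS}
    for _ in range(steps):
        response = random.choice(list(scores))       # try a response
        scores[response] += lr * REWARDS[response]   # carrot or stick
    return scores

scores = train()
best = max(scores, key=scores.get)
print(best)  # the rewarded "helpful" response ends up most preferred
```

The same shape of loop, with verifiable rewards on math or coding tasks in place of the fixed reward table, is what the article describes being used to give models greater agency.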
Reports of such behavior in the real world, though, have begun to surface. Popular AI coding startup Replit’s agent ignored explicit instructions not to edit a system file that could break the company’s software, in what CEO Amjad Masad described as an “Oh f***” moment on the Cognitive Revolution podcast in May. The company’s engineers intervened, cutting the agent’s access by moving the file to a secure digital sandbox, only for the AI agent to attempt to “socially engineer” the user to regain access.
The quest to build human-level AI agents using techniques known to produce deceptive tendencies, Bengio says, is comparable to a car speeding down a narrow mountain road, with steep cliffs on either side and thick fog obscuring the path ahead. “We need to set up the car with headlights and put some guardrails on the road,” he says.
What Is “Scientist AI”?
LawZero’s focus is on developing “Scientist AI” which, as Bengio describes it, would be fundamentally non-agentic, trustworthy, and focused on understanding and truthfulness, rather than pursuing its own goals or merely imitating human behavior. The aim is to create a powerful tool that, while lacking the autonomy of other models, is capable of generating hypotheses and accelerating scientific progress to “help us solve challenges of humanity,” Bengio says.
LawZero has already raised nearly $30 million from several philanthropic backers, including Schmidt Sciences and Open Philanthropy. “We want to raise more because we know that as we move forward, we'll need significant compute,” Bengio says. But even ten times that figure would pale in comparison to the roughly $200 billion spent last year by tech giants aggressively pursuing AI. Bengio’s hope is that Scientist AI could help ensure the safety of highly autonomous systems developed by other players. “We can use those non-agentic AIs as guardrails that just need to predict whether the action of an agentic AI is dangerous,” Bengio says.
Technical interventions will only ever be one part of the solution, he adds, noting the need for regulations to ensure that safe practices are adopted.
LawZero, named after science fiction author Isaac Asimov’s zeroth law of robotics—“a robot may not harm humanity, or, by inaction, allow humanity to come to harm”—is not the first nonprofit founded to chart a safer path for AI development. OpenAI was founded as a nonprofit in 2015 with the goal of “ensuring AGI benefits all of humanity,” intended to serve as a counterbalance to industry players guided by profit motives. Since opening a for-profit arm in 2019, the organization has become one of the most valuable private companies in the world, and has faced criticism, including from former staffers who argue it has drifted from its founding ideals. “Well, the good news is we have the hindsight of maybe what not to do,” Bengio says, adding that he wants to avoid profit incentives and “bring governments into the governance of LawZero.”
“I think everyone should ask themselves, ‘What can I do to make sure my children will have a future?’” Bengio says. In March, he stepped down as scientific director of Mila, the academic lab he co-founded in the early nineties, in an effort to reorient his work towards tackling AI risk more directly. “Because I'm a researcher, my answer is, ‘Okay, I'm going to work on this scientific problem where maybe I can make a difference,’ but other people may have different answers.”
  • Best Meta Loadouts For COD Black Ops 6 Season 4

    Black Ops 6 Season 4 has finally rolled out across all platforms, bringing with it a fresh wave of content for fans to dive into. From the return of Grief in Zombies, making its first appearance in over a decade, to the debut of new Multiplayer maps like Fugitive and Shutdown, there’s no shortage of action to jump into.
    GAMERANT.COM