• The Word is Out: Danish Ministry Drops Microsoft, Goes Open Source

    Key Takeaways

    Denmark’s Ministry of Digitalization will leave the Microsoft ecosystem in favor of Linux and other open-source software.
    Half of the Ministry’s employees are set to switch to Linux and LibreOffice by summer, and the rest by fall, following similar moves by Copenhagen and Aarhus.
    The decision is driven by costs, digital sovereignty, and politics, including fears of becoming too dependent on US tech companies.

    Denmark’s Ministry of Digitalization has recently announced that it will leave the Microsoft ecosystem in favor of Linux and other open-source software.
    Minister Caroline Stage Olsen revealed this in an interview with Politiken, the country’s leading newspaper. According to Olsen, the Ministry plans to switch half of its employees to Linux and LibreOffice by summer, and the rest by fall.
    The announcement comes after Denmark’s largest cities – Copenhagen and Aarhus – made similar moves earlier this month.
    Why the Danish Ministry of Digitalization Switched to Open-Source Software
    The three main reasons Denmark is moving away from Microsoft are costs, politics, and security.
    In the case of Aarhus, the city was able to slash its annual costs from 800K kroner to just 225K by replacing Microsoft with a German service provider. 
    Costs are also a pain point for Copenhagen, which saw its spending on Microsoft balloon from 313M kroner in 2018 to 538M kroner in 2023.
    The switch is also part of a broader push to increase Denmark’s digital sovereignty. In her LinkedIn post, Olsen further explained that the strategy is not about isolation or digital nationalism, adding that Denmark should not turn its back completely on global tech companies like Microsoft.

    Instead, it’s about avoiding becoming so dependent on these companies that the country can no longer act freely.
    Then there’s politics. Since his reelection earlier this year, US President Donald Trump has repeatedly threatened to take over Greenland, an autonomous territory of Denmark. 
    In May, the Danish Foreign Minister Lars Løkke Rasmussen summoned the US ambassador regarding news that US spy agencies have been told to focus on the territory.
    If the relationship between the two countries continues to erode, Trump could order Microsoft and other US tech companies to cut Denmark off from their services. After all, Microsoft and Facebook’s parent company Meta have close ties to the US president, each having contributed $1M to his inauguration in January.
    Denmark Isn’t Alone: Other EU Countries Are Making Similar Moves
    Denmark is just one of a growing number of European Union (EU) countries taking measures to become more digitally independent.
    Germany’s Federal Digital Minister Karsten Wildberger emphasized the need to be more independent of global tech companies during the re:publica internet conference in May. He added that IT companies in the EU have the opportunity to create tech that is based on the region’s values.

    Meanwhile, Bert Hubert, a technical advisor to the Dutch Electoral Council, wrote in February that ‘it is no longer safe to move our governments and societies to US clouds.’ He said that America is no longer a ‘reliable partner,’ making it risky to have the data of European governments and businesses at the mercy of US-based cloud providers.
    Earlier this month, the chief prosecutor of the International Criminal Court, Karim Khan, experienced a disconnection from his Microsoft-based email account, sparking uproar across the region. 
    Speculation quickly arose that the incident was linked to sanctions previously imposed on the ICC by the Trump administration, an assertion Microsoft has denied.
    Weaning the EU Away from US Tech is Possible, But Challenges Lie Ahead
    Change like this doesn’t happen overnight. Just finding, let alone developing, reliable alternatives to tools that have been part of daily workflows for decades is a massive undertaking.
    It will also take time for users to adapt to these new tools, especially when transitioning to an entirely new ecosystem. In Aarhus, for example, municipal staff initially viewed the shift to open source as a step down from the familiarity and functionality of Microsoft products.
    Still, these hurdles are likely temporary. Momentum is building, with growing calls for digital independence from leaders like Ministers Olsen and Wildberger.
    Initiatives such as the Digital Europe Programme, which seeks to reduce reliance on foreign systems and solutions, further accelerate this push. As a result, the EU’s transition could arrive sooner rather than later.

    As technology continues to evolve—from the return of 'dumbphones' to faster and sleeker computers—seasoned tech journalist Cedric Solidon continues to dedicate himself to writing stories that inform, empower, and connect with readers across all levels of digital literacy.
    With 20 years of professional writing experience, this University of the Philippines Journalism graduate has carved out a niche as a trusted voice in tech media. Whether he's breaking down the latest advancements in cybersecurity or explaining how silicon-carbon batteries can extend your phone’s battery life, his writing remains rooted in clarity, curiosity, and utility.
    Long before he was writing for Techreport, HP, Citrix, SAP, Globe Telecom, CyberGhost VPN, and ExpressVPN, Cedric's love for technology began at home courtesy of a Nintendo Family Computer and a stack of tech magazines.
    Growing up, his days were often filled with sessions of Contra, Bomberman, Red Alert 2, and the criminally underrated Crusader: No Regret. But gaming wasn't his only gateway to tech. 
    He devoured every T3, PCMag, and PC Gamer issue he could get his hands on, often reading them cover to cover. It wasn’t long before he explored the early web in IRC chatrooms, online forums, and fledgling tech blogs, soaking in every byte of knowledge from the late '90s and early 2000s internet boom.
    That fascination with tech didn’t just stick. It evolved into a full-blown calling.
    After graduating with a degree in Journalism, he began his writing career at the dawn of Web 2.0. What started with small editorial roles and freelance gigs soon grew into a full-fledged career.
    He has since collaborated with global tech leaders, lending his voice to content that bridges technical expertise with everyday usability. He’s also written annual reports for Globe Telecom and consumer-friendly guides for VPN companies like CyberGhost and ExpressVPN, empowering readers to understand the importance of digital privacy.
    His versatility spans not just tech journalism but also technical writing. He once worked with a local tech company developing web and mobile apps for logistics firms, crafting documentation and communication materials that brought together user-friendliness with deep technical understanding. That experience sharpened his ability to break down dense, often jargon-heavy material into content that speaks clearly to both developers and decision-makers.
    At the heart of his work lies a simple belief: technology should feel empowering, not intimidating. Even if the likes of smartphones and AI are now commonplace, he understands that there's still a knowledge gap, especially when it comes to hardware or the real-world benefits of new tools. His writing hopes to help close that gap.
    Cedric’s writing style reflects that mission. It’s friendly without being fluffy and informative without being overwhelming. Whether writing for seasoned IT professionals or casual readers curious about the latest gadgets, he focuses on how a piece of technology can improve our lives, boost our productivity, or make our work more efficient. That human-first approach makes his content feel more like a conversation than a technical manual.
    As his writing career progresses, his passion for tech journalism remains as strong as ever. With the growing need for accessible, responsible tech communication, he sees his role not just as a journalist but as a guide who helps readers navigate a digital world that’s often as confusing as it is exciting.
    From reviewing the latest devices to unpacking global tech trends, Cedric isn’t just reporting on the future; he’s helping to write it.


  • In conflict: Putting Russia’s datacentre market under the microscope

    When Russian troops invaded Ukraine on 24 February 2022, Russia’s datacentre sector was one of the fastest-growing segments of the country’s IT industry, with annual growth rates in the region of 10-12%.
    However, with the conflict resulting in the imposition of Western sanctions against Russia and an outflow of US-based tech companies from the country, including Apple and Microsoft, optimism about the sector’s potential for further growth soon disappeared.
    In early March 2025, it was reported that Google had disconnected from traffic exchange points and datacentres in Russia, leading to concerns about how this could negatively affect the speed of access to some Google services for Russian users.
    Initially, there was hope that domestic technology and datacentre providers might be able to plug the gaps left by the exodus of the US tech giants, but it seems they could not keep up with the hosting demands of Russia’s increasingly digital economy.
    Oleg Kim, director of the hardware systems department at Russian IT company Axoft, says the departure of foreign cloud providers and equipment manufacturers has led to a serious shortage of compute capacity in Russia.
    That’s because the exodus triggered a sharp initial spike in demand for domestic datacentres, Kim continues, but Russian providers simply did not have time to expand their capacity on the required scale.

    According to the estimates of Key Point, one of Russia’s largest datacentre networks, meeting Russia’s demand for datacentres will require facilities with a total capacity of 30,000 racks to be built each year over the next five years.
    On top of this, it has also become more costly to build datacentres in Russia.
    Estimates suggest that prior to 2022, the cost of a datacentre rack totalled 100,000 rubles ($1,200), but it now exceeds 150,000 rubles.
    And analysts at Forbes Russia expect these figures will continue to grow, due to rising logistics costs and the impact the war is having on the availability of skilled labour in the construction sector.
    The impact of these challenges is being keenly felt by users, with several of the country’s large banks experiencing serious problems when finding suitable locations for their datacentres.
    Sberbank is among the firms affected, with its chairperson, German Gref, speaking out previously about how the bank is in need of a datacentre with at least 200MW of capacity, but would ideally need 300-400MW to address its compute requirements.
    Stanislav Bliznyuk, chairperson of T-Bank, says trying to build even two 50MW datacentres to meet its needs is proving problematic. “Finding locations where such capacity and adequate tariffs are available is a difficult task,” he said.

    Read more about datacentre developments

    North Lincolnshire Council has received a planning permission application for another large-scale datacentre development, in support of its bid to become an AI Growth Zone.
    A proposal to build one of the biggest datacentres in Europe has been submitted to Hertsmere Borough Council, and already has the support of the technology secretary and local councillors.
    The UK government has unveiled its 50-point AI action plan, which commits to building sovereign artificial intelligence capabilities and accelerating AI datacentre developments – but questions remain about the viability of the plans.

    Despite this, T-Bank is establishing its own network of data processing centres – the first of which should open in early 2027, he confirmed in November 2024.
    Kirill Solyev, head of the engineering infrastructure department at the Softline Group of Companies, which specialises in IT, says many large Russian companies are resorting to building their own datacentres because compute capacity is in such short supply.
    The situation is, however, complicated by the lack of suitable locations for datacentres in the largest cities of Russia – Moscow and St Petersburg. “For example, to build a datacentre with a capacity of 60MW, finding a suitable site can take up to three years,” says Solyev. “In Moscow, according to preliminary estimates, there are about 50MW of free capacity left, which is equivalent to 2-4 large commercial datacentres.
    “The capacity deficit only in the southern part of the Moscow region is predicted at 564MW by 2030, and up to 3.15GW by 2042.”
    As a result, datacentre operators and investors are now looking for suitable locations outside of Moscow and St Petersburg, and seeking to co-locate new datacentres in close proximity to renewable energy sources.
    And this will be important, as demand for datacentre capacity in Russia is expected to increase, as it is in most of the rest of the world, due to the growing use of artificial intelligence (AI) tools and services.
    The energy-intensive nature of AI workloads will put further pressure on operators that are already struggling to meet the compute capacity demands of their customers.

    Speaking at the recent Ural Forum on cyber security in finance, Alexander Kraynov, director of AI technology development at Yandex, says solving the energy consumption issue of AI datacentres will not be easy.
    “The world is running out of electricity, including for AI, while the same situation is observed in Russia,” he said. “In order to ensure a stable energy supply of a newly built large datacentre, we will need up to one year.”
    According to a recent report in the Russian business paper Vedomosti, as of April 2024, Russian datacentres consumed about 2.6GW, equivalent to about 1% of the installed capacity of the Unified Energy System of Russia.
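    As a rough sanity check on that figure (my arithmetic, not the report’s), a 2.6GW load that represents roughly 1% of installed capacity implies a grid of about 260GW in total:

```python
# Back-of-the-envelope check of the Vedomosti figure: if datacentres
# draw about 2.6GW and that is ~1% of the Unified Energy System's
# installed capacity, the implied grid total is roughly 260GW.
datacentre_load_gw = 2.6
share_of_grid = 0.01
print(f"implied installed capacity: {datacentre_load_gw / share_of_grid:.0f} GW")
```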
    Accommodating AI workloads will also mean operators will need to purchase additional equipment, including expensive accelerators based on graphics processing units and higher-performing data storage systems.
    The implementation of these plans and the viability of these purchases is likely to be seriously complicated by the current sanctions regime against Russia.
    That said, Russia’s prime minister, Mikhail Mishustin, claims this part of the datacentre supply equation is being partially solved by an uptick in the domestic production of datacentre kit.
    According to Mishustin, more than half of the server equipment and industrial storage and information processing systems needed for datacentres are already being produced in Russia – and these figures will continue to grow.

    The government also plans to provide additional financial support to the industry, as datacentre construction in Russia has so far been held back by relatively long payback periods – up to 10 years in some cases.
    One of the possible support measures on offer could include the subsidisation of at least part of the interest rates on loans to datacentre developers and operators.
    At the same time, though, the government’s actions in other areas have made it harder for operators to build new facilities.
    For example, in March 2025, the Russian government significantly tightened the norms for establishing new datacentres, introducing new design rules for data processing centres that came into force after approval by the Russian Ministry of Construction.
    According to Nikita Tsaplin, CEO of Russian hosting provider RUVDS, the rules have added bureaucracy to the sector by treating datacentres as typical construction objects.
    He predicts this could extend the construction cycle of a datacentre from around five years to seven.
    The government’s intervention was intended to prevent the installation of servers in residential settings, such as garages, but it looks set to complicate an already complex situation – prompting questions about whether Russia’s datacentre market will ever reach its full potential.
  • Meta and Yandex Spying on Android Users Through Localhost Ports: The Dying State of Online Privacy


    Published: June 4, 2025

    Key Takeaways

    Meta and Yandex have been caught secretly listening on localhost ports and using them to transfer sensitive data from Android devices.
    The corporations use Meta Pixel and Yandex Metrica scripts to transfer cookies from browsers to local apps. Using incognito mode or a VPN can’t fully protect users against it.
    A Meta spokesperson has called this a ‘miscommunication,’ which seems to be an attempt to underplay the situation.

    Wake up, Android folks! A new privacy scandal has hit your area of town. According to a new report led by researchers at Radboud University, Meta and Yandex have been listening on localhost ports to link your web browsing data with your identity and collect personal information without your consent.
    The companies use the Meta Pixel and Yandex Metrica scripts, which are embedded on 5.8 million and 3 million websites, respectively, to connect with their native apps on Android devices through localhost sockets.
    This creates a communication path between the cookies in your browser and the local apps, establishing a channel for transferring personal information off your device.
    And if you think your browser’s incognito mode or a VPN can protect you, you are mistaken. This method of data harvesting can’t be defeated by tweaking privacy or cookie settings, browsing incognito, or routing traffic through a VPN.
    How Does It Work?
    Here’s the method used by Meta to spy on Android devices:

    As many as 22% of the top 1 million websites contain Meta Pixel – a tracking code that helps website owners measure ad performance and track user behaviour.
    When Meta Pixel loads, it creates a special cookie called _fbp, which is supposed to be a first-party cookie, meaning no third party, including Meta’s own apps, should be able to read it. The _fbp cookie identifies your browser whenever you visit a website, so it can reveal which person is accessing which websites.
    However, Meta, being Meta, went and found a loophole around this. Now, whenever you run Facebook or Instagram on your Android device, they can open up listening ports, specifically a TCP port and a UDP port, on your phone in the background.
    Whenever you load a website in your browser, the Meta Pixel uses WebRTC with SDP munging, which essentially hides the _fbp cookie value inside the SDP message before it is transmitted to your phone’s localhost.
    Since Facebook and Instagram are already listening on this port, they receive the _fbp cookie value and can easily tie your identity to the website you’re visiting. Remember, Facebook and Instagram already have your identification details, since you’re always logged in on these platforms.

    The report also says that Meta can link all the _fbp values received from various websites to your ID. Simply put, Meta knows which person is viewing which set of websites.
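    To make the handoff concrete, here is a minimal Python sketch of the localhost channel described above. It is not Meta’s actual code: the port number, the message format, and the use of a raw socket on the ‘browser’ side are illustrative assumptions. In the real attack, the browser half is an ordinary WebRTC connection whose munged SDP causes packets carrying the _fbp value to arrive at the app’s port.

```python
import socket
import threading

# Minimal sketch of the localhost handoff, with an assumed port number
# and message format; Meta's real implementation differs in both.
PORT = 12580  # hypothetical localhost UDP port the native app listens on

def native_app(sock):
    """Plays the role of the already-logged-in Facebook/Instagram app."""
    data, _ = sock.recvfrom(2048)
    # Extract the smuggled cookie from the SDP-style attribute.
    fbp = data.decode().split("ice-ufrag:")[1].strip()
    # The app holds your real identity, so it can now join the browser
    # cookie to your account and report the pair to its servers.
    print(f"app linked browser cookie {fbp!r} to the logged-in account")

listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(("127.0.0.1", PORT))
t = threading.Thread(target=native_app, args=(listener,))
t.start()

# 'Browser' side: the tracking script hides the first-party _fbp cookie
# inside an SDP-style attribute and sends it to localhost.
fbp_cookie = "fb.1.1717500000000.123456789"  # example _fbp value
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(f"a=ice-ufrag:{fbp_cookie}\r\n".encode(), ("127.0.0.1", PORT))

t.join()
listener.close()
sender.close()
```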
    Yandex also uses a similar method to harvest your personal data.

    Whenever you open a Yandex app, such as Yandex Maps, Yandex Browser, Yandex Search, or Navigator, it opens up ports like 29009, 30102, 29010, and 30103 on your phone. 
    When you visit a website that contains the Yandex Metrica Script, Yandex’s version of Meta Pixel, the script sends requests to Yandex servers containing obfuscated parameters. 
    These parameters are then sent over HTTP and HTTPS to localhost, either directly to the IP address 127.0.0.1 or via the yandexmetrica.com domain, which secretly resolves to 127.0.0.1.
    Now, the Yandex Metrica SDK inside the Yandex apps receives these parameters and responds with device identifiers, such as the Android Advertising ID, UUIDs, or device fingerprints. This entire message is encrypted to hide what it contains.
    The Yandex Metrica Script receives this info and sends it back to the Yandex servers. Just like Meta, Yandex can also tie your website activity to the device information shared by the SDK.
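    A comparable Python sketch of the Yandex-style exchange follows. The port (29009) is one of those named above, but the request path, parameter names, and plain-JSON reply are invented for illustration, since the real Metrica SDK obfuscates and encrypts its traffic. The yandexmetrica.com domain resolving to 127.0.0.1 is what lets the web script reach this kind of local listener over ordinary HTTP(S).

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

PORT = 29009  # one of the Yandex app ports named in the article

class MetricaStub(BaseHTTPRequestHandler):
    """Plays the role of the Yandex Metrica SDK inside a native app."""

    def do_GET(self):
        # Reply to the web script with device identifiers. The real SDK
        # encrypts this payload; plain JSON keeps the handoff visible.
        body = json.dumps({"advertising_id": "38400000-8cf0-11bd-b23e-10b96e40000d"})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

    def log_message(self, *args):
        pass  # silence default request logging

server = HTTPServer(("127.0.0.1", PORT), MetricaStub)
threading.Thread(target=server.serve_forever, daemon=True).start()

# 'Browser' side: the Metrica script queries localhost (directly, or via
# yandexmetrica.com) and forwards whatever it receives to Yandex servers.
with urlopen(f"http://127.0.0.1:{PORT}/watch?obfuscated=params") as resp:
    print("script received:", resp.read().decode())
server.shutdown()
```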

    Meta’s Infamous History with Privacy Norms
    This is nothing new or unthinkable for Meta. The Mark Zuckerberg-led social media giant has a long history of such privacy violations. 
    For instance, in 2024, the company was accused of collecting biometric data from Texas users without their express consent. The company settled the lawsuit by paying $1.4B. 
    Another infamous case was the Cambridge Analytica scandal in 2018, in which a political consulting firm accessed the private data of 87 million Facebook users without consent. The FTC fined Meta $5B for privacy violations, alongside a $100M settlement with the US Securities and Exchange Commission. 
    Meta Pixel has also come under scrutiny before, when it was accused of collecting sensitive health information from hospital websites. In another case dating back to 2012, Meta was accused of tracking users even after they logged out from their Facebook accounts. In this case, Meta paid M and promised to delete the collected data. 
    In 2024, South Korea also fined Meta M for inappropriately collecting personal data, such as sexual orientation and political beliefs, of 980K users.
In September 2024, Meta was fined $101.6M by the Irish Data Protection Commission for inadvertently storing user passwords in plain text in such a way that employees could search for them. The passwords were never encrypted and were essentially exposed internally.
    So, the latest scandal isn’t entirely out of character for Meta. It has been finding ways to collect your data ever since its incorporation, and it seems like it will continue to do so, regardless of the regulations and safeguards in place.
That said, Meta’s recent tracking method is insanely dangerous because there’s no safeguard against it. Even if you visit websites in incognito mode or use a VPN, Meta Pixel can still track your activities.
The past lawsuits also show a clear pattern: Meta doesn’t fight a lawsuit to the end to try to win it. It either accepts the fine or settles for monetary compensation. This effectively shows that it passively accepts, and even ‘owns’, the illegitimate tracking methods it has been using for decades. It’s quite possible that top management views these fines and penalties as a cost of collecting data.
    Meta’s Timid Response
    Meta’s response claims that there’s some ‘miscommunication’ regarding Google policies. However, the method used in the aforementioned tracking scandal isn’t something that can simply happen due to ‘faulty design’ or miscommunication. 

    We are in discussions with Google to address a potential miscommunication regarding the application of their policies – Meta Spokesperson

This kind of unethical tracking method has to be deliberately designed by engineers for it to work at such a large scale. While Meta is still trying to underplay the situation, it has paused the ‘feature’ (yep, that’s what they’re calling it) for now. The report also notes that, as of June 3, Facebook and Instagram are no longer actively listening on the ports in question.
    Here’s what will possibly happen next:

    A lawsuit may be filed based on the report.
An investigating committee might be formed to look into the matter.
    The company will come up with lame excuses, such as misinterpretation or miscommunication of policy guidelines.
    Meta will eventually settle the lawsuit or bear the fine with pride, like it has always done. 

The regulatory authorities are effectively chasing a rat that finds a new hole to hide in every day. Companies like Meta and Yandex seem to stay one step ahead of these regulations and have mastered the art of finding loopholes.
More than any legislative technicality, it’s the ethics of these companies that incidents like this lay bare. The intent of these regulations is to protect personal information, and the fact that Meta and Yandex blatantly circumvent their spirit shows the horrific state of surveillance capitalism these corporations operate in.

    Krishi is a seasoned tech journalist with over four years of experience writing about PC hardware, consumer technology, and artificial intelligence.  Clarity and accessibility are at the core of Krishi’s writing style.
    He believes technology writing should empower readers—not confuse them—and he’s committed to ensuring his content is always easy to understand without sacrificing accuracy or depth.
    Over the years, Krishi has contributed to some of the most reputable names in the industry, including Techopedia, TechRadar, and Tom’s Guide. A man of many talents, Krishi has also proven his mettle as a crypto writer, tackling complex topics with both ease and zeal. His work spans various formats—from in-depth explainers and news coverage to feature pieces and buying guides. 
Behind the scenes, Krishi operates from a dual-monitor setup (including a 29-inch LG UltraWide) that’s always buzzing with news feeds, technical documentation, and research notes, as well as the occasional gaming session that keeps him fresh.
    Krishi thrives on staying current, always ready to dive into the latest announcements, industry shifts, and their far-reaching impacts.  When he's not deep into research on the latest PC hardware news, Krishi would love to chat with you about day trading and the financial markets—oh! And cricket, as well.

  • Meta Apps Have Been Covertly Tracking Android Users' Web Activity for Months

I don't expect Meta to respect my data or my privacy, but the company continues to surprise me with how low it's willing to go in the name of data collection. The latest such story comes to us from a report titled "Disclosure: Covert Web-to-App Tracking via Localhost on Android." In short, Meta and Yandex (a Russian technology company) have been tracking potentially billions of Android users by abusing a security loophole in Android. That loophole allows the companies to access identifying browsing data from your web browser as long as you have their Android apps installed.

How does this tracking work?
As the report explains, Android allows any installed app with internet permissions to access the "loopback address," or localhost, an address a device uses to communicate with itself. As it happens, your web browser also has access to the localhost, which allows JavaScript embedded on certain websites to connect to Android apps and share browsing data and identifiers.

What are those scripts, you might ask? In this case, they're Meta Pixel and Yandex Metrica, scripts that let companies track users on their sites. Trackers are an unfortunate part of the modern internet, but Meta Pixel is only supposed to be able to follow you while you browse the web. This loop lets Meta Pixel scripts send your browsing data, cookies, and identifiers back to installed Meta apps like Facebook and Instagram. The same goes for Yandex with its apps like Maps and Browser.

You certainly didn't sign up for that when you installed Instagram on your Android device. But once you logged in, the next time you visited a website that embedded Meta Pixel, the script beamed your information back to the app. All of a sudden, Meta had identifying browsing data from your web activity, not via the browsing itself, but from the "unrelated" Instagram app. Chrome, Firefox, and Edge were all affected in these findings. DuckDuckGo blocked some but not all of the domains involved, so it was "minimally affected." Brave blocks requests to the localhost unless you consent, so it successfully protected users from this tracking.

Researchers say Yandex has been doing this since February 2017 on HTTP sites, and May 2018 on HTTPS sites. Meta Pixel, on the other hand, hasn't been tracking this way for long: it only started in September 2024 over HTTP, and ended that practice in October. It moved to WebSocket and WebRTC STUN in November, and WebRTC TURN in May. Website owners apparently complained to Meta starting in September, asking why Meta Pixel communicates with the localhost. As far as researchers could find, Meta never responded.

Researchers make it clear that this type of tracking is possible on iOS, as developers can establish localhost connections and apps can "listen in" too. However, they found no evidence of this tracking on iOS devices, and hypothesize that this has to do with how iOS restricts native apps running in the background.

Meta has officially stopped this tracking
The good news is, as of June 3, researchers say they have not observed Meta Pixel communicating with the localhost. They didn't say the same for Yandex Metrica, though Yandex told Ars Technica it was "discontinuing the practice." Ars Technica also reports that Google has opened an investigation into these actions, which "blatantly violate our security and privacy principles."

However, even if Meta has stopped this tracking following the report, the damage could be widespread.
As highlighted in the report, estimates put Meta Pixel adoption anywhere from 2.4 million to 5.8 million sites. From there, researchers found that just over 17,000 Meta Pixel sites in the U.S. attempt to connect to the localhost, and over 78% of those do so without requiring any user consent, including sites like AP News, Buzzfeed, and The Verge. That's a lot of websites that could have been sending your data back to your Facebook and Instagram apps. The report features a tool that you can use to look for affected sites, but notes the list is not exhaustive, and absence doesn't mean a site is safe.

Meta sent me the following statement in response to my request for comment: “We are in discussions with Google to address a potential miscommunication regarding the application of their policies. Upon becoming aware of the concerns, we decided to pause the feature while we work with Google to resolve the issue.”
• A Coding Guide to Building Scalable Multi-Agent Communication Systems Using the Agent Communication Protocol (ACP)

In this tutorial, we implement the Agent Communication Protocol (ACP) by building a flexible, ACP-compliant messaging system in Python, leveraging Google’s Gemini API for natural language processing. Beginning with the installation and configuration of the google-generativeai library, the tutorial introduces the core abstractions: message types, performatives, and the ACPMessage data class, which standardizes inter-agent communication. By defining the ACPAgent and ACPMessageBroker classes, the guide demonstrates how to create, send, route, and process structured messages among multiple autonomous agents. Through clear code examples, users learn to implement querying, requesting actions, and broadcasting information, while maintaining conversation threads, acknowledgments, and error handling.
    import google.generativeai as genai
    import json
    import time
    import uuid
    from enum import Enum
    from typing import Dict, List, Any, Optional
    from dataclasses import dataclass, asdict

    GEMINI_API_KEY = "Use Your Gemini API Key"
genai.configure(api_key=GEMINI_API_KEY)

We import essential Python modules, ranging from JSON handling and timing to unique identifier generation and type annotations, to support a structured ACP implementation. The snippet then sets the user’s Gemini API key placeholder and configures the google-generativeai client for subsequent calls to the Gemini language model.
class ACPMessageType(Enum):
    """Standard ACP message types"""
    REQUEST = "request"
    RESPONSE = "response"
    INFORM = "inform"
    QUERY = "query"
    SUBSCRIBE = "subscribe"
    UNSUBSCRIBE = "unsubscribe"
    ERROR = "error"
    ACK = "acknowledge"
    The ACPMessageType enumeration defines the core message categories used in the Agent Communication Protocol, including requests, responses, informational broadcasts, queries, and control actions like subscription management, error signaling, and acknowledgments. By centralizing these message types, the protocol ensures consistent handling and routing of inter-agent communications throughout the system.
class ACPPerformative(Enum):
    """ACP speech acts"""
    TELL = "tell"
    ASK = "ask"
    REPLY = "reply"
    REQUEST_ACTION = "request-action"
    AGREE = "agree"
    REFUSE = "refuse"
    PROPOSE = "propose"
    ACCEPT = "accept"
    REJECT = "reject"
    The ACPPerformative enumeration captures the variety of speech acts agents can use when interacting under the ACP framework, mapping high-level intentions, such as making requests, posing questions, giving commands, or negotiating agreements, onto standardized labels. This clear taxonomy enables agents to interpret and respond to messages in contextually appropriate ways, ensuring robust and semantically rich communication.

@dataclass
class ACPMessage:
    """Agent Communication Protocol Message Structure"""
    message_id: str
    sender: str
    receiver: str
    performative: str
    content: Dict[str, Any]
    protocol: str = "ACP-1.0"
    conversation_id: Optional[str] = None
    reply_to: Optional[str] = None
    language: str = "english"
    encoding: str = "json"
    timestamp: Optional[float] = None

    def __post_init__(self):
        # Auto-populate missing metadata so every message is uniquely trackable.
        if self.timestamp is None:
            self.timestamp = time.time()
        if self.conversation_id is None:
            self.conversation_id = str(uuid.uuid4())

    def to_acp_format(self) -> str:
        """Convert to standard ACP message format"""
        acp_msg = {
            "message-id": self.message_id,
            "sender": self.sender,
            "receiver": self.receiver,
            "performative": self.performative,
            "content": self.content,
            "protocol": self.protocol,
            "conversation-id": self.conversation_id,
            "reply-to": self.reply_to,
            "language": self.language,
            "encoding": self.encoding,
            "timestamp": self.timestamp
        }
        return json.dumps(acp_msg, indent=2)

    @classmethod
    def from_acp_format(cls, acp_string: str) -> 'ACPMessage':
        """Parse ACP message from string format"""
        data = json.loads(acp_string)
        return cls(
            message_id=data["message-id"],
            sender=data["sender"],
            receiver=data["receiver"],
            performative=data["performative"],
            content=data["content"],
            conversation_id=data.get("conversation-id"),
            reply_to=data.get("reply-to"),
            language=data.get("language", "english"),
            encoding=data.get("encoding", "json"),
            timestamp=data.get("timestamp")
        )

    The ACPMessage data class encapsulates all the fields required for a structured ACP exchange, including identifiers, participants, performative, payload, and metadata such as protocol version, language, and timestamps. Its __post_init__ method auto-populates missing timestamp and conversation_id values, ensuring every message is uniquely tracked. Utility methods to_acp_format and from_acp_format handle serialization to and from the standardized JSON representation for seamless transmission and parsing.
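To see the serialization utilities in action, here is a brief usage sketch, assuming the definitions above; the agent names and payload are invented for illustration:

```python
msg = ACPMessage(
    message_id=str(uuid.uuid4()),
    sender="agent-a",
    receiver="agent-b",
    performative=ACPPerformative.ASK.value,
    content={"question": "Is the dataset ready?", "query-type": "yes-no"},
)
wire = msg.to_acp_format()            # JSON string using ACP field names
parsed = ACPMessage.from_acp_format(wire)
# The round trip preserves identity and the auto-generated conversation ID.
assert parsed.sender == "agent-a" and parsed.conversation_id == msg.conversation_id
```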
class ACPAgent:
    """Agent implementing Agent Communication Protocol"""

    def __init__(self, agent_id: str, name: str, capabilities: List[str]):
        self.agent_id = agent_id
        self.name = name
        self.capabilities = capabilities
        # Model name reconstructed; the original constructor argument was lost in extraction.
        self.model = genai.GenerativeModel("gemini-1.5-flash")
        self.message_queue: List[ACPMessage] = []
        self.subscriptions: Dict[str, List[str]] = {}
        self.conversations: Dict[str, List[ACPMessage]] = {}

    def create_message(self, receiver: str, performative: str, content: Dict[str, Any],
                       conversation_id: str = None, reply_to: str = None) -> ACPMessage:
        """Create a new ACP-compliant message"""
        return ACPMessage(
            message_id=str(uuid.uuid4()),
            sender=self.agent_id,
            receiver=receiver,
            performative=performative,
            content=content,
            conversation_id=conversation_id,
            reply_to=reply_to
        )

    def send_inform(self, receiver: str, fact: str, data: Any = None) -> ACPMessage:
        """Send an INFORM message"""
        content = {"fact": fact, "data": data}
        return self.create_message(receiver, ACPPerformative.TELL.value, content)

    def send_query(self, receiver: str, question: str, query_type: str = "general") -> ACPMessage:
        """Send a QUERY message"""
        content = {"question": question, "query-type": query_type}
        return self.create_message(receiver, ACPPerformative.ASK.value, content)

    def send_request(self, receiver: str, action: str, parameters: Dict = None) -> ACPMessage:
        """Send a REQUEST message"""
        content = {"action": action, "parameters": parameters or {}}
        return self.create_message(receiver, ACPPerformative.REQUEST_ACTION.value, content)

    def send_reply(self, original_msg: ACPMessage, response_data: Any) -> ACPMessage:
        """Send a REPLY message in response to another message"""
        content = {"response": response_data, "original-question": original_msg.content}
        return self.create_message(original_msg.sender, ACPPerformative.REPLY.value, content,
                                   conversation_id=original_msg.conversation_id,
                                   reply_to=original_msg.message_id)

    def process_message(self, message: ACPMessage) -> Optional[ACPMessage]:
        """Process incoming ACP message and generate appropriate response"""
        self.message_queue.append(message)
        conv_id = message.conversation_id
        if conv_id not in self.conversations:
            self.conversations[conv_id] = []
        self.conversations[conv_id].append(message)

        if message.performative == ACPPerformative.ASK.value:
            return self._handle_query(message)
        elif message.performative == ACPPerformative.REQUEST_ACTION.value:
            return self._handle_request(message)
        elif message.performative == ACPPerformative.TELL.value:
            return self._handle_inform(message)
        return None

    def _handle_query(self, message: ACPMessage) -> ACPMessage:
        """Handle incoming query messages"""
        question = message.content.get("question", "")
        prompt = f"As agent {self.name} with capabilities {self.capabilities}, answer: {question}"
        try:
            response = self.model.generate_content(prompt)
            answer = response.text.strip()
        except Exception:
            answer = "Unable to process query at this time"
        return self.send_reply(message, {"answer": answer})

    def _handle_request(self, message: ACPMessage) -> ACPMessage:
        """Handle incoming action requests"""
        action = message.content.get("action", "")
        parameters = message.content.get("parameters", {})
        # Capability check reconstructed: agree only if the action matches a declared capability.
        if any(capability in action.lower() for capability in self.capabilities):
            result = f"Executing {action} with parameters {parameters}"
            status = "agreed"
        else:
            result = f"Cannot perform {action} - not in my capabilities"
            status = "refused"
        return self.send_reply(message, {"status": status, "result": result})

    def _handle_inform(self, message: ACPMessage) -> Optional[ACPMessage]:
        """Handle incoming information messages"""
        fact = message.content.get("fact", "")
        print(f"[{self.name}] Learned new fact: {fact}")
        # Acknowledgment performative reconstructed; the original value was lost in extraction.
        ack_content = {"status": "received", "fact": fact}
        return self.create_message(message.sender, ACPMessageType.ACK.value, ack_content,
                                   conversation_id=message.conversation_id)

The ACPAgent class encapsulates an autonomous entity capable of sending, receiving, and processing ACP-compliant messages using Gemini’s language model. It manages its own message queue, conversation history, and subscriptions, and provides helper methods (send_inform, send_query, send_request, and send_reply) to construct correctly formatted ACPMessage instances. Incoming messages are routed through process_message, which delegates to specialized handlers for queries, action requests, and informational messages.
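As a quick usage sketch (the agent IDs, capabilities, and question are invented), one agent can process another’s query directly, without a broker; if the Gemini call fails, for example because no valid API key is configured, the built-in fallback answer is used:

```python
alice = ACPAgent("alice-01", "Alice", ["research"])
bob = ACPAgent("bob-01", "Bob", ["calculation"])

question = alice.send_query("bob-01", "What is 2 + 2?")
reply = bob.process_message(question)   # dispatches to _handle_query via Gemini
if reply is not None:
    print(reply.performative, reply.content)
```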
class ACPMessageBroker:
    """Message broker implementing ACP routing and delivery"""

    def __init__(self):
        self.agents: Dict[str, ACPAgent] = {}
        self.message_log: List[ACPMessage] = []
        self.routing_table: Dict[str, str] = {}

    def register_agent(self, agent: ACPAgent):
        """Register an agent with the message broker"""
        self.agents[agent.agent_id] = agent
        self.routing_table[agent.agent_id] = "local"
        print(f"Registered agent: {agent.name} ({agent.agent_id})")

    def route_message(self, message: ACPMessage) -> bool:
        """Route ACP message to appropriate recipient"""
        if message.receiver not in self.agents:
            print(f"Delivery failed: unknown receiver {message.receiver}")
            return False

        # Log the delivery (print formatting reconstructed).
        print(f"{message.sender} -> {message.receiver} [{message.performative}]: {message.content}")

        receiver_agent = self.agents[message.receiver]
        response = receiver_agent.process_message(message)
        self.message_log.append(message)

        if response:
            print(f"{response.sender} -> {response.receiver} [{response.performative}]: {response.content}")
            if response.receiver in self.agents:
                self.agents[response.receiver].process_message(response)
                self.message_log.append(response)
        return True

    def broadcast_message(self, message: ACPMessage, recipients: List[str]):
        """Broadcast message to multiple recipients"""
        for recipient in recipients:
            msg_copy = ACPMessage(
                message_id=str(uuid.uuid4()),
                sender=message.sender,
                receiver=recipient,
                performative=message.performative,
                content=message.content.copy(),
                conversation_id=message.conversation_id
            )
            self.route_message(msg_copy)

The ACPMessageBroker serves as the central router for ACP messages, maintaining a registry of agents and a message log. It provides methods to register agents, to deliver individual messages via route_message (which handles lookup, logging, and response chaining), and to send the same message to multiple recipients with broadcast_message.
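Building on the previous sketch, a minimal broker example (IDs again invented) shows registration, routing, and the automatic reply chaining described above:

```python
broker = ACPMessageBroker()
broker.register_agent(alice)    # agents from the earlier snippet
broker.register_agent(bob)

msg = alice.send_request("bob-01", "calculation", {"expression": "3 * 7"})
broker.route_message(msg)       # delivers, logs, and routes Bob's reply back
print(len(broker.message_log))  # 2: the request plus the reply
```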
def demonstrate_acp():
    """Comprehensive demonstration of Agent Communication Protocol"""
    # Agent IDs, names, capabilities, and message texts below are illustrative;
    # the originals were lost in extraction.
    print("AGENT COMMUNICATION PROTOCOL DEMONSTRATION")

    broker = ACPMessageBroker()
    researcher = ACPAgent("researcher-01", "Dr. Research", ["research", "analysis"])
    assistant = ACPAgent("assistant-01", "AI Assistant", ["information", "coordination"])
    calculator = ACPAgent("calculator-01", "MathBot", ["calculation", "computation"])

    broker.register_agent(researcher)
    broker.register_agent(assistant)
    broker.register_agent(calculator)

    for agent_id, agent in broker.agents.items():
        print(f"  {agent_id}: {', '.join(agent.capabilities)}")

    # Scenario 1: query for information
    query_msg = assistant.send_query("researcher-01", "What are the latest trends in AI research?")
    broker.route_message(query_msg)

    # Scenario 2: request a computation (the original expression ended in "+ 10")
    calc_request = researcher.send_request("calculator-01", "calculation", {"expression": "25 * 4 + 10"})
    broker.route_message(calc_request)

    # Scenario 3: share information
    info_msg = researcher.send_inform("assistant-01", "New dataset released for ML research")
    broker.route_message(info_msg)

    # Summary statistics (reconstructed)
    print(f"Total messages routed: {len(broker.message_log)}")
    print(f"Registered agents: {len(broker.agents)}")

    # Show a sample message in standard ACP wire format
    sample_msg = assistant.send_query("researcher-01", "Sample question")
    print(sample_msg.to_acp_format())

The demonstrate_acp function orchestrates a hands-on walkthrough of the entire ACP framework: it initializes a broker and three distinct agents, registers them, and illustrates three key interaction scenarios: querying for information, requesting a computation, and sharing an update. After routing each message and handling responses, it prints summary statistics on the message flow and showcases a formatted ACP message, providing users with a clear, end-to-end example of how agents communicate under the protocol.
def setup_guide():
    print("""
    🔧 ACP PROTOCOL FEATURES:

    • Standardized message format with required fields
    • Speech act performatives (tell, ask, reply, request-action, ...)
    • Conversation tracking and message threading
    • Error handling and acknowledgments
    • Message routing and delivery confirmation

    📝 EXTEND THE PROTOCOL:
    ```python
    # Create custom agent
    my_agent = ACPAgent("my-agent-01", "MyAgent", ["my-capability"])
    broker.register_agent(my_agent)

    # Send custom message
    msg = my_agent.send_query("researcher-01", "Your question here")
    broker.route_message(msg)
    ```
    """)


if __name__ == "__main__":
    setup_guide()
    demonstrate_acp()

Finally, the setup_guide function provides a quick-start reference for running the ACP demo in Google Colab, outlining how to obtain and configure your Gemini API key and how to invoke the demonstrate_acp routine. It also summarizes key protocol features, such as standardized message formats, performatives, and message routing, and it includes a concise snippet illustrating how to register custom agents and send tailored messages.
In conclusion, this tutorial implements an ACP-based multi-agent system capable of research, computation, and collaboration tasks. The provided sample scenarios illustrate common use cases: information queries, computational requests, and fact sharing. The broker ensures reliable message delivery and logging. Readers are encouraged to extend the framework by adding new agent capabilities, integrating domain-specific actions, or incorporating more sophisticated subscription and notification mechanisms.

Download the Notebook on GitHub. All credit for this research goes to the researchers of this project.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence Media Platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among audiences.
    WWW.MARKTECHPOST.COM
A Coding Guide to Building a Scalable Multi-Agent Communication System Using the Agent Communication Protocol (ACP)
In this tutorial, we implement the Agent Communication Protocol (ACP) by building a flexible, ACP-compliant messaging system in Python, leveraging Google’s Gemini API for natural language processing. Beginning with the installation and configuration of the google-generativeai library, the tutorial introduces core abstractions, message types, performatives, and the ACPMessage data class, which standardizes inter-agent communication. By defining ACPAgent and ACPMessageBroker classes, the guide demonstrates how to create, send, route, and process structured messages among multiple autonomous agents. Through clear code examples, users learn to implement querying, requesting actions, and broadcasting information, while maintaining conversation threads, acknowledgments, and error handling.

```python
import google.generativeai as genai
import json
import time
import uuid
from enum import Enum
from typing import Dict, List, Any, Optional
from dataclasses import dataclass, asdict

GEMINI_API_KEY = "Use Your Gemini API Key"
genai.configure(api_key=GEMINI_API_KEY)
```

We import the essential Python modules, ranging from JSON handling and timing to unique-identifier generation and type annotations, to support a structured ACP implementation. The snippet then sets a placeholder for the user’s Gemini API key and configures the google-generativeai client for subsequent calls to the Gemini language model.

```python
class ACPMessageType(Enum):
    """Standard ACP message types"""
    REQUEST = "request"
    RESPONSE = "response"
    INFORM = "inform"
    QUERY = "query"
    SUBSCRIBE = "subscribe"
    UNSUBSCRIBE = "unsubscribe"
    ERROR = "error"
    ACK = "acknowledge"
```

The ACPMessageType enumeration defines the core message categories used in the Agent Communication Protocol, including requests, responses, informational broadcasts, queries, and control actions like subscription management, error signaling, and acknowledgments. By centralizing these message types, the protocol ensures consistent handling and routing of inter-agent communications throughout the system.

```python
class ACPPerformative(Enum):
    """ACP speech acts (performatives)"""
    TELL = "tell"
    ASK = "ask"
    REPLY = "reply"
    REQUEST_ACTION = "request-action"
    AGREE = "agree"
    REFUSE = "refuse"
    PROPOSE = "propose"
    ACCEPT = "accept"
    REJECT = "reject"
```

The ACPPerformative enumeration captures the variety of speech acts agents can use when interacting under the ACP framework, mapping high-level intentions, such as making requests, posing questions, giving commands, or negotiating agreements, onto standardized labels. This clear taxonomy enables agents to interpret and respond to messages in contextually appropriate ways, ensuring robust and semantically rich communication.
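Because both taxonomies subclass Enum, raw strings arriving off the wire can be validated by value lookup. Here is a minimal sketch (our own illustration; parse_performative is not defined in the tutorial):

```python
def parse_performative(raw: str) -> ACPPerformative:
    """Map a wire-format string onto a known speech act, failing loudly otherwise."""
    try:
        return ACPPerformative(raw)   # Enum lookup by value, e.g. "ask" -> ASK
    except ValueError:
        raise ValueError(f"Unknown ACP performative: {raw!r}")

assert parse_performative("ask") is ACPPerformative.ASK
```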
```python
@dataclass
class ACPMessage:
    """Agent Communication Protocol Message Structure"""
    message_id: str
    sender: str
    receiver: str
    performative: str
    content: Dict[str, Any]
    protocol: str = "ACP-1.0"
    conversation_id: str = None
    reply_to: str = None
    language: str = "english"
    encoding: str = "json"
    timestamp: float = None

    def __post_init__(self):
        if self.timestamp is None:
            self.timestamp = time.time()
        if self.conversation_id is None:
            self.conversation_id = str(uuid.uuid4())

    def to_acp_format(self) -> str:
        """Convert to standard ACP message format"""
        acp_msg = {
            "message-id": self.message_id,
            "sender": self.sender,
            "receiver": self.receiver,
            "performative": self.performative,
            "content": self.content,
            "protocol": self.protocol,
            "conversation-id": self.conversation_id,
            "reply-to": self.reply_to,
            "language": self.language,
            "encoding": self.encoding,
            "timestamp": self.timestamp
        }
        return json.dumps(acp_msg, indent=2)

    @classmethod
    def from_acp_format(cls, acp_string: str) -> 'ACPMessage':
        """Parse ACP message from string format"""
        data = json.loads(acp_string)
        return cls(
            message_id=data["message-id"],
            sender=data["sender"],
            receiver=data["receiver"],
            performative=data["performative"],
            content=data["content"],
            protocol=data.get("protocol", "ACP-1.0"),
            conversation_id=data.get("conversation-id"),
            reply_to=data.get("reply-to"),
            language=data.get("language", "english"),
            encoding=data.get("encoding", "json"),
            timestamp=data.get("timestamp", time.time())
        )
```

The ACPMessage data class encapsulates all the fields required for a structured ACP exchange, including identifiers, participants, performative, payload, and metadata such as protocol version, language, and timestamps. Its __post_init__ method auto-populates missing timestamp and conversation_id values, ensuring every message is uniquely tracked. Utility methods to_acp_format and from_acp_format handle serialization to and from the standardized JSON representation for seamless transmission and parsing.
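To make the serialization contract concrete, here is a quick round-trip check (our own example; the field values are invented):

```python
msg = ACPMessage(
    message_id=str(uuid.uuid4()),
    sender="agent-001",
    receiver="agent-002",
    performative=ACPPerformative.ASK.value,
    content={"question": "Is the broker online?", "query-type": "yes-no"},
)

wire = msg.to_acp_format()            # JSON string ready for transport
parsed = ACPMessage.from_acp_format(wire)

assert parsed.message_id == msg.message_id
assert parsed.conversation_id == msg.conversation_id  # auto-filled in __post_init__
```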
```python
class ACPAgent:
    """Agent implementing Agent Communication Protocol"""

    def __init__(self, agent_id: str, name: str, capabilities: List[str]):
        self.agent_id = agent_id
        self.name = name
        self.capabilities = capabilities
        self.model = genai.GenerativeModel("gemini-1.5-flash")
        self.message_queue: List[ACPMessage] = []
        self.subscriptions: Dict[str, List[str]] = {}
        self.conversations: Dict[str, List[ACPMessage]] = {}

    def create_message(self, receiver: str, performative: str,
                       content: Dict[str, Any], conversation_id: str = None,
                       reply_to: str = None) -> ACPMessage:
        """Create a new ACP-compliant message"""
        return ACPMessage(
            message_id=str(uuid.uuid4()),
            sender=self.agent_id,
            receiver=receiver,
            performative=performative,
            content=content,
            conversation_id=conversation_id,
            reply_to=reply_to
        )

    def send_inform(self, receiver: str, fact: str, data: Any = None) -> ACPMessage:
        """Send an INFORM message (telling someone a fact)"""
        content = {"fact": fact, "data": data}
        return self.create_message(receiver, ACPPerformative.TELL.value, content)

    def send_query(self, receiver: str, question: str, query_type: str = "yes-no") -> ACPMessage:
        """Send a QUERY message (asking for information)"""
        content = {"question": question, "query-type": query_type}
        return self.create_message(receiver, ACPPerformative.ASK.value, content)

    def send_request(self, receiver: str, action: str, parameters: Dict = None) -> ACPMessage:
        """Send a REQUEST message (asking someone to perform an action)"""
        content = {"action": action, "parameters": parameters or {}}
        return self.create_message(receiver, ACPPerformative.REQUEST_ACTION.value, content)

    def send_reply(self, original_msg: ACPMessage, response_data: Any) -> ACPMessage:
        """Send a REPLY message in response to another message"""
        content = {"response": response_data, "original-question": original_msg.content}
        return self.create_message(
            original_msg.sender,
            ACPPerformative.REPLY.value,
            content,
            conversation_id=original_msg.conversation_id,
            reply_to=original_msg.message_id
        )

    def process_message(self, message: ACPMessage) -> Optional[ACPMessage]:
        """Process incoming ACP message and generate appropriate response"""
        self.message_queue.append(message)
        conv_id = message.conversation_id
        if conv_id not in self.conversations:
            self.conversations[conv_id] = []
        self.conversations[conv_id].append(message)

        if message.performative == ACPPerformative.ASK.value:
            return self._handle_query(message)
        elif message.performative == ACPPerformative.REQUEST_ACTION.value:
            return self._handle_request(message)
        elif message.performative == ACPPerformative.TELL.value:
            return self._handle_inform(message)
        return None

    def _handle_query(self, message: ACPMessage) -> ACPMessage:
        """Handle incoming query messages"""
        question = message.content.get("question", "")
        prompt = f"As agent {self.name} with capabilities {self.capabilities}, answer: {question}"
        try:
            response = self.model.generate_content(prompt)
            answer = response.text.strip()
        except Exception:
            answer = "Unable to process query at this time"
        return self.send_reply(message, {"answer": answer, "confidence": 0.8})

    def _handle_request(self, message: ACPMessage) -> ACPMessage:
        """Handle incoming action requests"""
        action = message.content.get("action", "")
        parameters = message.content.get("parameters", {})
        if any(capability in action.lower() for capability in self.capabilities):
            result = f"Executing {action} with parameters {parameters}"
            status = "agreed"
        else:
            result = f"Cannot perform {action} - not in my capabilities"
            status = "refused"
        return self.send_reply(message, {"status": status, "result": result})

    def _handle_inform(self, message: ACPMessage) -> Optional[ACPMessage]:
        """Handle incoming information messages"""
        fact = message.content.get("fact", "")
        print(f"[{self.name}] Received information: {fact}")
        ack_content = {"status": "received", "fact": fact}
        return self.create_message(message.sender, "acknowledge", ack_content,
                                   conversation_id=message.conversation_id)
```

The ACPAgent class encapsulates an autonomous entity capable of sending, receiving, and processing ACP-compliant messages using Gemini’s language model. It manages its own message queue, conversation history, and subscriptions, and provides helper methods (send_inform, send_query, send_request, send_reply) to construct correctly formatted ACPMessage instances. Incoming messages are routed through process_message, which delegates to specialized handlers for queries, action requests, and informational messages.
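As a quick local sanity check (our own snippet, not part of the tutorial), you can exercise process_message without a live API key by swapping the agent’s model for a stub; constructing GenerativeModel itself makes no network call, so this runs offline:

```python
from types import SimpleNamespace

class _StubModel:
    """Offline stand-in for genai.GenerativeModel (a testing convenience, not from the tutorial)."""
    def generate_content(self, prompt):
        return SimpleNamespace(text=f"[stub] canned answer for: {prompt[:50]}")

bot = ACPAgent("agent-042", "DemoBot", ["demo"])
bot.model = _StubModel()   # avoid real Gemini calls entirely

ping = ACPMessage(
    message_id=str(uuid.uuid4()), sender="tester", receiver="agent-042",
    performative=ACPPerformative.ASK.value,
    content={"question": "ping?", "query-type": "yes-no"},
)
reply = bot.process_message(ping)   # dispatched to _handle_query
print(reply.performative, reply.content["response"]["answer"])
```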
status, "result": result}) def _handle_inform(self, message: ACPMessage) -> Optional[ACPMessage]: """Handle incoming information messages""" fact = message.content.get("fact", "") print(f"[{self.name}] Received information: {fact}") ack_content = {"status": "received", "fact": fact} return self.create_message(message.sender, "acknowledge", ack_content, conversation_id=message.conversation_id) The ACPAgent class encapsulates an autonomous entity capable of sending, receiving, and processing ACP-compliant messages using Gemini’s language model. It manages its own message queue, conversation history, and subscriptions, and provides helper methods (send_inform, send_query, send_request, send_reply) to construct correctly formatted ACPMessage instances. Incoming messages are routed through process_message, which delegates to specialized handlers for queries, action requests, and informational messages. class ACPMessageBroker: """Message broker implementing ACP routing and delivery""" def __init__(self): self.agents: Dict[str, ACPAgent] = {} self.message_log: List[ACPMessage] = [] self.routing_table: Dict[str, str] = {} def register_agent(self, agent: ACPAgent): """Register an agent with the message broker""" self.agents[agent.agent_id] = agent self.routing_table[agent.agent_id] = "local" print(f"✓ Registered agent: {agent.name} ({agent.agent_id})") def route_message(self, message: ACPMessage) -> bool: """Route ACP message to appropriate recipient""" if message.receiver not in self.agents: print(f"✗ Receiver {message.receiver} not found") return False print(f"\n📨 ACP MESSAGE ROUTING:") print(f"From: {message.sender} → To: {message.receiver}") print(f"Performative: {message.performative}") print(f"Content: {json.dumps(message.content, indent=2)}") receiver_agent = self.agents[message.receiver] response = receiver_agent.process_message(message) self.message_log.append(message) if response: print(f"\n📤 GENERATED RESPONSE:") print(f"From: {response.sender} → To: {response.receiver}") print(f"Content: {json.dumps(response.content, indent=2)}") if response.receiver in self.agents: self.agents[response.receiver].process_message(response) self.message_log.append(response) return True def broadcast_message(self, message: ACPMessage, recipients: List[str]): """Broadcast message to multiple recipients""" for recipient in recipients: msg_copy = ACPMessage( message_id=str(uuid.uuid4()), sender=message.sender, receiver=recipient, performative=message.performative, content=message.content.copy(), conversation_id=message.conversation_id ) self.route_message(msg_copy) The ACPMessageBroker serves as the central router for ACP messages, maintaining a registry of agents and a message log. It provides methods to register agents, deliver individual messages via route_message, which handles lookup, logging, and response chaining, and to send the same message to multiple recipients with broadcast_message. def demonstrate_acp(): """Comprehensive demonstration of Agent Communication Protocol""" print("🤖 AGENT COMMUNICATION PROTOCOL (ACP) DEMONSTRATION") print("=" * 60) broker = ACPMessageBroker() researcher = ACPAgent("agent-001", "Dr. 
Research", ["analysis", "research", "data-processing"]) assistant = ACPAgent("agent-002", "AI Assistant", ["information", "scheduling", "communication"]) calculator = ACPAgent("agent-003", "MathBot", ["calculation", "mathematics", "computation"]) broker.register_agent(researcher) broker.register_agent(assistant) broker.register_agent(calculator) print(f"\n📋 REGISTERED AGENTS:") for agent_id, agent in broker.agents.items(): print(f" • {agent.name} ({agent_id}): {', '.join(agent.capabilities)}") print(f"\n🔬 SCENARIO 1: Information Query (ASK performative)") query_msg = assistant.send_query("agent-001", "What are the key factors in AI research?") broker.route_message(query_msg) print(f"\n🔢 SCENARIO 2: Action Request (REQUEST-ACTION performative)") calc_request = researcher.send_request("agent-003", "calculate", {"expression": "sqrt(144) + 10"}) broker.route_message(calc_request) print(f"\n📢 SCENARIO 3: Information Sharing (TELL performative)") info_msg = researcher.send_inform("agent-002", "New research paper published on quantum computing") broker.route_message(info_msg) print(f"\n📊 PROTOCOL STATISTICS:") print(f" • Total messages processed: {len(broker.message_log)}") print(f" • Active conversations: {len(set(msg.conversation_id for msg in broker.message_log))}") print(f" • Message types used: {len(set(msg.performative for msg in broker.message_log))}") print(f"\n📋 SAMPLE ACP MESSAGE FORMAT:") sample_msg = assistant.send_query("agent-001", "Sample question for format demonstration") print(sample_msg.to_acp_format()) The demonstrate_acp function orchestrates a hands-on walkthrough of the entire ACP framework: it initializes a broker and three distinct agents (Researcher, AI Assistant, and MathBot), registers them, and illustrates three key interaction scenarios, querying for information, requesting a computation, and sharing an update. After routing each message and handling responses, it prints summary statistics on the message flow. It showcases a formatted ACP message, providing users with a clear, end-to-end example of how agents communicate under the protocol. def setup_guide(): print(""" 🚀 GOOGLE COLAB SETUP GUIDE: 1. Get Gemini API Key: https://makersuite.google.com/app/apikey 2. Replace: GEMINI_API_KEY = "YOUR_ACTUAL_API_KEY" 3. Run: demonstrate_acp() 🔧 ACP PROTOCOL FEATURES: • Standardized message format with required fields • Speech act performatives (TELL, ASK, REQUEST-ACTION, etc.) • Conversation tracking and message threading • Error handling and acknowledgments • Message routing and delivery confirmation 📝 EXTEND THE PROTOCOL: ```python # Create custom agent my_agent = ACPAgent("my-001", "CustomBot", ["custom-capability"]) broker.register_agent(my_agent) # Send custom message msg = my_agent.send_query("agent-001", "Your question here") broker.route_message(msg) ``` """) if __name__ == "__main__": setup_guide() demonstrate_acp() Finally, the setup_guide function provides a quick-start reference for running the ACP demo in Google Colab, outlining how to obtain and configure your Gemini API key and invoke the demonstrate_acp routine. It also summarizes key protocol features, such as standardized message formats, performatives, and message routing. It provides a concise code snippet illustrating how to register custom agents and send tailored messages. In conclusion, this tutorial implements ACP-based multi-agent systems capable of research, computation, and collaboration tasks. 
Download the Notebook on GitHub. All credit for this research goes to the researchers of this project.
  • North Korean Konni APT Targets Ukraine with Malware to track Russian Invasion Progress






    May 13, 2025Ravie LakshmananCyber Espionage / Malware

    The North Korea-linked threat actor known as Konni APT has been attributed to a phishing campaign targeting government entities in Ukraine, indicating the threat actor's targeting beyond Russia.
    Enterprise security firm Proofpoint said the end goal of the campaign is to collect intelligence on the "trajectory of the Russian invasion."
    "The group's interest in Ukraine follows historical targeting of government entities in Russia for strategic intelligence gathering purposes," security researchers Greg Lesnewich, Saher Naumaan, and Mark Kelly said in a report shared with The Hacker News.
    Konni APT, also known as Opal Sleet, Osmium, TA406, and Vedalia, is a cyber espionage group that has a history of targeting entities in South Korea, the United States, and Russia.
It has been operational since at least 2014.
    Attack chains mounted by the threat actor often involve the use of phishing emails to distribute malware called Konni RAT (aka UpDog) and redirect recipients to credential harvesting pages.
    Proofpoint, in an analysis of the threat group published in November 2021, assessed TA406 to be one of several actors that make up the activity publicly tracked as Kimsuky, Thallium, and Konni Group.
The latest set of attacks documented by the cybersecurity company entails the use of phishing emails that impersonate a fictitious senior fellow at the Royal Institute of Strategic Studies, a think tank that is itself non-existent.
    The email messages contain a link to a password-protected RAR archive that's hosted on the MEGA cloud service.
    Opening the RAR archive using a password mentioned in the message body launches an infection sequence that's engineered to conduct extensive reconnaissance of the compromised machines.

    Specifically, present within the RAR archive is a CHM file that displays decoy content related to former Ukrainian military leader Valeriy Zaluzhnyi.
    Should the victim click anywhere on the page, a PowerShell command embedded within the HTML is executed to reach out to an external server and download a next-stage PowerShell payload.
The newly launched PowerShell script can execute various commands to gather information about the system, encode it with Base64, and send it to the same server.
    "The actor sent multiple phishing emails on consecutive days when the target did not click the link, asking the target if they had received the prior emails and if they would download the files," the researchers said.
    Proofpoint said it also observed an HTML file being directly distributed as an attachment to the phishing messages.
    In this variation of the attack, the victim is instructed to click on an embedded link in the HTML file, resulting in the download of a ZIP archive that includes a benign PDF and a Windows shortcut (LNK) file.
When the LNK is run, it executes Base64-encoded PowerShell to drop a JScript Encoded (JSE) file called "Themes.jse" using a Visual Basic Script.
    The JSE malware, in turn, contacts an attacker-controlled URL and runs the response from the server via PowerShell.
    The exact nature of the payload is currently not known.
    Furthermore, TA406 has been spotted attempting to harvest credentials by sending fake Microsoft security alert messages to Ukrainian government entities from ProtonMail accounts, warning them of suspicious sign-in activity from IP addresses located in the United States and urging them to verify the login by visiting a link.
    While the credential harvesting page has not been recovered, the same compromised domain is said to have been used in the past to collect Naver login information.
    "These credential harvesting campaigns took place prior to the attempted malware deployments and targeted some of the same users later targeted with the HTML delivery campaign," Proofpoint said.
    "TA406 is very likely gathering intelligence to help North Korean leadership determine the current risk to its forces already in the theatre, as well as the likelihood that Russia will request more troops or armaments."
    "Unlike Russian groups who have likely been tasked with gathering tactical battlefield information and targeting of Ukrainian forces in situ, TA406 has typically focused on more strategic, political intelligence collection efforts."

    The disclosure comes as the Konni group has been linked to a sophisticated multi-stage malware campaign targeting entities in South Korea with ZIP archives containing LNK files, which run PowerShell scripts to extract a CAB archive and ultimately deliver batch script malware capable of collecting sensitive data and exfiltrating it to a remote server.
    The findings also dovetail with spear-phishing campaigns orchestrated by Kimsuky to target government agencies in South Korea by delivering a stealer malware capable of establishing command-and-control (C2 or C&C) communications and exfiltrating files, web browser data, and cryptocurrency wallet information.

    According to South Korean cybersecurity company AhnLab, Kimsuky has also been observed propagating PEBBLEDASH as part of a multi-stage infection sequence initiated via spear-phishing.
The trojan was attributed by the U.S. government to the Lazarus Group in May 2020.
    "While the Kimsuky group uses various types of malware, in the case of PEBBLEDASH, they execute malware based on an LNK file by spear-phishing in the initial access stage to launch their attacks," it said.

    "They then utilize a PowerShell script to create a task scheduler and register it for automatic execution.
    Through communication with a Dropbox and TCP socket-based C&C server, the group installs multiple malware and tools including PEBBLEDASH."
    Konni and Kimsuky are far from the only North Korean threat actors to focus on Seoul.
    As recently as March 2025, South Korean entities have been found to be at the receiving end of another campaign carried out by APT37, which is also referred to as ScarCruft.
    Dubbed Operation ToyBox Story, the spear-phishing attacks singled out several activists focused on North Korea, per the Genians Security Center (GSC).
The first observed spear-phishing attack occurred on March 8, 2025.
    "The email contained a Dropbox link leading to a compressed archive that included a malicious shortcut (LNK) file," the South Korean company said.
    "When extracted and executed, the LNK file activated additional malware containing the keyword 'toy.'"

The LNK files are configured to launch a decoy HWP file and run PowerShell commands, leading to the execution of files named toy03.bat, toy02.bat, and toy01.bat (in that order), the last of which contains shellcode to launch RokRAT, a staple malware associated with APT37.
RokRAT is equipped to collect system information, capture screenshots, and use three cloud services, pCloud, Yandex, and Dropbox, for C2.
    "The threat actors exploited legitimate cloud services as C2 infrastructure and continued to modify shortcut (LNK) files while focusing on fileless attack techniques to evade detection by antivirus software installed on target endpoints," Genians said.

Source: https://thehackernews.com/2025/05/north-korean-konni-apt-targets-ukraine.html
