• The Word is Out: Danish Ministry Drops Microsoft, Goes Open Source

    Key Takeaways

    Denmark’s Ministry of Digitalization will leave the Microsoft ecosystem in favor of Linux, LibreOffice, and other open-source software.
    Minister Caroline Stage Olsen says half of the Ministry’s employees will make the switch by summer, and the rest by fall.
    Costs, politics, and security are driving the move, which follows similar decisions by Copenhagen and Aarhus earlier this month.

    Denmark’s Ministry of Digitalization has recently announced that it will leave the Microsoft ecosystem in favor of Linux and other open-source software.
    Minister Caroline Stage Olsen revealed this in an interview with Politiken, the country’s leading newspaper. According to Olsen, the Ministry plans to switch half of its employees to Linux and LibreOffice by summer, and the rest by fall.
    The announcement comes after Denmark’s largest cities – Copenhagen and Aarhus – made similar moves earlier this month.
    Why the Danish Ministry of Digitalization Switched to Open-Source Software
    The three main reasons Denmark is moving away from Microsoft are costs, politics, and security.
    In the case of Aarhus, the city slashed its annual costs from 800K kroner to just 225K kroner by replacing Microsoft with a German service provider.
    Cost is also a pain point for Copenhagen, which saw its Microsoft spending balloon from 313M kroner in 2018 to 538M kroner in 2023.
    The decision is also part of a broader move to increase Denmark’s digital sovereignty. In her LinkedIn post, Olsen explained that the strategy is not about isolation or digital nationalism, adding that Denmark should not turn its back completely on global tech companies like Microsoft.

    Instead, it’s about avoiding overdependence on these companies, which could prevent Denmark from acting freely.
    Then there’s politics. Since his reelection earlier this year, US President Donald Trump has repeatedly threatened to take over Greenland, an autonomous territory of Denmark. 
    In May, the Danish Foreign Minister Lars Løkke Rasmussen summoned the US ambassador regarding news that US spy agencies have been told to focus on the territory.
    If the relationship between the two countries continues to erode, Trump could order Microsoft and other US tech companies to cut Denmark off from their services. After all, Microsoft and Facebook’s parent company Meta have close ties to the US president, each having contributed $1M to his inauguration in January.
    Denmark Isn’t Alone: Other EU Countries Are Making Similar Moves
    Denmark is only one of a growing number of European Union (EU) countries taking measures to become more digitally independent.
    Germany’s Federal Digital Minister Karsten Wildberger emphasized the need to be more independent of global tech companies during the re:publica internet conference in May. He added that IT companies in the EU have the opportunity to create tech that is based on the region’s values.

    Meanwhile, Bert Hubert, a technical advisor to the Dutch Electoral Council, wrote in February that ‘it is no longer safe to move our governments and societies to US clouds.’ He said that America is no longer a ‘reliable partner,’ making it risky to have the data of European governments and businesses at the mercy of US-based cloud providers.
    Earlier this month, the chief prosecutor of the International Criminal Court (ICC), Karim Khan, was cut off from his Microsoft-based email account, sparking uproar across the region.
    Speculation quickly arose that the incident was linked to sanctions previously imposed on the ICC by the Trump administration, an assertion Microsoft has denied.
    Weaning the EU Away from US Tech is Possible, But Challenges Lie Ahead
    Change like this doesn’t happen overnight. Just finding, let alone developing, reliable alternatives to tools that have been part of daily workflows for decades, is a massive undertaking.
    It will also take time for users to adapt to these new tools, especially when transitioning to an entirely new ecosystem. In Aarhus, for example, municipal staff initially viewed the shift to open source as a step down from the familiarity and functionality of Microsoft products.
    Still, these hurdles are likely temporary. Momentum is building, with growing calls for digital independence from leaders like Ministers Olsen and Wildberger.
    Initiatives such as the Digital Europe Programme, which seeks to reduce reliance on foreign systems and solutions, further accelerate this push. As a result, the EU’s transition could arrive sooner rather than later.

    As technology continues to evolve—from the return of 'dumbphones' to faster and sleeker computers—seasoned tech journalist Cedric Solidon continues to dedicate himself to writing stories that inform, empower, and connect with readers across all levels of digital literacy.
    With 20 years of professional writing experience, this University of the Philippines Journalism graduate has carved out a niche as a trusted voice in tech media. Whether he's breaking down the latest advancements in cybersecurity or explaining how silicon-carbon batteries can extend your phone’s battery life, his writing remains rooted in clarity, curiosity, and utility.
    Long before he was writing for Techreport, HP, Citrix, SAP, Globe Telecom, CyberGhost VPN, and ExpressVPN, Cedric's love for technology began at home courtesy of a Nintendo Family Computer and a stack of tech magazines.
    Growing up, his days were often filled with sessions of Contra, Bomberman, Red Alert 2, and the criminally underrated Crusader: No Regret. But gaming wasn't his only gateway to tech. 
    He devoured every T3, PCMag, and PC Gamer issue he could get his hands on, often reading them cover to cover. It wasn’t long before he explored the early web in IRC chatrooms, online forums, and fledgling tech blogs, soaking in every byte of knowledge from the late '90s and early 2000s internet boom.
    That fascination with tech didn’t just stick. It evolved into a full-blown calling.
    After graduating with a degree in Journalism, he began his writing career at the dawn of Web 2.0. What started with small editorial roles and freelance gigs soon grew into a full-fledged career.
    He has since collaborated with global tech leaders, lending his voice to content that bridges technical expertise with everyday usability. He’s also written annual reports for Globe Telecom and consumer-friendly guides for VPN companies like CyberGhost and ExpressVPN, empowering readers to understand the importance of digital privacy.
    His versatility spans not just tech journalism but also technical writing. He once worked with a local tech company developing web and mobile apps for logistics firms, crafting documentation and communication materials that brought together user-friendliness with deep technical understanding. That experience sharpened his ability to break down dense, often jargon-heavy material into content that speaks clearly to both developers and decision-makers.
    At the heart of his work lies a simple belief: technology should feel empowering, not intimidating. Even if the likes of smartphones and AI are now commonplace, he understands that there's still a knowledge gap, especially when it comes to hardware or the real-world benefits of new tools. His writing hopes to help close that gap.
    Cedric’s writing style reflects that mission. It’s friendly without being fluffy and informative without being overwhelming. Whether writing for seasoned IT professionals or casual readers curious about the latest gadgets, he focuses on how a piece of technology can improve our lives, boost our productivity, or make our work more efficient. That human-first approach makes his content feel more like a conversation than a technical manual.
    As his writing career progresses, his passion for tech journalism remains as strong as ever. With the growing need for accessible, responsible tech communication, he sees his role not just as a journalist but as a guide who helps readers navigate a digital world that’s often as confusing as it is exciting.
    From reviewing the latest devices to unpacking global tech trends, Cedric isn’t just reporting on the future; he’s helping to write it.

    View all articles by Cedric Solidon

    Our editorial process

    The Tech Report editorial policy is centered on providing helpful, accurate content that offers real value to our readers. We only work with experienced writers who have specific knowledge in the topics they cover, including the latest developments in technology, online privacy, cryptocurrencies, software, and more. Our editorial policy ensures that each topic is researched and curated by our in-house editors. We maintain rigorous journalistic standards, and every article is 100% written by real authors.
  • PlayStation finally removes regional restrictions from Helldivers 2 and more after infuriating gamers everywhere 

    You can trust VideoGamer. Our team of gaming experts spend hours testing and reviewing the latest games, to ensure you're reading the most comprehensive guide possible. Rest assured, all imagery and advice is unique and original. Check out how we test and review games here

    PlayStation’s annoying regional restrictions on PC games have proved infuriating for gamers all across the world. While most gamers were unaffected, those in some regions found themselves unable to play games like Helldivers 2 and other titles due to the restrictions. 
    Thankfully, after months of complaints, PlayStation finally appears to be removing the regional restrictions from its PC releases in a large number of countries. However, not every title has been updated at the time of writing.
    Regional restrictions removed from Helldivers 2 and more 
    As spotted by players online (thanks, Wario64), a number of Steam database updates have changed the regional restrictions of PlayStation games on PC.
    Games such as Helldivers 2, Spider-Man 2, God of War: Ragnarok and The Last of Us: Part 2 are now available to purchase in a large number of additional countries. The change appears to be rolling out to PlayStation PC releases at the time of writing.
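    Availability changes like these can be seen in storefront data: Steam’s unofficial appdetails endpoint (store.steampowered.com/api/appdetails) takes a `cc` country-code parameter and returns `"success": false` when an app isn’t offered in that region. The sketch below parses hypothetical responses of that shape for Helldivers 2 (app ID 553850); the sample payloads are illustrative, not real API output.

```python
import json

# Hypothetical sample responses keyed by country code, shaped like the
# unofficial Steam appdetails endpoint's JSON. "success" is false when
# the app is not offered in that region. These payloads are made up
# for illustration.
SAMPLE = {
    "us": '{"553850": {"success": true, "data": {"name": "HELLDIVERS 2"}}}',
    "ee": '{"553850": {"success": false}}',
}

def available_regions(responses, appid="553850"):
    """Return the country codes where the app appears purchasable."""
    regions = []
    for cc, raw in responses.items():
        payload = json.loads(raw)
        # A missing appid or a false "success" flag means unavailable.
        if payload.get(appid, {}).get("success"):
            regions.append(cc)
    return sorted(regions)

print(available_regions(SAMPLE))  # → ['us']
```

    Running the same check before and after a restriction is lifted would show previously blocked country codes flipping to available.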
    It’s been a long time coming, and the introduction of the restrictions last year was a huge controversy for the company. Since the restrictions were put in place, players who previously purchased Helldivers 2 were unable to play the title online without a VPN. 
    Additionally, Ghost of Tsushima could be played in a number of countries, but its Legends multiplayer mode was inaccessible due to the regional issues. 
    Honestly, PlayStation should have removed these restrictions far sooner. However, the phrase “better late than never” exists for a reason, and we’re happy that more gamers around the world are no longer punished for simply being born in a different country.
    For more PlayStation news, read the company’s recent comments about the next-generation PlayStation 6 console. Additionally, read about potential PS Plus price increases that could be on the way as the company aims to “maximise profitability”.

    Helldivers 2
    Platform: PC, PlayStation 5
    Genre: Action, Shooter, Third Person
    VideoGamer score: 8

    #playstation #finally #removes #regional #restrictions
    PlayStation finally removes regional restrictions from Helldivers 2 and more after infuriating gamers everywhere 
    You can trust VideoGamer. Our team of gaming experts spend hours testing and reviewing the latest games, to ensure you're reading the most comprehensive guide possible. Rest assured, all imagery and advice is unique and original. Check out how we test and review games here PlayStation’s annoying regional restrictions on PC games have proved infuriating for gamers all across the world. While most gamers were unaffected, those in some regions found themselves unable to play games like Helldivers 2 and other titles due to the restrictions.  Thankfully, after months of complaints, it appears that PlayStation is finally removing the regional restrictions of its PC releases for a large number of countries. However, not every title has been altered at the time of writing.  Regional restrictions removed from Helldivers 2 and more  As spotted by players online, a number of Steam database updates have changed the regional restrictions of PlayStation games on PC.  Games such as Helldivers 2, Spider-Man 2, God of War: Ragnarok and The Last of Us: Part 2 are now available to purchase in a large number of additional countries. The change appears to be rolling out to PlayStation PC releases at the time of writing. It’s been a long time coming, and the introduction of the restrictions last year was a huge controversy for the company. Since the restrictions were put in place, players who previously purchased Helldivers 2 were unable to play the title online without a VPN.  Additionally, Ghost of Tsushima could be played in a number of countries, but its Legends multiplayer mode was inaccessible due to the regional issues.  Honestly, PlayStation should’ve removed these restrictions far quicker than they initially did. However, the phrase “better late than never” exists for a reason, and we’re happy that more gamers around the world are no longer punished for simply being born in a different country.  
    For more PlayStation news, read the company’s recent comments about the next-generation PlayStation 6 console. Additionally, read about potential PS Plus price increases that could be on the way as the company aims to “maximise profitability”.

    Helldivers 2. Platform(s): PC, PlayStation 5. Genre(s): Action, Shooter, Third Person.
  • Mirela Cialai Q&A: Customer Engagement Book Interview

    Reading Time: 9 minutes
    In the ever-evolving landscape of customer engagement, staying ahead of the curve is not just advantageous, it’s essential.
    That’s why, for Chapter 7 of “The Customer Engagement Book: Adapt or Die,” we sat down with Mirela Cialai, a seasoned expert in CRM and Martech strategies at brands like Equinox. Mirela brings a wealth of knowledge in aligning technology roadmaps with business goals, shifting organizational focuses from acquisition to retention, and leveraging hyper-personalization to drive success.
    In this interview, Mirela dives deep into building robust customer engagement technology roadmaps. She unveils the “PAPER” framework—Plan, Audit, Prioritize, Execute, Refine—a simple yet effective strategy for marketers.
    You’ll gain insights into identifying gaps in your Martech stack, ensuring data accuracy, and prioritizing initiatives that deliver the greatest impact and ROI.
    Whether you’re navigating data silos, striving for cross-functional alignment, or aiming for seamless tech integration, Mirela’s expertise provides practical solutions and actionable takeaways.

     
    Mirela Cialai Q&A Interview
    1. How do you define the vision for a customer engagement platform roadmap in alignment with the broader business goals? Can you share any examples of successful visions from your experience?

    Defining the vision for the roadmap in alignment with the broader business goals involves creating a strategic framework that connects the team’s objectives with the organization’s overarching mission or primary objectives.

    This could be revenue growth, customer retention, market expansion, or operational efficiency.
    We then break down these goals into actionable areas where the team can contribute, such as improving engagement, increasing lifetime value, or driving acquisition.
    We articulate how the team will support business goals by defining the KPIs that link CRM outcomes — the team’s outcomes — to business goals.
    In a previous role, the CRM team I was leading faced significant challenges due to the lack of attribution capabilities and a reliance on surface-level metrics such as open rates and click-through rates to measure performance.
    This approach made it difficult to quantify the impact of our efforts on broader business objectives such as revenue growth.
    Recognizing this gap, I worked on defining a vision for the CRM team to address these shortcomings.
    Our vision was to drive measurable growth through enhanced data accuracy and improved attribution capabilities, which allowed us to deliver targeted, data-driven, and personalized customer experiences.
    To bring this vision to life, I developed a roadmap that focused on first improving data accuracy, building our attribution capabilities, and delivering personalization at scale.

    By aligning the vision with these strategic priorities, we were able to demonstrate the tangible impact of our efforts on the key business goals.

    2. What steps did you take to ensure data accuracy?
    The data team was very diligent in ensuring that our data warehouse had accurate data.
    So taking that as the source of truth, we started cleaning the data in all the other platforms that were integrated with our data warehouse — our CRM platform, our attribution analytics platform, etc.

    That’s where we started, looking at all the different integrations and ensuring that the data flows were correct and that we had all the right flows in place. And also validating and cleaning our email database — that helped, having more accurate data.

    3. How do you recommend shifting organizational focus from acquisition to retention within a customer engagement strategy?
    Shifting an organization’s focus from acquisition to retention requires a cultural and strategic shift, emphasizing the immense value that existing customers bring to long-term growth and profitability.
    I would start by quantifying the value of retention, showcasing how retaining customers is significantly more cost-effective than acquiring new ones. Research consistently shows that increasing retention rates by just 5% can boost profits by anywhere from 25% to 95%.
    This data helps make a compelling case to stakeholders about the importance of prioritizing retention.
    Next, I would link retention to core business goals by demonstrating how enhancing customer lifetime value and loyalty can directly drive revenue growth.
    This involves shifting the organization’s focus to retention-specific metrics such as churn rate, repeat purchase rate, and customer LTV. These metrics provide actionable insights into customer behaviors and highlight the financial impact of retention initiatives, ensuring alignment with the broader company objectives.
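    The retention metrics named above can be made concrete with a small calculation. The sketch below is illustrative only: the order records, period counts, and the simple LTV proxy (average order value times average orders per customer) are invented assumptions, not a specific vendor's definitions.

```python
# Illustrative sketch of the retention metrics mentioned above:
# churn rate, repeat purchase rate, and a simple customer LTV proxy.
# All records and formulas here are hypothetical assumptions.
from collections import Counter

orders = [  # (customer_id, order_value) -- invented sample data
    ("c1", 120.0), ("c1", 80.0), ("c2", 200.0),
    ("c3", 50.0), ("c3", 60.0), ("c3", 40.0),
]
customers_start, customers_end, new_customers = 100, 92, 5

orders_per_customer = Counter(cid for cid, _ in orders)

# Repeat purchase rate: share of customers with more than one order.
repeat_purchase_rate = sum(
    1 for n in orders_per_customer.values() if n > 1
) / len(orders_per_customer)

# Churn rate over the period: customers lost / customers at start.
churned = customers_start - (customers_end - new_customers)
churn_rate = churned / customers_start

# A simple LTV proxy: average order value * average orders per customer.
avg_order_value = sum(v for _, v in orders) / len(orders)
avg_orders = len(orders) / len(orders_per_customer)
ltv = avg_order_value * avg_orders

print(f"repeat purchase rate: {repeat_purchase_rate:.0%}")
print(f"churn rate: {churn_rate:.0%}")
print(f"LTV proxy: {ltv:.2f}")
```

    In practice these figures would come from the data warehouse described earlier, so that the retention KPIs share the same source of truth as the rest of the reporting.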

    By framing retention as a driver of sustainable growth, the organization can see it not as a competing priority, but as a complementary strategy to acquisition, ultimately leading to a more balanced and effective customer engagement strategy.

    4. What are the key steps in analyzing a brand’s current Martech stack capabilities to identify gaps and opportunities for improvement?
    Developing a clear understanding of the Martech stack’s current state and ensuring it aligns with a brand’s strategic needs and future goals requires a structured and strategic approach.
    The process begins with defining what success looks like in terms of technology capabilities such as scalability, integration, automation, and data accessibility, and linking these capabilities directly to the brand’s broader business objectives.
    I start by taking an inventory of all tools currently in use, including their purpose, owner, and key functionalities; assessing whether these tools are being used to their full potential or whether features remain unused; and reviewing how well the tools integrate with one another and with our core systems, such as the data warehouse.
    I also compare each tool’s capabilities and results against industry standards and competitor practices, look for missing functionalities such as personalization, omnichannel orchestration, or advanced analytics, and identify overlapping tools that could be consolidated to save costs and streamline workflows.
    Finally, I review the costs of the current tools against their impact on business outcomes and identify technologies that could reduce costs, increase efficiency, or deliver higher ROI through enhanced capabilities.

    Establish a regular review cycle for the Martech stack to ensure it evolves alongside the business and the technological landscape.
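    The audit steps above (inventory, utilization check, integration check, overlap check) can be sketched as a simple script over a tool inventory. The tool names, fields, and the 50% utilization threshold below are invented assumptions for illustration, not a prescribed methodology.

```python
# Hypothetical sketch of the Martech stack audit described above:
# inventory each tool, flag underused ones, flag tools not integrated
# with the data warehouse, and spot overlapping categories that are
# candidates for consolidation. All data here is invented.
from collections import Counter

tools = [
    {"name": "ESP A", "category": "email", "features_used": 4,
     "features_total": 10, "integrates_with_dwh": True},
    {"name": "Push B", "category": "push", "features_used": 7,
     "features_total": 8, "integrates_with_dwh": False},
    {"name": "ESP C", "category": "email", "features_used": 2,
     "features_total": 12, "integrates_with_dwh": True},
]

# Underused: less than half of the tool's features are in use
# (an arbitrary illustrative threshold).
underused = [t["name"] for t in tools
             if t["features_used"] / t["features_total"] < 0.5]

# Not integrated with the data warehouse: a data-silo risk.
unintegrated = [t["name"] for t in tools if not t["integrates_with_dwh"]]

# Overlapping categories: more than one tool doing the same job.
by_category = Counter(t["category"] for t in tools)
overlapping = [c for c, n in by_category.items() if n > 1]

print("underused:", underused)
print("not integrated:", unintegrated)
print("overlapping categories:", overlapping)
```

    Even a lightweight inventory like this makes the regular review cycle easier to run, because the same checks can be repeated each quarter as the stack evolves.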

    5. How do you evaluate whether a company’s tech stack can support innovative customer-focused campaigns, and what red flags should marketers look out for?
    I recommend taking a structured approach and first ensure there is seamless integration across all tools to support a unified customer view and data sharing across the different channels.
    Determine whether the stack can handle increasing data volumes, larger audiences, and additional channels as campaigns grow. Check whether it supports dynamic content, behavior-based triggers, and advanced segmentation, and whether it can process and act on data in real time through emerging technologies like AI/ML predictive analytics, enabling marketers to launch responsive and timely campaigns.
    Most importantly, we need to ensure that the stack offers robust reporting tools that provide actionable insights, allowing teams to track performance and optimize campaigns.
    Some of the red flags are: data silos, where customer data is fragmented across platforms and not easily accessible or integrated; an inability to process or respond to customer behavior in real time; a reliance on manual intervention for tasks like segmentation, data extraction, and campaign deployment; and poor scalability.

    If the stack struggles with growing data volumes or expanding to new channels, it won’t support the company’s evolving needs.

    6. What role do hyper-personalization and timely communication play in a successful customer engagement strategy? How do you ensure they’re built into the technology roadmap?
    Hyper-personalization and timely communication are essential components of a successful customer engagement strategy because they create meaningful, relevant, and impactful experiences that deepen the relationship with customers, enhance loyalty, and drive business outcomes.
    Hyper-personalization leverages data to deliver tailored content that resonates with each individual based on their preferences, behavior, or past interactions, and timely communication ensures these personalized interactions occur at the most relevant moments, which ultimately increases their impact.
    Customers are more likely to engage with messages that feel relevant and align with their needs, and real-time triggers such as cart abandonment or post-purchase upsells capitalize on moments when customers are most likely to convert.

    By embedding these capabilities into the roadmap through data integration, AI-driven insights, automation, and continuous optimization, we can deliver impactful, relevant, and timely experiences that foster deeper customer relationships and drive long-term success.

    7. What’s your approach to breaking down the customer engagement technology roadmap into manageable phases? How do you prioritize the initiatives?
    To create a manageable roadmap, we need to divide it into distinct phases, starting with building the foundation by addressing data cleanup, system integrations, and establishing metrics, which lays the groundwork for success.
    Next, we can focus on early wins and quick impact by launching behavior-based campaigns, automating workflows, and improving personalization to drive immediate value.
    Then we can move to optimization and expansion, incorporating predictive analytics, cross-channel orchestration, and refined attribution models to enhance our capabilities.
    Finally, prioritize innovation and scalability, leveraging AI/ML for hyper-personalization, scaling campaigns to new markets, and ensuring the system is equipped for future growth.
    By starting with foundational projects, delivering quick wins, and building towards scalable innovation, we can drive measurable outcomes while maintaining our agility to adapt to evolving needs.

    In terms of prioritizing initiatives effectively, I would focus on projects that deliver the greatest impact on business goals, on customer experience and ROI, while we consider feasibility, urgency, and resource availability.

    In the past, I’ve used frameworks like the Impact/Effort Matrix to identify high-impact, low-effort initiatives and ensure that the most critical projects are addressed first.
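    An Impact/Effort Matrix of the kind mentioned above can be sketched in a few lines: score each initiative on impact and effort, assign it a quadrant, and rank high-impact, low-effort work first. The initiative names, 1-to-5 scores, and quadrant thresholds below are invented examples, not scores from the interview.

```python
# Minimal sketch of Impact/Effort prioritization: each initiative gets
# an impact and effort score (1-5, invented here), a quadrant label,
# and a rank (highest impact first, lowest effort breaking ties).
initiatives = {
    "data cleanup":             {"impact": 5, "effort": 2},
    "predictive analytics":     {"impact": 4, "effort": 5},
    "behavior-based campaigns": {"impact": 4, "effort": 2},
    "new channel expansion":    {"impact": 3, "effort": 4},
}

def quadrant(score):
    """Classify an initiative into one of the four matrix quadrants."""
    hi_impact, lo_effort = score["impact"] >= 4, score["effort"] <= 3
    if hi_impact and lo_effort:
        return "quick win"      # do first
    if hi_impact:
        return "major project"  # plan carefully
    if lo_effort:
        return "fill-in"        # do when spare capacity exists
    return "reconsider"         # likely not worth the effort

# Rank: highest impact first; lower effort wins ties.
ranked = sorted(initiatives, key=lambda n: (
    -initiatives[n]["impact"], initiatives[n]["effort"]))

for name in ranked:
    print(f"{name}: {quadrant(initiatives[name])}")
```

    The point of the exercise is less the exact scores than the shared vocabulary: once stakeholders agree on the quadrants, the sequencing conversation becomes much shorter.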
    8. How do you ensure cross-functional alignment around this roadmap? What processes have worked best for you?
    Ensuring cross-functional alignment requires clear communication, collaborative planning, and shared accountability.
    We need to establish a shared understanding of the roadmap’s purpose and how it ties to the company’s overall goals by clearly articulating the “why” behind the roadmap and how each team can contribute to its success.
    To foster buy-in and ensure the roadmap reflects diverse perspectives and needs, we need to involve all stakeholders early on during the roadmap development and clearly outline each team’s role in executing the roadmap to ensure accountability across the different teams.

    To keep teams informed and aligned, we use meetings such as roadmap kickoff sessions and regular check-ins to share updates, address challenges collaboratively, and celebrate milestones together.

    9. If you were to outline a simple framework for marketers to follow when building a customer engagement technology roadmap, what would it look like?
    A simple framework for marketers to follow when building the roadmap can be summarized in five clear steps: Plan, Audit, Prioritize, Execute, and Refine.
    In one word: PAPER. Here’s how it breaks down.

    Plan: We lay the groundwork for the roadmap by defining the CRM strategy and aligning it with the business goals.
    Audit: We evaluate the current state of our CRM capabilities. We conduct a comprehensive assessment of our tools, our data, the processes, and team workflows to identify any potential gaps.
    Prioritize: We rank initiatives based on impact, feasibility, and ROI potential.
    Execute: We implement the roadmap in manageable phases.
    Refine: We continuously improve CRM performance and refine the roadmap.

    So the PAPER framework — Plan, Audit, Prioritize, Execute, and Refine — provides a structured, iterative approach allowing marketers to create a scalable and impactful customer engagement strategy.

    10. What are the most common challenges marketers face in creating or executing a customer engagement strategy, and how can they address these effectively?
    The most critical is when the customer data is siloed across different tools and platforms, making it very difficult to get a unified view of the customer. This limits the ability to deliver personalized and consistent experiences.

    The solution is to invest in tools that can centralize data from all touchpoints and ensure seamless integration between different platforms to create a single source of truth.

    Another challenge is the lack of clear metrics and ROI measurement and the inability to connect engagement efforts to tangible business outcomes, making it very hard to justify investment or optimize strategies.
    The solution for that is to define clear KPIs at the outset and use attribution models to link customer interactions to revenue and other key outcomes.
    Overcoming internal silos is another challenge where there is misalignment between teams, which can lead to inconsistent messaging and delayed execution.
    A solution to this is to foster cross-functional collaboration through shared goals, regular communication, and joint planning sessions.
    Besides these, other challenges marketers can face are delivering personalization at scale, keeping up with changing customer expectations, resource and budget constraints, resistance to change, and others.
    While creating and executing a customer engagement strategy can be challenging, these obstacles can be addressed through strategic planning, leveraging the right tools, fostering collaboration, and staying adaptable to customer needs and industry trends.

    By tackling these challenges proactively, marketers can deliver impactful customer-centric strategies that drive long-term success.

    11. What are the top takeaways or lessons that you’ve learned from building customer engagement technology roadmaps that others should keep in mind?
    I would say one of the most important takeaways is to ensure that the roadmap directly supports the company’s broader objectives.
    Whether the focus is on retention, customer lifetime value, or revenue growth, the roadmap must bridge the gap between high-level business goals and actionable initiatives.

    Another important lesson: The roadmap is only as effective as the data and systems it’s built upon.

    I’ve learned the importance of prioritizing foundational elements like data cleanup, integrations, and governance before tackling advanced initiatives like personalization or predictive analytics. Skipping this step can lead to inefficiencies or missed opportunities later on.
    A Customer Engagement Roadmap is a strategic tool that evolves alongside the business and its customers.

    So by aligning with business goals, building a solid foundation, focusing on impact, fostering collaboration, and remaining adaptable, you can create a roadmap that delivers measurable results and meaningful customer experiences.

     

     
    This interview Q&A was hosted with Mirela Cialai, Director of CRM & MarTech at Equinox, for Chapter 7 of The Customer Engagement Book: Adapt or Die.
    Download the PDF or request a physical copy of the book here.
    The post Mirela Cialai Q&A: Customer Engagement Book Interview appeared first on MoEngage.
    WWW.MOENGAGE.COM
    Mirela Cialai Q&A: Customer Engagement Book Interview
    Reading Time: 9 minutes In the ever-evolving landscape of customer engagement, staying ahead of the curve is not just advantageous, it’s essential. That’s why, for Chapter 7 of “The Customer Engagement Book: Adapt or Die,” we sat down with Mirela Cialai, a seasoned expert in CRM and Martech strategies at brands like Equinox. Mirela brings a wealth of knowledge in aligning technology roadmaps with business goals, shifting organizational focuses from acquisition to retention, and leveraging hyper-personalization to drive success. In this interview, Mirela dives deep into building robust customer engagement technology roadmaps. She unveils the “PAPER” framework—Plan, Audit, Prioritize, Execute, Refine—a simple yet effective strategy for marketers. You’ll gain insights into identifying gaps in your Martech stack, ensuring data accuracy, and prioritizing initiatives that deliver the greatest impact and ROI. Whether you’re navigating data silos, striving for cross-functional alignment, or aiming for seamless tech integration, Mirela’s expertise provides practical solutions and actionable takeaways.   Mirela Cialai Q&A Interview 1. How do you define the vision for a customer engagement platform roadmap in alignment with the broader business goals? Can you share any examples of successful visions from your experience? Defining the vision for the roadmap in alignment with the broader business goals involves creating a strategic framework that connects the team’s objectives with the organization’s overarching mission or primary objectives. This could be revenue growth, customer retention, market expansion, or operational efficiency. We then break down these goals into actionable areas where the team can contribute, such as improving engagement, increasing lifetime value, or driving acquisition. We articulate how the team will support business goals by defining the KPIs that link CRM outcomes — the team’s outcomes — to business goals. 
In a previous role, the CRM team I was leading faced significant challenges due to the lack of attribution capabilities and a reliance on surface-level metrics such as open rates and click-through rates to measure performance. This approach made it difficult to quantify the impact of our efforts on broader business objectives such as revenue growth. Recognizing this gap, I worked on defining a vision for the CRM team to address these shortcomings. Our vision was to drive measurable growth through enhanced data accuracy and improved attribution capabilities, which allowed us to deliver targeted, data-driven, and personalized customer experiences. To bring this vision to life, I developed a roadmap that focused on first improving data accuracy, building our attribution capabilities, and delivering personalization at scale. By aligning the vision with these strategic priorities, we were able to demonstrate the tangible impact of our efforts on the key business goals. 2. What steps did you take to ensure data accuracy? The data team was very diligent in ensuring that our data warehouse had accurate data. So taking that as the source of truth, we started cleaning the data in all the other platforms that were integrated with our data warehouse — our CRM platform, our attribution analytics platform, etc. That’s where we started, looking at all the different integrations and ensuring that the data flows were correct and that we had all the right flows in place. We also validated and cleaned our email database, which helped us maintain more accurate data. 3. How do you recommend shifting organizational focus from acquisition to retention within a customer engagement strategy? Shifting an organization’s focus from acquisition to retention requires a cultural and strategic shift, emphasizing the immense value that existing customers bring to long-term growth and profitability. 
I would start by quantifying the value of retention, showcasing how retaining customers is significantly more cost-effective than acquiring new ones. Research consistently shows that increasing retention rates by just 5% can boost profits by anywhere from 25% to 95%. This data helps make a compelling case to stakeholders about the importance of prioritizing retention. Next, I would link retention to core business goals by demonstrating how enhancing customer lifetime value and loyalty can directly drive revenue growth. This involves shifting the organization’s focus to retention-specific metrics such as churn rate, repeat purchase rate, and customer LTV. These metrics provide actionable insights into customer behaviors and highlight the financial impact of retention initiatives, ensuring alignment with the broader company objectives. By framing retention as a driver of sustainable growth, the organization can see it not as a competing priority, but as a complementary strategy to acquisition, ultimately leading to a more balanced and effective customer engagement strategy. 4. What are the key steps in analyzing a brand’s current Martech stack capabilities to identify gaps and opportunities for improvement? Developing a clear understanding of the Martech stack’s current state and ensuring it aligns with a brand’s strategic needs and future goals requires a structured and strategic approach. The process begins with defining what success looks like in terms of technology capabilities such as scalability, integration, automation, and data accessibility, and linking these capabilities directly to the brand’s broader business objectives. I start by doing an inventory of all tools currently in use, including their purpose, owner, and key functionalities, assessing if these tools are being used to their full potential or if there are features that remain unused, and reviewing how well tools integrate with one another and with our core systems, the data warehouse. 
Next, compare the capabilities and results of each tool against industry standards and competitor practices, look for missing functionalities such as personalization, omnichannel orchestration, or advanced analytics, and identify overlapping tools that could be consolidated to save costs and streamline workflows. Finally, review the costs of the current tools against their impact on business outcomes and identify technologies that could reduce costs, increase efficiency, or deliver higher ROI through enhanced capabilities. Establish a regular review cycle for the Martech stack to ensure it evolves alongside the business and the technological landscape. 5. How do you evaluate whether a company’s tech stack can support innovative customer-focused campaigns, and what red flags should marketers look out for? I recommend taking a structured approach: first, ensure there is seamless integration across all tools to support a unified customer view and data sharing across the different channels. Determine if the stack can handle increasing data volumes, larger audiences, and additional channels as the campaigns grow, and check if it supports dynamic content, behavior-based triggers, and advanced segmentation, and whether it can process and act on data in real time through emerging technologies like AI/ML predictive analytics to enable marketers to launch responsive and timely campaigns. Most importantly, we need to ensure that the stack offers robust reporting tools that provide actionable insights, allowing teams to track performance and optimize campaigns. Some of the red flags are: data silos where customer data is fragmented across platforms and not easily accessible or integrated, inability to process or respond to customer behavior in real time, a reliance on manual intervention for tasks like segmentation, data extraction, and campaign deployment, and poor scalability. 
If the stack struggles with growing data volumes or expanding to new channels, it won’t support the company’s evolving needs. 6. What role do hyper-personalization and timely communication play in a successful customer engagement strategy? How do you ensure they’re built into the technology roadmap? Hyper-personalization and timely communication are essential components of a successful customer engagement strategy because they create meaningful, relevant, and impactful experiences that deepen the relationship with customers, enhance loyalty, and drive business outcomes. Hyper-personalization leverages data to deliver tailored content that resonates with each individual based on their preferences, behavior, or past interactions, and timely communication ensures these personalized interactions occur at the most relevant moments, which ultimately increases their impact. Customers are more likely to engage with messages that feel relevant and align with their needs, and real-time triggers such as cart abandonment or post-purchase upsells capitalize on moments when customers are most likely to convert. By embedding these capabilities into the roadmap through data integration, AI-driven insights, automation, and continuous optimization, we can deliver impactful, relevant, and timely experiences that foster deeper customer relationships and drive long-term success. 7. What’s your approach to breaking down the customer engagement technology roadmap into manageable phases? How do you prioritize the initiatives? To create a manageable roadmap, we need to divide it into distinct phases, starting with building the foundation by addressing data cleanup, system integrations, and establishing metrics, which lays the groundwork for success. Next, we can focus on early wins and quick impact by launching behavior-based campaigns, automating workflows, and improving personalization to drive immediate value. 
Then we can move to optimization and expansion, incorporating predictive analytics, cross-channel orchestration, and refined attribution models to enhance our capabilities. Finally, prioritize innovation and scalability, leveraging AI/ML for hyper-personalization, scaling campaigns to new markets, and ensuring the system is equipped for future growth. By starting with foundational projects, delivering quick wins, and building towards scalable innovation, we can drive measurable outcomes while maintaining our agility to adapt to evolving needs. In terms of prioritizing initiatives effectively, I would focus on projects that deliver the greatest impact on business goals, on customer experience and ROI, while we consider feasibility, urgency, and resource availability. In the past, I’ve used frameworks like Impact Effort Matrix to identify the high-impact, low-effort initiatives and ensure that the most critical projects are addressed first. 8. How do you ensure cross-functional alignment around this roadmap? What processes have worked best for you? Ensuring cross-functional alignment requires clear communication, collaborative planning, and shared accountability. We need to establish a shared understanding of the roadmap’s purpose and how it ties to the company’s overall goals by clearly articulating the “why” behind the roadmap and how each team can contribute to its success. To foster buy-in and ensure the roadmap reflects diverse perspectives and needs, we need to involve all stakeholders early on during the roadmap development and clearly outline each team’s role in executing the roadmap to ensure accountability across the different teams. To keep teams informed and aligned, we use meetings such as roadmap kickoff sessions and regular check-ins to share updates, address challenges collaboratively, and celebrate milestones together. 9. 
If you were to outline a simple framework for marketers to follow when building a customer engagement technology roadmap, what would it look like? A simple framework for marketers to follow when building the roadmap can be summarized in five clear steps: Plan, Audit, Prioritize, Execute, and Refine. In one word: PAPER. Here’s how it breaks down. Plan: We lay the groundwork for the roadmap by defining the CRM strategy and aligning it with the business goals. Audit: We evaluate the current state of our CRM capabilities. We conduct a comprehensive assessment of our tools, our data, the processes, and team workflows to identify any potential gaps. Prioritize: We rank initiatives based on impact, feasibility, and ROI potential. Execute: We implement the roadmap in manageable phases. Refine: We continuously improve CRM performance and refine the roadmap. So the PAPER framework — Plan, Audit, Prioritize, Execute, and Refine — provides a structured, iterative approach allowing marketers to create a scalable and impactful customer engagement strategy. 10. What are the most common challenges marketers face in creating or executing a customer engagement strategy, and how can they address these effectively? The most critical is when the customer data is siloed across different tools and platforms, making it very difficult to get a unified view of the customer. This limits the ability to deliver personalized and consistent experiences. The solution is to invest in tools that can centralize data from all touchpoints and ensure seamless integration between different platforms to create a single source of truth. Another challenge is the lack of clear metrics and ROI measurement and the inability to connect engagement efforts to tangible business outcomes, making it very hard to justify investment or optimize strategies. The solution for that is to define clear KPIs at the outset and use attribution models to link customer interactions to revenue and other key outcomes. 
Overcoming internal silos is another challenge where there is misalignment between teams, which can lead to inconsistent messaging and delayed execution. A solution to this is to foster cross-functional collaboration through shared goals, regular communication, and joint planning sessions. Besides these, other challenges marketers can face are delivering personalization at scale, keeping up with changing customer expectations, resource and budget constraints, resistance to change, and others. While creating and executing a customer engagement strategy can be challenging, these obstacles can be addressed through strategic planning, leveraging the right tools, fostering collaboration, and staying adaptable to customer needs and industry trends. By tackling these challenges proactively, marketers can deliver impactful customer-centric strategies that drive long-term success. 11. What are the top takeaways or lessons that you’ve learned from building customer engagement technology roadmaps that others should keep in mind? I would say one of the most important takeaways is to ensure that the roadmap directly supports the company’s broader objectives. Whether the focus is on retention, customer lifetime value, or revenue growth, the roadmap must bridge the gap between high-level business goals and actionable initiatives. Another important lesson: The roadmap is only as effective as the data and systems it’s built upon. I’ve learned the importance of prioritizing foundational elements like data cleanup, integrations, and governance before tackling advanced initiatives like personalization or predictive analytics. Skipping this step can lead to inefficiencies or missed opportunities later on. A Customer Engagement Roadmap is a strategic tool that evolves alongside the business and its customers. 
So by aligning with business goals, building a solid foundation, focusing on impact, fostering collaboration, and remaining adaptable, you can create a roadmap that delivers measurable results and meaningful customer experiences.     This interview Q&A was hosted with Mirela Cialai, Director of CRM & MarTech at Equinox, for Chapter 7 of The Customer Engagement Book: Adapt or Die. Download the PDF or request a physical copy of the book here. The post Mirela Cialai Q&A: Customer Engagement Book Interview appeared first on MoEngage.
  • New Zealand’s Email Security Requirements for Government Organizations: What You Need to Know

    The Secure Government Email Common Implementation Framework
    New Zealand’s government is introducing a comprehensive email security framework designed to protect official communications from phishing and domain spoofing. This new framework, which will be mandatory for all government agencies by October 2025, establishes clear technical standards to enhance email security and retire the outdated SEEMail service. 
    Key Takeaways

    All NZ government agencies must comply with new email security requirements by October 2025.
    The new framework strengthens trust and security in government communications by preventing spoofing and phishing.
    The framework mandates TLS 1.2+, SPF, DKIM, DMARC with p=reject, MTA-STS, and DLP controls.
    EasyDMARC simplifies compliance with our guided setup, monitoring, and automated reporting.

    Start a Free Trial

    What is the Secure Government Email Common Implementation Framework?
    The Secure Government Email Common Implementation Framework is a new government-led initiative in New Zealand designed to standardize email security across all government agencies. Its main goal is to secure external email communication, reduce domain spoofing in phishing attacks, and replace the legacy SEEMail service.
    Why is New Zealand Implementing New Government Email Security Standards?
    The framework was developed by New Zealand’s Department of Internal Affairs as part of its role in managing ICT Common Capabilities. It leverages modern email security controls via the Domain Name System (DNS) to enable the retirement of the legacy SEEMail service and provide:

    Encryption for transmission security
    Digital signing for message integrity
    Basic non-repudiation
    Domain spoofing protection

    These improvements apply to all emails, not just those routed through SEEMail, offering broader protection across agency communications.
    What Email Security Technologies Are Required by the New NZ SGE Framework?
    The SGE Framework outlines the following key technologies that agencies must implement:

    TLS 1.2 or higher with implicit TLS enforced
    TLS-RPT
    SPF
    DKIM
    DMARC with reporting
    MTA-STS
    Data Loss Prevention controls

    These technologies work together to ensure encrypted email transmission, validate sender identity, prevent unauthorized use of domains, and reduce the risk of sensitive data leaks.

    Get in touch

    When Do NZ Government Agencies Need to Comply with this Framework?
    All New Zealand government agencies are expected to fully implement the Secure Government Email Common Implementation Framework by October 2025. Agencies should begin their planning and deployment now to ensure full compliance by the deadline.
    The All of Government Secure Email Common Implementation Framework v1.0
    What are the Mandated Requirements for Domains?
    Below are the exact requirements for all email-enabled domains under the new framework.
    TLS: Minimum TLS 1.2. TLS 1.1, 1.0, SSL, or clear-text transmission is not permitted.
    TLS-RPT: All email-sending domains must have TLS reporting enabled.
    SPF: Must exist and end with -all.
    DKIM: All outbound email from every sending service must be DKIM-signed at the final hop.
    DMARC: Policy of p=reject on all email-enabled domains. adkim=s is recommended when not bulk-sending.
    MTA-STS: Enabled and set to enforce.
    Implicit TLS: Must be configured and enforced for every connection.
    Data Loss Prevention: Enforce in line with the New Zealand Information Security Manual (NZISM) and Protective Security Requirements.
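    Taken together, the mandated requirements above boil down to a handful of DNS records. The following is an illustrative sketch only; the domain, include mechanism, and report addresses are hypothetical placeholders, not values from the framework document:

```txt
; SPF: must exist and end with -all (hard fail)
example.govt.nz.            IN TXT "v=spf1 include:_spf.example.govt.nz -all"

; DMARC: p=reject on all email-enabled domains; adkim=s when not bulk-sending
_dmarc.example.govt.nz.     IN TXT "v=DMARC1; p=reject; adkim=s; rua=mailto:dmarc@example.govt.nz"

; TLS-RPT: TLS reporting enabled on all email-sending domains
_smtp._tls.example.govt.nz. IN TXT "v=TLSRPTv1; rua=mailto:tlsrpt@example.govt.nz"

; MTA-STS: policy advertised in DNS and served over HTTPS in enforce mode
_mta-sts.example.govt.nz.   IN TXT "v=STSv1; id=20250101T000000"
```

    DKIM records are published separately, per sending service, under selector names in the _domainkey namespace.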
    Compliance Monitoring and Reporting
    The All of Government Service Delivery (AoGSD) team will be monitoring compliance with the framework. Monitoring will initially cover SPF, DMARC, and MTA-STS settings and will be expanded to include DKIM. Changes to these settings will be monitored, enabling reporting on email security compliance across all government agencies. Ongoing monitoring will highlight changes to domains, ensure new domains are set up with security in place, and monitor the implementation of future email security technologies.
    Should compliance changes occur, such as an agency’s SPF record being changed from -all to ~all, this will be captured so that the AoGSD Security Team can investigate. They will then communicate directly with the agency to determine if an issue exists or if an error has occurred, reviewing each case individually.
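    A downgrade like the one described, an SPF record slipping from -all to ~all, is straightforward to flag programmatically. Here is a minimal sketch in Python, assuming the TXT record value has already been retrieved from DNS (the records shown are hypothetical):

```python
def spf_is_hard_fail(spf_record: str) -> bool:
    """Return True if an SPF record enforces hard fail, i.e. ends with '-all'.

    The framework requires every email-enabled domain's SPF record to
    exist and end with '-all'; '~all' (soft fail) is non-compliant.
    """
    record = spf_record.strip().lower()
    return record.startswith("v=spf1") and record.endswith("-all")

# Compliant record: hard fail for unauthorized senders.
assert spf_is_hard_fail("v=spf1 include:_spf.example.govt.nz -all")
# Non-compliant downgrade: soft fail should be flagged for investigation.
assert not spf_is_hard_fail("v=spf1 include:_spf.example.govt.nz ~all")
```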
    Deployment Checklist for NZ Government Compliance

    Enforce TLS 1.2 minimum, implicit TLS, MTA-STS & TLS-RPT
    SPF with -all
    DKIM on all outbound email
    DMARC p=reject 
    adkim=s where suitable
    For non-email/parked domains: SPF -all, empty DKIM, DMARC reject strict
    Compliance dashboard
    Inbound DMARC evaluation enforced
    DLP aligned with NZISM

    Start a Free Trial

    How EasyDMARC Can Help Government Agencies Comply
    EasyDMARC provides a comprehensive email security solution that simplifies the deployment and ongoing management of DNS-based email security protocols like SPF, DKIM, and DMARC with reporting. Our platform offers automated checks, real-time monitoring, and a guided setup to help government organizations quickly reach compliance.
    1. TLS-RPT / MTA-STS audit
    EasyDMARC lets you enable the Managed MTA-STS and TLS-RPT option with a single click. We provide the required DNS records and continuously monitor them for issues, delivering reports on TLS negotiation problems. This helps agencies ensure secure email transmission and quickly detect delivery or encryption failures.

    Note: In this screenshot, you can see how to deploy MTA-STS and TLS Reporting by adding just three CNAME records provided by EasyDMARC. It’s recommended to start in “testing” mode, evaluate the TLS-RPT reports, and then gradually switch your MTA-STS policy to “enforce”. The process is simple and takes just a few clicks.

    As shown above, EasyDMARC parses incoming TLS reports into a centralized dashboard, giving you clear visibility into delivery and encryption issues across all sending sources.
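    For reference, the MTA-STS policy that the DNS record points to is a small plain-text file served at https://mta-sts.<your-domain>/.well-known/mta-sts.txt. A minimal example in enforce mode follows; the MX hostnames are hypothetical placeholders:

```txt
version: STSv1
mode: enforce
mx: mx1.example.govt.nz
mx: mx2.example.govt.nz
max_age: 604800
```

    Starting with mode: testing, as recommended above, lets you evaluate TLS-RPT reports before switching the policy to enforce.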
    2. SPF with “-all”
    In the EasyDMARC platform, you can run the SPF Record Generator to create a compliant record. Publish your v=spf1 record with “-all” to enforce a hard fail for unauthorized senders and prevent spoofed emails from passing SPF checks. This strengthens your domain’s protection against impersonation.

    Note: It is highly recommended to start adjusting your SPF record only after you begin receiving DMARC reports and identifying your legitimate email sources. As we’ll explain in more detail below, both SPF and DKIM should be adjusted after you gain visibility through reports.
    Making changes without proper visibility can lead to false positives, misconfigurations, and potential loss of legitimate emails. That’s why the first step should always be setting DMARC to p=none, receiving reports, analyzing them, and then gradually fixing any SPF or DKIM issues.
    3. DKIM on all outbound email
    DKIM must be configured for all email sources sending emails on behalf of your domain. This is critical, as DKIM plays a bigger role than SPF when it comes to building domain reputation, surviving auto-forwarding, mailing lists, and other edge cases.
    As mentioned above, DMARC reports provide visibility into your email sources, allowing you to implement DKIM accordingly. If you’re using third-party services like Google Workspace, Microsoft 365, or Mimecast, you’ll need to retrieve the public DKIM key from your provider’s admin interface.
    EasyDMARC maintains a backend directory of over 1,400 email sources. We also give you detailed guidance on how to configure SPF and DKIM correctly for major ESPs. 
    Note: At the end of this article, you’ll find configuration links for well-known ESPs like Google Workspace, Microsoft 365, Zoho Mail, Amazon SES, and SendGrid – helping you avoid common misconfigurations and get aligned with SGE requirements.
    If you’re using a dedicated MTA, DKIM must be implemented manually. EasyDMARC’s DKIM Record Generator lets you generate both public and private keys for your server. The private key is stored on your MTA, while the public key must be published in your DNS.
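    Once the keypair exists, the public key is published as a TXT record under a selector name in the _domainkey namespace. A sketch of the record shape follows; the selector “s1”, the domain, and the key value are hypothetical placeholders:

```txt
s1._domainkey.example.govt.nz. IN TXT "v=DKIM1; k=rsa; p=<base64-encoded-public-key>"
```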

    4. DMARC p=reject rollout
    As mentioned in previous points, DMARC reporting is the first and most important step on your DMARC enforcement journey. Always start with a p=none policy and configure RUA reports to be sent to EasyDMARC. Use the report insights to identify and fix SPF and DKIM alignment issues, then gradually move to p=quarantine and finally p=reject once all legitimate email sources have been authenticated. 
    This phased approach ensures full protection against domain spoofing without risking legitimate email delivery.
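    The phased rollout above can be sketched as a simple record progression. This is an illustrative Python snippet, not EasyDMARC functionality; the report address is a hypothetical placeholder:

```python
# Phased DMARC rollout: monitor first, then quarantine, then reject.
PHASES = ["none", "quarantine", "reject"]

def dmarc_record(policy: str, rua: str = "mailto:dmarc-reports@example.govt.nz") -> str:
    """Build the DMARC TXT record value for a given rollout phase."""
    if policy not in PHASES:
        raise ValueError(f"unknown DMARC policy: {policy}")
    # rua= directs aggregate reports to the monitoring mailbox.
    return f"v=DMARC1; p={policy}; rua={rua}"

# Start at p=none to collect reports without affecting delivery,
# then tighten the policy once SPF/DKIM alignment issues are fixed.
for policy in PHASES:
    print(dmarc_record(policy))
```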

    5. adkim Strict Alignment Check
    This strict alignment check is not always applicable, especially if you’re using third-party bulk ESPs, such as SendGrid, that require you to set DKIM on a subdomain level. You can set adkim=s in your DMARC TXT record, or simply enable strict mode in EasyDMARC’s Managed DMARC settings. This ensures that only emails with a DKIM signature that exactly matches your domain pass alignment, adding an extra layer of protection against domain spoofing. Only do this if you are NOT a bulk sender.

    6. Securing Non-Email Enabled Domains
    The purpose of deploying email security to non-email-enabled domains, or parked domains, is to prevent messages from being spoofed from that domain. This requirement remains even if the root-level domain has sp=reject set within its DMARC record.
    Under this new framework, you must bulk import and mark parked domains as “Parked.” Crucially, this requires adjusting SPF settings to an empty record, setting DMARC to p=reject, and ensuring an empty DKIM record is in place:
    • SPF record: “v=spf1 -all”.
    • Wildcard DKIM record with an empty public key.
    • DMARC record: “v=DMARC1;p=reject;adkim=s;aspf=s;rua=mailto:…”.
    EasyDMARC allows you to add and label parked domains for free. This is important because it helps you monitor any activity from these domains and ensure they remain protected with a strict DMARC policy of p=reject.
    7. Compliance Dashboard
    Use EasyDMARC’s Domain Scanner to assess the security posture of each domain with a clear compliance score and risk level. The dashboard highlights configuration gaps and guides remediation steps, helping government agencies stay on track toward full compliance with the SGE Framework.

    8. Inbound DMARC Evaluation Enforced
    You don’t need to apply any changes if you’re using Google Workspace, Microsoft 365, or other major mailbox providers. Most of them already enforce DMARC evaluation on incoming emails.
    However, some legacy Microsoft 365 setups may still quarantine emails that fail DMARC checks, even when the sending domain has a p=reject policy, instead of rejecting them. This behavior can be adjusted directly from your Microsoft Defender portal. You can read more about this in our step-by-step guide on how to set up SPF, DKIM, and DMARC from Microsoft Defender.
    If you’re using a third-party mail provider that doesn’t enforce having a DMARC policy for incoming emails, which is rare, you’ll need to contact their support to request a configuration change.
    9. Data Loss Prevention Aligned with NZISM
    The New Zealand Information Security Manual (NZISM) is the New Zealand Government’s manual on information assurance and information systems security. It includes guidance on data loss prevention, which must be followed to be aligned with the SGE Framework.
    Need Help Setting up SPF and DKIM for your Email Provider?
    Setting up SPF and DKIM for different ESPs often requires specific configurations. Some providers require you to publish SPF and DKIM on a subdomain, while others only require DKIM, or have different formatting rules. We’ve simplified all these steps to help you avoid misconfigurations that could delay your DMARC enforcement, or worse, block legitimate emails from reaching your recipients.
    Below you’ll find comprehensive setup guides for Google Workspace, Microsoft 365, Zoho Mail, Amazon SES, and SendGrid. You can also explore our full blog section that covers setup instructions for many other well-known ESPs.
    Remember, all this information is reflected in your DMARC aggregate reports. These reports give you live visibility into your outgoing email ecosystem, helping you analyze and fix any issues specific to a given provider.
    Here are our step-by-step guides for the most common platforms:

    Google Workspace

    Microsoft 365

    These guides will help ensure your DNS records are configured correctly as part of the Secure Government Email Framework rollout.
    Meet New Government Email Security Standards With EasyDMARC
    New Zealand’s SGE Framework sets a clear path for government agencies to enhance their email security by October 2025. With EasyDMARC, you can meet these technical requirements efficiently and with confidence. From protocol setup to continuous monitoring and compliance tracking, EasyDMARC streamlines the entire process, ensuring strong protection against spoofing, phishing, and data loss while simplifying your transition from SEEMail.
    #new #zealands #email #security #requirements
    New Zealand’s Email Security Requirements for Government Organizations: What You Need to Know
    The Secure Government Email (SGE) Common Implementation Framework

    New Zealand’s government is introducing a comprehensive email security framework designed to protect official communications from phishing and domain spoofing. The framework, which will be mandatory for all government agencies by October 2025, establishes clear technical standards to enhance email security and retire the outdated SEEMail service.

    Key Takeaways

    All NZ government agencies must comply with the new email security requirements by October 2025.
    The framework strengthens trust and security in government communications by preventing spoofing and phishing.
    It mandates TLS 1.2+, SPF, DKIM, DMARC with p=reject, MTA-STS, and DLP controls.
    EasyDMARC simplifies compliance with guided setup, monitoring, and automated reporting.

    What is the Secure Government Email Common Implementation Framework?

    The SGE Common Implementation Framework is a new government-led initiative in New Zealand designed to standardize email security across all government agencies. Its main goals are to secure external email communication, reduce domain spoofing in phishing attacks, and replace the legacy SEEMail service.

    Why is New Zealand Implementing New Government Email Security Standards?

    The framework was developed by New Zealand’s Department of Internal Affairs (DIA) as part of its role in managing ICT Common Capabilities. It leverages modern email security controls via the Domain Name System (DNS) to enable the retirement of the legacy SEEMail service and provide:

    Encryption for transmission security
    Digital signing for message integrity
    Basic non-repudiation (by allowing only authorized senders)
    Domain spoofing protection

    These improvements apply to all emails, not just those routed through SEEMail, offering broader protection across agency communications.

    What Email Security Technologies Are Required by the New NZ SGE Framework?
The SGE Framework outlines the following key technologies that agencies must implement:

TLS 1.2 or higher, with implicit TLS enforced
TLS-RPT (TLS Reporting)
SPF (Sender Policy Framework)
DKIM (DomainKeys Identified Mail)
DMARC (Domain-based Message Authentication, Reporting, and Conformance) with reporting
MTA-STS (Mail Transfer Agent Strict Transport Security)
Data Loss Prevention (DLP) controls

These technologies work together to ensure encrypted email transmission, validate sender identity, prevent unauthorized use of domains, and reduce the risk of sensitive data leaks.

When Do NZ Government Agencies Need to Comply with this Framework?

All New Zealand government agencies are expected to fully implement the SGE Common Implementation Framework by October 2025. Agencies should begin planning and deployment now to ensure full compliance by the deadline. The requirements are set out in the All of Government Secure Email Common Implementation Framework v1.0.

What are the Mandated Requirements for Domains?

Below are the exact requirements for all email-enabled domains under the new framework.

TLS: Minimum TLS 1.2. TLS 1.1, 1.0, SSL, or clear-text not permitted.
TLS-RPT: All email-sending domains must have TLS reporting enabled.
SPF: Must exist and end with -all.
DKIM: All outbound email from every sending service must be DKIM-signed at the final hop.
DMARC: Policy of p=reject on all email-enabled domains. adkim=s is recommended when not bulk-sending.
MTA-STS: Enabled and set to enforce.
Implicit TLS: Must be configured and enforced for every connection.
Data Loss Prevention: Enforce in line with the New Zealand Information Security Manual (NZISM) and Protective Security Requirements (PSR).

Compliance Monitoring and Reporting

The All of Government Service Delivery (AoGSD) team will monitor compliance with the framework. Monitoring will initially cover SPF, DMARC, and MTA-STS settings and will be expanded to include DKIM.
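The SPF and DMARC rules above are mechanical enough to check automatically, which is essentially what the compliance monitoring does. A minimal Python sketch of such checks (the record values are illustrative examples, not real agency DNS records):

```python
# Minimal sketch of checks against two of the framework's mandated rules.
# Record values are illustrative examples, not real agency DNS records.

def spf_compliant(record: str) -> bool:
    """SPF must exist and end with a hard-fail '-all'."""
    return record.startswith("v=spf1") and record.rstrip().endswith("-all")

def dmarc_compliant(record: str) -> bool:
    """DMARC policy must be p=reject on all email-enabled domains."""
    tags = dict(
        part.strip().split("=", 1)
        for part in record.split(";")
        if "=" in part
    )
    return tags.get("v") == "DMARC1" and tags.get("p") == "reject"

print(spf_compliant("v=spf1 include:_spf.example.govt.nz -all"))   # True
print(spf_compliant("v=spf1 include:_spf.example.govt.nz ~all"))   # False: ~all is a soft fail
print(dmarc_compliant("v=DMARC1; p=reject; rua=mailto:rua@example.govt.nz"))  # True
```

In practice the records would be fetched over DNS rather than passed in as strings, but the pass/fail logic is the same, for example when an SPF record slips from -all to ~all.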
Changes to these settings will be monitored, enabling reporting on email security compliance across all government agencies. Ongoing monitoring will highlight changes to domains, ensure new domains are set up with security in place, and track the implementation of future email security technologies.

Should a compliance change occur, such as an agency’s SPF record being changed from -all to ~all, it will be captured so that the AoGSD Security Team can investigate. The team will then communicate directly with the agency to determine whether an issue exists or an error has occurred, reviewing each case individually.

Deployment Checklist for NZ Government Compliance

Enforce TLS 1.2 minimum, implicit TLS, MTA-STS, and TLS-RPT
SPF with -all
DKIM on all outbound email
DMARC p=reject, with adkim=s where suitable
For non-email/parked domains: SPF -all, empty DKIM, DMARC reject strict
Compliance dashboard
Inbound DMARC evaluation enforced
DLP aligned with NZISM

How EasyDMARC Can Help Government Agencies Comply

EasyDMARC provides a comprehensive email security solution that simplifies the deployment and ongoing management of DNS-based email security protocols like SPF, DKIM, and DMARC with reporting. Our platform offers automated checks, real-time monitoring, and a guided setup to help government organizations quickly reach compliance.

1. TLS-RPT / MTA-STS Audit

EasyDMARC lets you enable the Managed MTA-STS and TLS-RPT option with a single click. We provide the required DNS records and continuously monitor them for issues, delivering reports on TLS negotiation problems. This helps agencies ensure secure email transmission and quickly detect delivery or encryption failures.

Note: You can deploy MTA-STS and TLS Reporting by adding just three CNAME records provided by EasyDMARC. It’s recommended to start in “testing” mode, evaluate the TLS-RPT reports, and then gradually switch your MTA-STS policy to “enforce”.
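Behind the managed option, MTA-STS enforcement comes down to a small plain-text policy file served over HTTPS at https://mta-sts.&lt;your-domain&gt;/.well-known/mta-sts.txt. A sketch of generating that file's content (the MX hostname below is a made-up example):

```python
# Sketch: build the MTA-STS policy file content an agency publishes at
# https://mta-sts.<domain>/.well-known/mta-sts.txt.
# The MX hostname below is a made-up example.

def mta_sts_policy(mx_hosts, mode="enforce", max_age=86400):
    """Return an RFC 8461-style policy body for the given MX hosts."""
    lines = ["version: STSv1", f"mode: {mode}"]
    lines += [f"mx: {mx}" for mx in mx_hosts]
    lines.append(f"max_age: {max_age}")
    return "\n".join(lines) + "\n"

# Start in "testing" mode, then move to "enforce" once TLS-RPT reports are clean.
print(mta_sts_policy(["mail.example.govt.nz"], mode="testing"))
print(mta_sts_policy(["mail.example.govt.nz"]))
```

The mode field mirrors the rollout advice above: publish with mode "testing" first, watch the TLS-RPT reports, and only then switch to "enforce" as the framework requires.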
The process is simple and takes just a few clicks. EasyDMARC parses incoming TLS reports into a centralized dashboard, giving you clear visibility into delivery and encryption issues across all sending sources.

2. SPF with “-all”

In the EasyDMARC platform, you can run the SPF Record Generator to create a compliant record. Publish your v=spf1 record with “-all” to enforce a hard fail for unauthorized senders and prevent spoofed emails from passing SPF checks. This strengthens your domain’s protection against impersonation.

Note: It is highly recommended to start adjusting your SPF record only after you begin receiving DMARC reports and identifying your legitimate email sources. As we’ll explain in more detail below, both SPF and DKIM should be adjusted only after you gain visibility through reports. Making changes without proper visibility can lead to false positives, misconfigurations, and potential loss of legitimate email. That’s why the first step should always be setting DMARC to p=none, receiving reports, analyzing them, and then gradually fixing any SPF or DKIM issues.

3. DKIM on All Outbound Email

DKIM must be configured for all email sources sending on behalf of your domain. This is critical, as DKIM plays a bigger role than SPF when it comes to building domain reputation and surviving auto-forwarding, mailing lists, and other edge cases. As mentioned above, DMARC reports provide visibility into your email sources, allowing you to implement DKIM accordingly. If you’re using third-party services like Google Workspace, Microsoft 365, or Mimecast, you’ll need to retrieve the public DKIM key from your provider’s admin interface. EasyDMARC maintains a backend directory of over 1,400 email sources, and we give you detailed guidance on how to configure SPF and DKIM correctly for major ESPs.
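The visibility step described above (start at p=none, collect RUA reports, identify your sources) can be illustrated with a tiny parser. DMARC aggregate reports are XML; this sketch pulls sending IPs and SPF/DKIM results from a minimal fabricated report that follows the standard aggregate-report structure, with invented values:

```python
# Sketch: extract sending sources and auth results from a DMARC aggregate
# (RUA) report. The XML below is a minimal fabricated example; real reports
# arrive as compressed XML attachments from mailbox providers.
import xml.etree.ElementTree as ET

SAMPLE_RUA = """<feedback>
  <record>
    <row>
      <source_ip>203.0.113.10</source_ip>
      <count>42</count>
      <policy_evaluated><dkim>pass</dkim><spf>fail</spf></policy_evaluated>
    </row>
  </record>
  <record>
    <row>
      <source_ip>198.51.100.7</source_ip>
      <count>3</count>
      <policy_evaluated><dkim>fail</dkim><spf>fail</spf></policy_evaluated>
    </row>
  </record>
</feedback>"""

def summarize(report_xml: str):
    """Return (source_ip, count, dkim, spf) for each record in the report."""
    root = ET.fromstring(report_xml)
    rows = []
    for row in root.iter("row"):
        rows.append((
            row.findtext("source_ip"),
            int(row.findtext("count")),
            row.findtext("policy_evaluated/dkim"),
            row.findtext("policy_evaluated/spf"),
        ))
    return rows

for ip, count, dkim, spf in summarize(SAMPLE_RUA):
    # A source failing both DKIM and SPF is either unauthenticated
    # legitimate mail (fix its records) or spoofing (let p=reject handle it).
    print(ip, count, dkim, spf)
```

This is the triage loop the reports enable: high-volume sources that fail alignment are usually your own services missing SPF or DKIM entries, while unknown low-volume sources are candidates for enforcement.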
Note: At the end of this article, you’ll find configuration links for well-known ESPs like Google Workspace, Microsoft 365, Zoho Mail, Amazon SES, and SendGrid, helping you avoid common misconfigurations and get aligned with SGE requirements.

If you’re using a dedicated MTA (e.g., Postfix), DKIM must be implemented manually. EasyDMARC’s DKIM Record Generator lets you generate both public and private keys for your server. The private key is stored on your MTA, while the public key must be published in your DNS.

4. DMARC p=reject Rollout

As mentioned above, DMARC reporting is the first and most important step on your DMARC enforcement journey. Always start with a p=none policy and configure RUA reports to be sent to EasyDMARC. Use the report insights to identify and fix SPF and DKIM alignment issues, then gradually move to p=quarantine and finally p=reject once all legitimate email sources have been authenticated. This phased approach ensures full protection against domain spoofing without risking legitimate email delivery.

5. adkim Strict Alignment Check

This strict alignment check is not always applicable, especially if you’re using third-party bulk ESPs, such as SendGrid, that require you to set DKIM at the subdomain level. You can set adkim=s in your DMARC TXT record, or simply enable strict mode in EasyDMARC’s Managed DMARC settings. This ensures that only emails with a DKIM signature that exactly matches your domain pass alignment, adding an extra layer of protection against domain spoofing. Only do this if you are not a bulk sender.

6. Securing Non-Email-Enabled Domains

The purpose of deploying email security to non-email-enabled domains, or parked domains, is to prevent messages from being spoofed from those domains. This requirement remains even if the root-level domain has sp=reject set within its DMARC record.
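The lockdown the framework requires for a parked domain amounts to three DNS records: an SPF record authorizing no senders, a wildcard DKIM record with an empty key, and a strict DMARC reject policy. A small sketch that generates them in bulk; the domain name and report address are placeholder examples:

```python
# Sketch: generate the three lockdown records the framework mandates for
# parked (non-email) domains. The domain and rua address are placeholders.

def parked_domain_records(domain: str, rua: str = "mailto:rua@example.govt.nz"):
    """Return DNS record name -> value pairs that lock down a parked domain."""
    return {
        f"{domain} TXT": "v=spf1 -all",               # no sender is authorized
        f"*._domainkey.{domain} TXT": "v=DKIM1; p=",  # wildcard DKIM, empty public key
        f"_dmarc.{domain} TXT": f"v=DMARC1;p=reject;adkim=s;aspf=s;rua={rua}",
    }

for name, value in parked_domain_records("parked.example.govt.nz").items():
    print(name, "->", value)
```

Run against a list of parked domains, this produces every record needed to meet the checklist item "SPF -all, empty DKIM, DMARC reject strict" for non-email domains.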
Under this new framework, you must bulk import and mark parked domains as “Parked.” Crucially, this requires adjusting SPF to a no-sender record, setting DMARC to p=reject, and ensuring an empty DKIM record is in place:

SPF record: “v=spf1 -all”
Wildcard DKIM record with an empty public key
DMARC record: “v=DMARC1;p=reject;adkim=s;aspf=s;rua=mailto:…”

EasyDMARC allows you to add and label parked domains for free. This is important because it helps you monitor any activity from these domains and ensure they remain protected with a strict DMARC policy of p=reject.

7. Compliance Dashboard

Use EasyDMARC’s Domain Scanner to assess the security posture of each domain with a clear compliance score and risk level. The dashboard highlights configuration gaps and guides remediation steps, helping government agencies stay on track toward full compliance with the SGE Framework.

8. Inbound DMARC Evaluation Enforced

You don’t need to apply any changes if you’re using Google Workspace, Microsoft 365, or other major mailbox providers; most of them already enforce DMARC evaluation on incoming emails. However, some legacy Microsoft 365 setups may still quarantine emails that fail DMARC checks, even when the sending domain has a p=reject policy, instead of rejecting them. This behavior can be adjusted directly from your Microsoft Defender portal. Read more about this in our step-by-step guide on how to set up SPF, DKIM, and DMARC from Microsoft Defender. If you’re using a third-party mail provider that doesn’t enforce a DMARC policy for incoming emails, which is rare, you’ll need to contact their support to request a configuration change.

9. Data Loss Prevention Aligned with NZISM

The New Zealand Information Security Manual (NZISM) is the New Zealand Government’s manual on information assurance and information systems security. It includes guidance on data loss prevention (DLP), which must be followed to align with the SGE Framework.
Need Help Setting up SPF and DKIM for Your Email Provider?

Setting up SPF and DKIM for different ESPs often requires specific configurations. Some providers require you to publish SPF and DKIM on a subdomain, while others only require DKIM or have different formatting rules. We’ve simplified all these steps to help you avoid misconfigurations that could delay your DMARC enforcement or, worse, block legitimate emails from reaching your recipients.

Below you’ll find comprehensive setup guides for Google Workspace, Microsoft 365, Zoho Mail, Amazon SES, and SendGrid. You can also explore our full blog section that covers setup instructions for many other well-known ESPs. Remember, all this information is reflected in your DMARC aggregate reports. These reports give you live visibility into your outgoing email ecosystem, helping you analyze and fix any issues specific to a given provider.

Here are our step-by-step guides for the most common platforms: Google Workspace and Microsoft 365. These guides will help ensure your DNS records are configured correctly as part of the Secure Government Email (SGE) Framework rollout.

Meet New Government Email Security Standards With EasyDMARC

New Zealand’s SGE Framework sets a clear path for government agencies to enhance their email security by October 2025. With EasyDMARC, you can meet these technical requirements efficiently and with confidence. From protocol setup to continuous monitoring and compliance tracking, EasyDMARC streamlines the entire process, ensuring strong protection against spoofing, phishing, and data loss while simplifying your transition from SEEMail.
  • Reclaiming Control: Digital Sovereignty in 2025

    Sovereignty has mattered since the invention of the nation state—defined by borders, laws, and taxes that apply within and without. While many have tried to define it, the core idea remains: nations or jurisdictions seek to stay in control, usually to the benefit of those within their borders.
    Digital sovereignty is a relatively new concept, also difficult to define but straightforward to understand. Data and applications don’t understand borders unless they are specified in policy terms, as coded into the infrastructure.
    The World Wide Web had no such restrictions at its inception. Communitarian groups such as the Electronic Frontier Foundation, service providers and hyperscalers, non-profits and businesses all embraced a model that suggested data would look after itself.
    But data won’t look after itself, for several reasons. First, data is massively out of control. We generate more of it all the time, and for at least two or three decades, most organizations haven’t fully understood their data assets. This creates inefficiency and risk—not least, widespread vulnerability to cyberattack.
    Risk is probability times impact—and right now, the probabilities have shot up. Invasions, tariffs, political tensions, and more have brought new urgency. This time last year, the idea of switching off another country’s IT systems was not on the radar. Now we’re seeing it happen—including the U.S. government blocking access to services overseas.
    Digital sovereignty isn’t just a European concern, though it is often framed as such. In South America for example, I am told that sovereignty is leading conversations with hyperscalers; in African countries, it is being stipulated in supplier agreements. Many jurisdictions are watching, assessing, and reviewing their stance on digital sovereignty.
    As the adage goes: a crisis is a problem with no time left to solve it. Digital sovereignty was a problem in waiting—but now it’s urgent. It’s gone from being an abstract ‘right to sovereignty’ to becoming a clear and present issue, in government thinking, corporate risk and how we architect and operate our computer systems.
    What does the digital sovereignty landscape look like today?
    Much has changed since this time last year. Unknowns remain, but much of what was unclear is now starting to solidify. Terminology is clearer – for example, talking about classification and localisation rather than generic concepts.
    We’re seeing a shift from theory to practice. Governments and organizations are putting policies in place that simply didn’t exist before. For example, some countries are seeing “in-country” as a primary goal, whereas others are adopting a risk-based approach based on trusted locales.
    We’re also seeing a shift in risk priorities. From a risk standpoint, the classic triad of confidentiality, integrity, and availability are at the heart of the digital sovereignty conversation. Historically, the focus has been much more on confidentiality, driven by concerns about the US Cloud Act: essentially, can foreign governments see my data?
    This year however, availability is rising in prominence, due to geopolitics and very real concerns about data accessibility in third countries. Integrity is being talked about less from a sovereignty perspective, but is no less important as a cybercrime target—ransomware and fraud being two clear and present risks.
    Thinking more broadly, digital sovereignty is not just about data, or even intellectual property, but also the brain drain. Countries don’t want all their brightest young technologists leaving university only to end up in California or some other, more attractive country. They want to keep talent at home and innovate locally, to the benefit of their own GDP.
    How Are Cloud Providers Responding?
    Hyperscalers are playing catch-up, still looking for ways to satisfy the letter of the law whilst ignoring its spirit. It’s not enough for Microsoft or AWS to say they will do everything they can to protect a jurisdiction’s data if they are already legally obliged to do the opposite. Legislation, in this case US legislation, calls the shots—and we all know just how fragile this is right now.
    We see hyperscaler progress where they offer technology to be locally managed by a third party, rather than themselves. For example, Google’s partnership with Thales, or Microsoft with Orange, both in France. However, these are point solutions, not part of a general standard. Meanwhile, AWS’ recent announcement about creating a local entity doesn’t solve for the problem of US over-reach, which remains a core issue.
    Non-hyperscaler providers and software vendors have an increasingly significant play: Oracle and HPE offer solutions that can be deployed and managed locally for example; Broadcom/VMware and Red Hat provide technologies that locally situated, private cloud providers can host. Digital sovereignty is thus a catalyst for a redistribution of “cloud spend” across a broader pool of players.
    What Can Enterprise Organizations Do About It?
    First, see digital sovereignty as a core element of data and application strategy. For a nation, sovereignty means having solid borders, control over IP, GDP, and so on. That’s the goal for corporations as well—control, self-determination, and resilience.
    If sovereignty isn’t seen as an element of strategy, it gets pushed down into the implementation layer, leading to inefficient architectures and duplicated effort. Far better to decide up front which data, applications and processes need to be treated as sovereign, and to define an architecture to support that.
    GIGAOM.COM
    Reclaiming Control: Digital Sovereignty in 2025
    Sovereignty has mattered since the invention of the nation state—defined by borders, laws, and taxes that apply within and without. While many have tried to define it, the core idea remains: nations or jurisdictions seek to stay in control, usually to the benefit of those within their borders.
    Digital sovereignty is a relatively new concept, also difficult to define but straightforward to understand. Data and applications don’t understand borders unless those borders are specified in policy terms and coded into the infrastructure. The World Wide Web had no such restrictions at its inception. Communitarian groups such as the Electronic Frontier Foundation, service providers and hyperscalers, non-profits and businesses all embraced a model that suggested data would look after itself.
    But data won’t look after itself, for several reasons. First, data is massively out of control. We generate more of it all the time, and for at least two or three decades (according to historical surveys I’ve run), most organizations haven’t fully understood their data assets. This creates inefficiency and risk—not least, widespread vulnerability to cyberattack.
    Risk is probability times impact—and right now, the probabilities have shot up. Invasions, tariffs, political tensions, and more have brought new urgency. This time last year, the idea of switching off another country’s IT systems was not on the radar. Now we’re seeing it happen—including the U.S. government blocking access to services overseas.
    Digital sovereignty isn’t just a European concern, though it is often framed as such. In South America, for example, I am told that sovereignty is leading conversations with hyperscalers; in African countries, it is being stipulated in supplier agreements. Many jurisdictions are watching, assessing, and reviewing their stance on digital sovereignty.
    As the adage goes: a crisis is a problem with no time left to solve it. Digital sovereignty was a problem in waiting—but now it’s urgent. It’s gone from being an abstract ‘right to sovereignty’ to becoming a clear and present issue in government thinking, corporate risk, and how we architect and operate our computer systems.
    What does the digital sovereignty landscape look like today?
    Much has changed since this time last year. Unknowns remain, but much of what was unclear is now starting to solidify. Terminology is clearer – for example, talking about classification and localisation rather than generic concepts.
    We’re seeing a shift from theory to practice. Governments and organizations are putting policies in place that simply didn’t exist before. For example, some countries see “in-country” as a primary goal, whereas others (the UK included) are adopting a risk-based approach based on trusted locales.
    We’re also seeing a shift in risk priorities. From a risk standpoint, the classic triad of confidentiality, integrity, and availability is at the heart of the digital sovereignty conversation. Historically, the focus has been much more on confidentiality, driven by concerns about the US CLOUD Act: essentially, can foreign governments see my data? This year, however, availability is rising in prominence, due to geopolitics and very real concerns about data accessibility in third countries. Integrity is being talked about less from a sovereignty perspective, but is no less important as a cybercrime target—ransomware and fraud being two clear and present risks.
    Thinking more broadly, digital sovereignty is not just about data, or even intellectual property, but also the brain drain. Countries don’t want all their brightest young technologists leaving university only to end up in California or some other, more attractive country. They want to keep talent at home and innovate locally, to the benefit of their own GDP.
    How Are Cloud Providers Responding?
    Hyperscalers are playing catch-up, still looking for ways to satisfy the letter of the law whilst ignoring (in the French sense) its spirit. It’s not enough for Microsoft or AWS to say they will do everything they can to protect a jurisdiction’s data if they are already legally obliged to do the opposite. Legislation, in this case US legislation, calls the shots—and we all know just how fragile this is right now.
    We see hyperscaler progress where they offer technology to be locally managed by a third party rather than themselves—for example, Google’s partnership with Thales, or Microsoft’s with Orange, both in France (Microsoft has a similar arrangement in Germany). However, these are point solutions, not part of a general standard. Meanwhile, AWS’s recent announcement about creating a local entity doesn’t solve the problem of US over-reach, which remains a core issue.
    Non-hyperscaler providers and software vendors have an increasingly significant play: Oracle and HPE offer solutions that can be deployed and managed locally, for example; Broadcom/VMware and Red Hat provide technologies that locally situated, private cloud providers can host. Digital sovereignty is thus a catalyst for a redistribution of “cloud spend” across a broader pool of players.
    What Can Enterprise Organizations Do About It?
    First, see digital sovereignty as a core element of data and application strategy. For a nation, sovereignty means having solid borders and control over IP, GDP, and so on. That’s the goal for corporations as well—control, self-determination, and resilience. If sovereignty isn’t seen as an element of strategy, it gets pushed down into the implementation layer, leading to inefficient architectures and duplicated effort. Far better to decide up front what data, applications, and processes need to be treated as sovereign, and to define an architecture to support that.
    This sets the scene for making informed provisioning decisions. Your organization may have made some big bets on key vendors or hyperscalers, but multi-platform thinking increasingly dominates: multiple public and private cloud providers, with integrated operations and management. Sovereign cloud becomes one element of a well-structured multi-platform architecture.
    It is not cost-neutral to deliver on sovereignty, but the overall business value should be tangible. A sovereignty initiative should bring clear advantages, not just for itself, but through the benefits that come with better control, visibility, and efficiency. Knowing where your data is, understanding which data matters, and managing it efficiently so you’re not duplicating or fragmenting it across systems—these are valuable outcomes. In addition, ignoring these questions can lead to non-compliance or be outright illegal. Even if we don’t use terms like ‘sovereignty’, organizations need a handle on their information estate.
    Organizations shouldn’t be thinking that everything cloud-based needs to be sovereign; they should be building strategies and policies based on data classification, prioritization, and risk. Build that picture and you can solve for the highest-priority items first—the data with the strongest classification and greatest risk. That process alone takes care of 80–90% of the problem space, avoiding making sovereignty another problem whilst solving nothing.
    Where to start? Look after your own organization first
    Sovereignty and systems thinking go hand in hand: it’s all about scope. In enterprise architecture or business design, the biggest mistake is boiling the ocean—trying to solve everything at once. Instead, focus on your own sovereignty. Worry about your own organization, your own jurisdiction. Know where your own borders are. Understand who your customers are and what their requirements are. For example, if you’re a manufacturer selling into specific countries—what do those countries require? Solve for that, not for everything else. Don’t try to plan for every possible future scenario.
    Focus on what you have, what you’re responsible for, and what you need to address right now. Classify and prioritise your data assets based on real-world risk. Do that, and you’re already more than halfway toward solving digital sovereignty—with all the efficiency, control, and compliance benefits that come with it.
    Digital sovereignty isn’t just regulatory, but strategic. Organizations that act now can reduce risk, improve operational clarity, and prepare for a future based on trust, compliance, and resilience.
    The post Reclaiming Control: Digital Sovereignty in 2025 appeared first on Gigaom.
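    The classify-and-prioritise step recommended above can be made concrete with a short sketch. The asset names, classification levels, and scores below are hypothetical illustrations (not from GigaOm or any specific framework); the point is simply risk = probability × impact, tackled strongest-classification-first:

```python
# Sketch: score each data asset, then address the most sensitive,
# highest-risk assets first (risk = probability x impact).

# Hypothetical classification weights (higher = more sensitive)
CLASSIFICATION = {"public": 1, "internal": 2, "confidential": 3, "restricted": 4}

# Hypothetical inventory: (asset, classification, breach probability, impact 1-10)
assets = [
    ("marketing site content", "public",       0.30, 1),
    ("employee records",       "confidential", 0.10, 8),
    ("customer PII",           "restricted",   0.15, 10),
    ("internal wiki",          "internal",     0.20, 3),
]

def priority(asset):
    name, cls, probability, impact = asset
    risk = probability * impact           # risk = probability x impact
    return (CLASSIFICATION[cls], risk)    # strongest classification first, then risk

# Highest-priority assets come out first
for name, cls, probability, impact in sorted(assets, key=priority, reverse=True):
    print(f"{name}: classification={cls}, risk={probability * impact:.2f}")
```

    A real inventory would come from a data catalogue and an agreed classification scheme, but the ordering logic stays this simple: solving the top of the list addresses most of the problem space.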
  • From Rivals to Partners: What’s Up with the Google and OpenAI Cloud Deal?

    Google and OpenAI struck a cloud computing deal in May, according to a Reuters report.
    The deal surprised the industry as the two are seen as major AI rivals.
    Signs of friction between OpenAI and Microsoft may have also fueled the move.
    The partnership is a win-win. OpenAI gets badly needed computing resources, while Google profits from its multi-billion-dollar investment to boost its cloud computing capacity in 2025.

    In a surprise move, Google and OpenAI inked a deal that will see the AI rivals partnering to address OpenAI’s growing cloud computing needs.
    The story, reported by Reuters, cited anonymous sources saying that the deal had been discussed for months and finalized in May. Around that time, OpenAI was struggling to keep up with demand as its number of weekly active users and business users grew in Q1 2025. There’s also speculation of friction between OpenAI and its biggest investor, Microsoft.
    Why the Deal Surprised the Tech Industry
    The rivalry between the two companies hardly needs an introduction. When OpenAI’s ChatGPT launched in November 2022, it posed a huge threat to Google that triggered a code red within the search giant and cloud services provider.
    Since then, Google has launched Bard to compete with OpenAI head-on. However, it had to play catch-up with OpenAI’s more advanced ChatGPT chatbot. This led to numerous issues with Bard, with critics referring to it as a half-baked product.

    A post on X in February 2023 showed the Bard AI chatbot erroneously stating that the James Webb Space Telescope took the first picture of an exoplanet. It was, in fact, the European Southern Observatory’s Very Large Telescope that did this in 2004. Google’s parent company Alphabet lost billions off its market value within 24 hours as a result.
    Two years on, Gemini has made significant strides in terms of accuracy, quoting sources, and depth of information, but it is still prone to hallucinations from time to time. You can see examples posted on social media, like it telling a user to make spicy spaghetti with gasoline, or thinking it’s still 2024.

    With the entire industry shifting towards more AI integrations, Google went ahead and integrated its AI suite into Search via AI Overviews. It then doubled down on this integration with AI Mode, an experimental feature that lets you perform AI-powered searches by typing in a question, uploading a photo, or using your voice.
    In the future, AI Mode in Google Search could be a viable competitor to ChatGPT—unless, of course, Google decides to bin it along with many of its previous products. Given the scope of the investment and Gemini’s significant improvement, though, we doubt AI + Search will be axed.
    It’s a Win-Win for Google and OpenAI—Not So Much for Microsoft?
    In the business world, money and the desire for expansion can break even the biggest rivalries, and the one between these two tech giants is no exception.
    Part of the thaw can be attributed to OpenAI’s relationship with Microsoft. Although the Redmond, Washington-based company has invested billions in OpenAI and has the resources to meet the latter’s cloud computing needs, their partnership hasn’t always been rosy.
    Some would say it began when OpenAI CEO Sam Altman was briefly ousted in November 2023, which put a strain on the ‘best bromance in tech’ between him and Microsoft CEO Satya Nadella. Then last year, Microsoft added OpenAI to its list of competitors in the AI space before eventually losing its status as OpenAI’s exclusive cloud provider in January 2025.
    If that wasn’t enough, there’s also the matter of the two companies’ goal of achieving artificial general intelligence (AGI). With AGI defined in their agreement as OpenAI developing AI systems that generate billions in profits, reaching it means Microsoft will lose access to OpenAI’s technology. And with the company behind ChatGPT expecting to triple its revenue in 2025 compared to the previous year, this could happen sooner rather than later.
    While OpenAI already has deals with Microsoft, Oracle, and CoreWeave to provide it with cloud services and access to infrastructure, it needs more, and soon, as the company has seen massive growth in the past few months.
    In February, OpenAI announced that it had over 400M weekly active users, up from 300M in December 2024. Meanwhile, the number of its business users who use ChatGPT Enterprise, ChatGPT Team, and ChatGPT Edu products also jumped from 2M in February to 3M in March.
    The good news is that Google is more than ready to deliver. Its parent company has earmarked billions toward its investments in AI this year, which includes boosting its cloud computing capacity.

    In April, Google launched its 7th-generation tensor processing unit (TPU), called Ironwood, which has been designed specifically for inference. According to the company, the new TPU will help power AI models that will ‘proactively retrieve and generate data to collaboratively deliver insights and answers, not just data.’
    The deal with OpenAI can be seen as a vote of confidence in Google’s cloud computing capability, which competes with the likes of Microsoft Azure and Amazon Web Services. It also expands Google’s vast client list, which includes tech, gaming, entertainment, and retail companies, as well as organizations in the public sector.

    As technology continues to evolve—from the return of 'dumbphones' to faster and sleeker computers—seasoned tech journalist, Cedric Solidon, continues to dedicate himself to writing stories that inform, empower, and connect with readers across all levels of digital literacy.
    With 20 years of professional writing experience, this University of the Philippines Journalism graduate has carved out a niche as a trusted voice in tech media. Whether he's breaking down the latest advancements in cybersecurity or explaining how silicon-carbon batteries can extend your phone’s battery life, his writing remains rooted in clarity, curiosity, and utility.
    Long before he was writing for Techreport, HP, Citrix, SAP, Globe Telecom, CyberGhost VPN, and ExpressVPN, Cedric's love for technology began at home courtesy of a Nintendo Family Computer and a stack of tech magazines.
    Growing up, his days were often filled with sessions of Contra, Bomberman, Red Alert 2, and the criminally underrated Crusader: No Regret. But gaming wasn't his only gateway to tech. 
    He devoured every T3, PCMag, and PC Gamer issue he could get his hands on, often reading them cover to cover. It wasn’t long before he explored the early web in IRC chatrooms, online forums, and fledgling tech blogs, soaking in every byte of knowledge from the late '90s and early 2000s internet boom.
    That fascination with tech didn’t just stick. It evolved into a full-blown calling.
    After graduating with a degree in Journalism, he began his writing career at the dawn of Web 2.0. What started with small editorial roles and freelance gigs soon grew into a full-fledged career.
    He has since collaborated with global tech leaders, lending his voice to content that bridges technical expertise with everyday usability. He’s also written annual reports for Globe Telecom and consumer-friendly guides for VPN companies like CyberGhost and ExpressVPN, empowering readers to understand the importance of digital privacy.
    His versatility spans not just tech journalism but also technical writing. He once worked with a local tech company developing web and mobile apps for logistics firms, crafting documentation and communication materials that brought together user-friendliness with deep technical understanding. That experience sharpened his ability to break down dense, often jargon-heavy material into content that speaks clearly to both developers and decision-makers.
    At the heart of his work lies a simple belief: technology should feel empowering, not intimidating. Even if the likes of smartphones and AI are now commonplace, he understands that there's still a knowledge gap, especially when it comes to hardware or the real-world benefits of new tools. His writing hopes to help close that gap.
    Cedric’s writing style reflects that mission. It’s friendly without being fluffy and informative without being overwhelming. Whether writing for seasoned IT professionals or casual readers curious about the latest gadgets, he focuses on how a piece of technology can improve our lives, boost our productivity, or make our work more efficient. That human-first approach makes his content feel more like a conversation than a technical manual.
    As his writing career progresses, his passion for tech journalism remains as strong as ever. With the growing need for accessible, responsible tech communication, he sees his role not just as a journalist but as a guide who helps readers navigate a digital world that’s often as confusing as it is exciting.
    From reviewing the latest devices to unpacking global tech trends, Cedric isn’t just reporting on the future; he’s helping to write it.

    View all articles by Cedric Solidon

    Our editorial process

    The Tech Report editorial policy is centered on providing helpful, accurate content that offers real value to our readers. We only work with experienced writers who have specific knowledge in the topics they cover, including the latest developments in technology, online privacy, cryptocurrencies, software, and more. Our editorial policy ensures that each topic is researched and curated by our in-house editors. We maintain rigorous journalistic standards, and every article is 100% written by real authors.
    TECHREPORT.COM
    From Rivals to Partners: What’s Up with the Google and OpenAI Cloud Deal?
    Google and OpenAI struck a cloud computing deal in May, according to a Reuters report. The deal surprised the industry, as the two are seen as major AI rivals. Signs of friction between OpenAI and Microsoft may have also fueled the move. The partnership is a win-win: OpenAI gets badly needed computing resources, while Google profits from its $75B investment to boost its cloud computing capacity in 2025.
    In a surprise move, Google and OpenAI inked a deal that will see the AI rivals partnering to address OpenAI’s growing cloud computing needs. The story, reported by Reuters, cited anonymous sources saying that the deal had been discussed for months and finalized in May. Around that time, OpenAI was struggling to keep up with demand as its numbers of weekly active users and business users grew in Q1 2025. There’s also speculation of friction between OpenAI and its biggest investor, Microsoft.
    Why the Deal Surprised the Tech Industry
    The rivalry between the two companies hardly needs an introduction. When OpenAI’s ChatGPT launched in November 2022, it posed a huge threat to Google, triggering a ‘code red’ within the search giant and cloud services provider. Since then, Google has launched Bard (now known as Gemini) to compete with OpenAI head-on. However, it had to play catch-up with OpenAI’s more advanced ChatGPT AI chatbot. This led to numerous issues with Bard, with critics referring to it as a half-baked product. A post on X in February 2023 showed the Bard AI chatbot erroneously stating that the James Webb Space Telescope took the first picture of an exoplanet. It was, in fact, the European Southern Observatory’s Very Large Telescope that did this in 2004. Google’s parent company Alphabet lost $100B off its market value within 24 hours as a result. Two years on, Gemini has made significant strides in terms of accuracy, quoting sources, and depth of information, but it is still prone to hallucinations from time to time.
You can see examples of these posted on social media, like telling a user to make spicy spaghetti with gasoline or the AI thinking it’s still 2024. With the entire industry shifting towards more AI integrations, Google went ahead and integrated its AI suite into Search via AI Overviews. It then doubled down on this integration with AI Mode, an experimental feature that lets you perform AI-powered searches by typing in a question, uploading a photo, or using your voice. In the future, AI Mode from Google Search could be a viable competitor to ChatGPT—unless, of course, Google decides to bin it along with many of its previous products. Given the scope of the investment and Gemini’s significant improvement, we doubt AI + Search will be axed.
It’s a Win-Win for Google and OpenAI—Not So Much for Microsoft?
In the business world, money and the desire for expansion can break even the biggest rivalries, and the one between these two tech giants is no exception. Partly, the thaw could be attributed to OpenAI’s relationship with Microsoft. Although the Redmond, Washington-based company has invested billions in OpenAI and has the resources to meet the latter’s cloud computing needs, their partnership hasn’t always been rosy. Some would say the trouble began when OpenAI CEO Sam Altman was briefly ousted in November 2023, which put a strain on the ‘best bromance in tech’ between him and Microsoft CEO Satya Nadella. Then last year, Microsoft added OpenAI to its list of competitors in the AI space before eventually losing its status as OpenAI’s exclusive cloud provider in January 2025. If that wasn’t enough, there’s also the matter of the two companies’ definition of artificial general intelligence (AGI). Their agreement defines AGI as the point when OpenAI develops AI systems that generate $100B in profits; once that threshold is reached, Microsoft will lose access to OpenAI’s technology.
With the company behind ChatGPT expecting to triple its 2025 revenue to $12.7B from $3.7B the previous year, this could happen sooner rather than later. While OpenAI already has deals with Microsoft, Oracle, and CoreWeave to provide it with cloud services and access to infrastructure, it needs more, and soon, as the company has seen massive growth in the past few months. In February, OpenAI announced that it had over 400M weekly active users, up from 300M in December 2024. Meanwhile, the number of its business users across the ChatGPT Enterprise, ChatGPT Team, and ChatGPT Edu products also jumped from 2M in February to 3M in March.
The good news is that Google is more than ready to deliver. Its parent company has earmarked $75B for its AI investments this year, which includes boosting its cloud computing capacity. In April, Google launched its 7th-generation tensor processing unit (TPU), called Ironwood, which has been designed specifically for inference. According to the company, the new TPU will help power AI models that will ‘proactively retrieve and generate data to collaboratively deliver insights and answers, not just data.’ The deal with OpenAI can be seen as a vote of confidence in Google’s cloud computing capability, which competes with the likes of Microsoft Azure and Amazon Web Services. It also expands Google’s vast client list, which includes tech, gaming, entertainment, and retail companies, as well as organizations in the public sector.
As technology continues to evolve—from the return of 'dumbphones' to faster and sleeker computers—seasoned tech journalist Cedric Solidon continues to dedicate himself to writing stories that inform, empower, and connect with readers across all levels of digital literacy. With 20 years of professional writing experience, this University of the Philippines Journalism graduate has carved out a niche as a trusted voice in tech media.
Whether he's breaking down the latest advancements in cybersecurity or explaining how silicon-carbon batteries can extend your phone’s battery life, his writing remains rooted in clarity, curiosity, and utility. Long before he was writing for Techreport, HP, Citrix, SAP, Globe Telecom, CyberGhost VPN, and ExpressVPN, Cedric's love for technology began at home courtesy of a Nintendo Family Computer and a stack of tech magazines. Growing up, his days were often filled with sessions of Contra, Bomberman, Red Alert 2, and the criminally underrated Crusader: No Regret. But gaming wasn't his only gateway to tech.  He devoured every T3, PCMag, and PC Gamer issue he could get his hands on, often reading them cover to cover. It wasn’t long before he explored the early web in IRC chatrooms, online forums, and fledgling tech blogs, soaking in every byte of knowledge from the late '90s and early 2000s internet boom. That fascination with tech didn’t just stick. It evolved into a full-blown calling. After graduating with a degree in Journalism, he began his writing career at the dawn of Web 2.0. What started with small editorial roles and freelance gigs soon grew into a full-fledged career. He has since collaborated with global tech leaders, lending his voice to content that bridges technical expertise with everyday usability. He’s also written annual reports for Globe Telecom and consumer-friendly guides for VPN companies like CyberGhost and ExpressVPN, empowering readers to understand the importance of digital privacy. His versatility spans not just tech journalism but also technical writing. He once worked with a local tech company developing web and mobile apps for logistics firms, crafting documentation and communication materials that brought together user-friendliness with deep technical understanding. That experience sharpened his ability to break down dense, often jargon-heavy material into content that speaks clearly to both developers and decision-makers. 
At the heart of his work lies a simple belief: technology should feel empowering, not intimidating. Even if the likes of smartphones and AI are now commonplace, he understands that there's still a knowledge gap, especially when it comes to hardware or the real-world benefits of new tools. His writing hopes to help close that gap. Cedric’s writing style reflects that mission. It’s friendly without being fluffy and informative without being overwhelming. Whether writing for seasoned IT professionals or casual readers curious about the latest gadgets, he focuses on how a piece of technology can improve our lives, boost our productivity, or make our work more efficient. That human-first approach makes his content feel more like a conversation than a technical manual. As his writing career progresses, his passion for tech journalism remains as strong as ever. With the growing need for accessible, responsible tech communication, he sees his role not just as a journalist but as a guide who helps readers navigate a digital world that’s often as confusing as it is exciting. From reviewing the latest devices to unpacking global tech trends, Cedric isn’t just reporting on the future; he’s helping to write it.
View all articles by Cedric Solidon
Our editorial process
The Tech Report editorial policy is centered on providing helpful, accurate content that offers real value to our readers. We only work with experienced writers who have specific knowledge in the topics they cover, including the latest developments in technology, online privacy, cryptocurrencies, software, and more. Our editorial policy ensures that each topic is researched and curated by our in-house editors. We maintain rigorous journalistic standards, and every article is 100% written by real authors.
  • Fortnite x Squid Game skins, release date, leaks, and more

    You can trust VideoGamer. Our team of gaming experts spend hours testing and reviewing the latest games, to ensure you're reading the most comprehensive guide possible. Rest assured, all imagery and advice is unique and original. Check out how we test and review games here

    The Squid Game collaboration is the next big thing for Fortnite. Its release date is right around the corner, and Epic Games has already revealed some of the things that will come with it. The iconic South Korean series will bring much more than just cosmetics, and here’s everything you need to know about it.
    In this article, we will cover everything we know about Squid Game skins in Fortnite, as well as the new Creative tools. We will also go over the release date and look at several leaks that have surfaced since the collab announcement.
    Will the Squid Game collaboration bring Fortnite skins?
    According to trusted Fortnite leakers, the Squid Game partnership will introduce new cosmetics, including character skins. At the moment, it’s unknown what these skins will look like, but we should find out more details soon. The Squid Game collab is set to arrive on Friday, June 27, the same day the final season of the series premieres.
    Earlier this month, Epic Games confirmed that the collaboration will bring new UEFN (Unreal Editor for Fortnite) tools. Thanks to this, creators will be able to make Squid Game-themed maps with new items and mechanics.
    The Squid Game collaboration will bring new Fortnite cosmetics. Image by VideoGamer
    Epic has already released a cryptic teaser for the collab which reveals the following: “Red Greens, Square Meals, Affluent Arrivals, June 27th.” With less than two weeks to go until the big update, we expect even more teasers and possibly skin leaks. Considering how popular Squid Game is, this could become one of Fortnite’s most iconic collaborations.
    The next Fortnite update is set to come out on Tuesday, June 17. Since this update will contain Squid Game data, we could see more early leaks in just a few more days.

    Fortnite

    Platform:
    Android, iOS, macOS, Nintendo Switch, PC, PlayStation 4, PlayStation 5, Xbox One, Xbox Series S/X

    Genre:
    Action, Massively Multiplayer, Shooter

    9
    VideoGamer

  • This Marvel Rivals hero became unstoppable after the controversial change


    Season 2.5 of Marvel Rivals has completely shaken up the competitive meta. Ultron, a new Strategist, has performed much better than the community expected, while Peni Parker has held the highest win rate since the season was released. Many other heroes have been changed, but one in particular has surpassed all expectations.
    In this article, we will dive deeper into Marvel Rivals win rates in Season 2.5. The situation has drastically changed over the past month, and some of the balance changes have turned out to be the right move.
    Marvel Rivals Season 2.5 has skyrocketed Jeff’s win rate
    With the release of Season 2.5, NetEase introduced a major rework for Jeff the Land Shark. The fan-favorite Strategist went through groundbreaking changes, which forced players to change their playstyle. Initially, the community believed that Jeff had been nerfed into the ground. However, this Marvel Rivals rework has turned out to be the right move.
    So far this season, Jeff the Land Shark holds a 46.39% win rate. While this is certainly not impressive, it actually surpasses the win rates of Invisible Woman (45.54%) and Luna Snow (45.28%), both of whom are considered amazing healers.
    Jeff’s win rate has been much better in Marvel Rivals Season 2.5. Image by VideoGamer
    In addition to this, Jeff posted the largest win rate increase of any hero compared to Season 2, gaining 4.35 percentage points. The only other hero with a comparable change is Peni Parker, whose win rate has jumped by 4.09 points.
    Jeff is still far from a top-tier character in the game. However, there is no denying that he’s now a viable pick who can keep his teammates alive while also eliminating the enemy team. With Marvel Rivals Season 3 dropping in a few weeks, we expect even more balance changes to come out. While NetEase may buff Jeff even more, he seems to have reached a balanced “sweet spot” that doesn’t require immediate adjustments.
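    For anyone curious about the math, the quoted increases are percentage-point changes, so the Season 2 baselines can be back-computed from the reported numbers. A minimal sketch (the baseline below is derived purely from the article's figures, not separately sourced):

```python
# Back-compute the Season 2 win rate implied by the Season 2.5 figure
# and the reported percentage-point gain (values from the article).
reported = {
    "Jeff the Land Shark": {"season_2_5": 46.39, "gain_pp": 4.35},
    "Peni Parker": {"gain_pp": 4.09},  # Season 2.5 rate not given in the article
}

for hero, stats in reported.items():
    if "season_2_5" in stats:
        baseline = round(stats["season_2_5"] - stats["gain_pp"], 2)
        print(f"{hero}: ~{baseline}% in Season 2 -> {stats['season_2_5']}% in Season 2.5")
# -> Jeff the Land Shark: ~42.04% in Season 2 -> 46.39% in Season 2.5
```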

    Marvel Rivals

    Platform:
    macOS, PC, PlayStation 5, Xbox Series S, Xbox Series X

    Genre:
    Fighting, Shooter

CGShares https://cgshares.com