Computer Weekly
Computer Weekly is the leading technology magazine and website for IT professionals in the UK, Europe and Asia-Pacific
Recent Updates
  • WWW.COMPUTERWEEKLY.COM
    Financially-motivated cyber crime remains biggest threat source
    Financially motivated threat actors – including ransomware crews – remain the single biggest source of cyber threat in the world, accounting for 55% of active threat groups tracked during 2024, up two percentage points on 2023 and seven on 2022, demonstrating that cyber crime really does, to a certain extent, pay. At least, this is according to Google Cloud’s Mandiant, which has this week released its latest M-Trends report, an annual deep dive into the cyber security world.

    The dominance of cyber crime is not in and of itself a surprise, and according to Mandiant, cyber criminals are becoming a more complex, diverse and tooled-up threat in the process. “Cyber threats continue to trend towards greater complexity and, as ever, are impacting a diverse set of targeted industries,” said Mandiant Consulting EMEA managing director, Stuart McKenzie. “Financially motivated attacks are still the leading category. While ransomware, data theft and multifaceted extortion are and will continue to be significant global cybercrime concerns, we are also tracking the rise in the adoption of infostealer malware and the developing exploitation of Web3 technologies, including cryptocurrencies.”

    McKenzie added: “The increasing sophistication and automation offered by artificial intelligence are further exacerbating these threats by enabling more targeted, evasive and widespread attacks. Organisations need to proactively gather insights to stay ahead of these trends and implement processes and tools to continuously collect and analyse threat intelligence from diverse sources.”

    The most common means for threat actors to access their victim environments last year was by exploiting disclosed vulnerabilities – 33% of intrusions began in this way worldwide, and 39% in EMEA. In second place was the use of legitimate credentials obtained by deception or theft, seen in 16% of instances, followed by email phishing in 14% of incidents, web compromises in 9%, and revisiting prior compromises in 8%. The landscape in EMEA differed slightly from this, with email phishing opening the doors to 15% of cyber attacks, and brute-force attacks representing 10%.

    Once ensconced within their target environments and able to get to work, threat actors took a global average of 11 days to establish the lay of the land, conduct lateral movement and line up their final coup de grâce. This period, known in the security world as dwell time, was up approximately 24 hours on 2023, but down significantly on 2022, when cyber criminals hung out for an average of 16 days. Anecdotal evidence suggests that technological factors including, possibly, the adoption of AI by cyber ne’er-do-wells, may have something to do with this drop. Interestingly, median dwell times in EMEA were significantly higher than the worldwide figure, clocking in at 27 days, five days longer than in 2022.

    When threat actors were discovered inside someone’s IT estate, the victims tended to learn about it from an external source – such as an ethical hacker, a penetration testing or red teaming exercise, a threat intelligence organisation like Mandiant, or in many instances an actual ransomware gang – in 57% of cases. The remaining 43% were discovered internally, by in-house security teams and the like. The EMEA figures differed little from this.
    Nation-state threat actors, or advanced persistent threat (APT) groups, create a lot of noise and generate a lot of attention in the cyber security world, by dint of the lingering romance associated with spycraft and, in more practical terms, the fractious global geopolitical environment. However, compared with their cyber criminal counterparts, they represent just 8% of threat activity, which is actually a couple of percentage points lower than it was two years ago.

    Mandiant tracked four active APT groups in 2024, and 297 uncategorised (UNC) groups – meaning not enough information is available to make a firm bet on what they are up to, so this could include potential APTs. Indeed, there is significant overlap in this regard, and Mandiant has on occasion upgraded some groups to fully fledged APTs – such as Sandworm, which now goes by APT44 in its threat actor classification scheme.

    APT44 is one of the four active APTs observed in 2024. Infamous for its attacks on Ukrainian infrastructure in support of Russia’s invasion, APT44 has long supported the Kremlin’s geopolitical goals and was involved in some of the largest and most devastating cyber attacks to date, including the NotPetya incident. Also newly designated in 2024 was APT45, operating on behalf of the North Korean regime and described by Mandiant as a “moderately sophisticated” operator active since about 2009.

    Read more about current security trends:
    • The growth of AI is proving a double-edged sword for API security, presenting opportunities for defenders to enhance their resilience, but also more risks from AI-powered attacks, according to a report.
    • Many businesses around the world are taking the decision to alter their supplier mix in the face of tariff uncertainty, but in doing so are creating more cyber risks for themselves.
    • As directors increasingly recognise the threats posed by increasingly sophisticated, AI-driven cyber attacks, risks are being mitigated by changes in physical infrastructure networks, research finds.
  • WWW.COMPUTERWEEKLY.COM
    Rethink authentication to remove the burden on users
    Attackers exploit human nature, making authentication a prime target. The Snowflake data breach is a clear example – hackers used stolen customer credentials, many belonging to accounts that lacked multi-factor authentication (MFA), to breach several customer accounts, steal sensitive data and reportedly extort dozens of companies. This incident highlights how one seemingly small compromised credential can have severe consequences.

    Phishing scams, credential stuffing and account takeovers all succeed because authentication still depends on users making security decisions. But no amount of security training can completely stop people from being tricked into handing over their credentials, downloading malware that steals login information, or reusing passwords that can be easily exploited. The problem isn’t the user; it’s the system that requires them to be the last line of defence. With agentic AI set to introduce a surge of non-human identities (NHIs) – bringing an added layer of complexity to an already complicated IT environment – enterprises need to rethink authentication, removing users from the process as much, and as soon, as possible.

    The explosion of cloud applications, systems and data has made identity security more complex and critical than ever before. Today, the average enterprise manages multiple cloud environments and around 1,000 applications, creating a highly fragmented landscape, which attackers are actively capitalising on. In fact, IBM’s 2025 Threat Intelligence Index found that most of the cyber attacks investigated last year were caused by cyber criminals using stolen employee credentials to breach corporate networks.

    With AI-driven attacks set to make this problem even worse, identity abuse shows no signs of slowing down. Large language models (LLMs) can automate spear-phishing campaigns and scrape billions of exposed credentials to fuel automated identity attacks. With AI enabling attackers to scale their tactics, the transition away from credential-based security must become a priority for businesses.

    The future of secure modern authentication requires reducing the user burden in the identity paradigm by moving away from passwords and knowledge-based authentication. Passwordless authentication based on the FIDO (Fast Identity Online) standard replaces traditional passwords with cryptographic keys bound to a user’s account on an application or website. Instead of choosing and remembering a password, users authenticate with biometrics or a hardware-backed credential, typically provided by the device (laptop or mobile) and its operating system. These credentials (passkeys) are protected by operating systems, browsers and password managers, significantly reducing the risk of phishing attacks and stolen credentials.

    A modern way to authenticate, passkeys are phishing-resistant, offer a better user experience and improve security posture. While not a novel concept, passwordless has been slow to gain traction because of perceived complexity and a lack of clear migration paths. However, the FIDO Alliance announced new resources in late 2024 that are set to help accelerate the adoption of passkeys by making them easier for organisations and consumers to use. For example, FIDO’s new proposed specifications enable organisations to securely move passkeys and other credentials from one provider to another. This helps provide flexibility to organisations by removing vendor lock-in.
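    To make the passkey flow concrete, here is a minimal sketch of browser-side passkey registration using the standard WebAuthn API (navigator.credentials.create), which underpins FIDO passwordless sign-in. The relying party “example.com”, the user details and the locally generated challenge are illustrative assumptions, not a production pattern – in a real deployment the challenge comes from, and the result is verified by, the server.

```typescript
// Minimal sketch of passkey (WebAuthn) registration in the browser.
// The relying party ID, user data and locally generated challenge are
// illustrative placeholders; a real deployment fetches a random,
// single-use challenge from its server and verifies the result there.

async function registerPasskey(): Promise<void> {
  const options: CredentialCreationOptions = {
    publicKey: {
      // In production this challenge must be issued by the server.
      challenge: crypto.getRandomValues(new Uint8Array(32)),
      rp: { name: "Example Corp", id: "example.com" }, // hypothetical relying party
      user: {
        id: new TextEncoder().encode("user-1234"),     // hypothetical user handle
        name: "alice@example.com",
        displayName: "Alice",
      },
      // Request an ECDSA P-256 key (COSE alg -7), the most widely supported.
      pubKeyCredParams: [{ type: "public-key", alg: -7 }],
      authenticatorSelection: {
        residentKey: "required",      // store a discoverable passkey
        userVerification: "required", // biometric or device PIN
      },
    },
  };

  // The browser and OS handle the biometric prompt and key storage;
  // no shared secret or password ever leaves the device.
  const credential = await navigator.credentials.create(options);
  console.log("New passkey created:", credential);
}
```

    Because the resulting private key never leaves the device’s authenticator and the browser scopes it to the registered origin, a lookalike phishing domain cannot even request it – which is precisely the property that makes passkeys phishing-resistant.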
    Digital credentials are another technology that helps remove the burden of security decisions from users. While passwordless authentication provides a secure way to access resources, digital credentials (sometimes referred to as verifiable credentials) provide a secure way to share private data. Digital credentials – such as digital employee badges or mobile driver’s licences – allow organisations to validate users without exposing unnecessary or sensitive personal data.

    For example, a digital driver’s licence lets users prove their age for restricted purchases without revealing unnecessary personal information like their home address or even their actual birthday (the sketch at the end of this article illustrates the idea). Similarly, digital paystubs allow users to confirm salary requirements for a loan without disclosing their actual salary. This solution also helps put the power of data sharing back into users’ hands – allowing them to choose what type of information is provided, to whom, and when.

    Read more about IAM:
    • IAM is critical to an organisation's data security posture, and its role in regulatory compliance is just as crucial.
    • Does your IAM program need OAuth or OpenID Connect? Or maybe both? Let's look at the various standards and protocols that make identity management function.
    • If it is deployed correctly, identity and access management is among the plethora of techniques that can help to secure enterprise IT.

    The move towards passwordless and digital credentials is not just about stopping today’s attackers – it’s about preparing for what’s next.

    • AI-powered attacks: Attackers are already using generative AI to create phishing campaigns that are nearly as effective as human-generated ones, automate social engineering at scale, and bypass traditional security controls. Passwordless eliminates one of the most common attack vectors – phishable credentials – making AI-driven attacks much harder to execute.
    • Non-human identities: As agentic AI advances and takes on more roles in the enterprise – whether in software design or IT automation – identity security must evolve in tandem. Digital credentials allow organisations to authenticate NHIs with the same level of cryptographic security as human users, ensuring that AI agents interacting with corporate systems are verifiable and authorised.

    Organisations must start preparing now for what lies ahead. While passwordless and digital credentials are not the only steps that should be taken to combat the surge in identity attacks, by deploying these technologies organisations can modernise a strained model – removing security decisions from users, enhancing the user experience and ultimately helping IAM take back its role as gatekeeper.

    Patrick Wardrop is executive director of product, engineering and design for the Verify IAM product portfolio at IBM Software.
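    As a toy illustration of the selective-disclosure idea referenced above, the sketch below models a digital driver’s licence as a plain object and answers an “over 18” check without revealing the birth date or address. The types, the issuer name and the presentAgeOver18 helper are invented for this example; real schemes use standards such as W3C Verifiable Credentials or the ISO mobile driving licence, with cryptographic proofs rather than bare objects.

```typescript
// Toy model of selective disclosure with a digital credential.
// Everything here is illustrative; production systems rely on
// issuer-signed credentials and cryptographic proofs, not plain objects.

interface DriversLicenceCredential {
  issuer: string;
  subject: string;
  birthDate: string;   // ISO date held by the wallet, never shared directly
  homeAddress: string; // also private to the holder
}

interface AgePresentation {
  issuer: string;
  claim: "age_over_18";
  value: boolean;      // the only fact the verifier learns
}

// Hypothetical wallet-side helper: answer an age predicate without
// disclosing the underlying birth date or any other attribute.
function presentAgeOver18(
  cred: DriversLicenceCredential,
  today: Date = new Date()
): AgePresentation {
  const birth = new Date(cred.birthDate);
  const cutoff = new Date(today.getFullYear() - 18, today.getMonth(), today.getDate());
  return { issuer: cred.issuer, claim: "age_over_18", value: birth <= cutoff };
}

const licence: DriversLicenceCredential = {
  issuer: "dvla.example",     // hypothetical issuing authority
  subject: "did:example:alice",
  birthDate: "1990-06-15",
  homeAddress: "1 Hidden Lane",
};

// The retailer sees only { issuer, claim, value } – no address, no birthday.
console.log(presentAgeOver18(licence));
```

    In a real deployment, the presentation would carry a cryptographic proof that the predicate was derived from an unaltered, issuer-signed credential – which is what lets the verifier trust the answer without ever seeing the underlying data.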
  • WWW.COMPUTERWEEKLY.COM
    Hitachi Vantara: VSP One leads revamped storage portfolio
    In this storage supplier profile, we look at Hitachi Vantara, which is a small part of a very big organisation. Since we last looked at Hitachi Vantara, its storage portfolio has undergone something of a revamp, based around its VSP One family, which offers block, file and object storage – with performance profiles that range from NVMe flash to HDD capacity – available across on-premise, cloud and even mainframe environments. All of this is available via as-a-service procurement models, and the company also offers full-stack IT including compute and – via partnerships – networking, with containerisation for cloud-native deployments.

    Hitachi Vantara is the storage, data and analytics division of the giant Hitachi group, which as a corporation is ranked 196th in the Forbes Global 500 as of last year. In recent times, Hitachi has declined in this ranking: it was 38th in 2012, 78th in 2014 and 106th in 2020. Formed out of the Hitachi Data Systems storage group, Vantara was created in 2017 upon a merger with the group’s Pentaho business intelligence operation and the Hitachi Insight Group (IT services and consulting). Hitachi Vantara is a small part of the parent Hitachi corporation, whose consolidated revenues for fiscal year 2020 were $70.72bn, with subsidiaries numbered in the hundreds and just under 270,000 employees worldwide in 2024. Its name was chosen for being “suggestive”, according to CEO Brian Householder in 2017, and is meant to evoke “advantage”, a “vantage point” and “virtualisation”.

    In Hitachi’s results for 2023, Digital Systems & Services revenues were 2,599 billion yen ($18.4bn), a rise of 4% year-on-year. However, Vantara storage appears to be a relatively small part of Digital Systems & Services. IDC reckoned the company’s external storage system revenues for 2023 at $1.55bn, a market share of 4.9%. That was a small increase on 2022, in which it had storage array revenue and share of $1.42bn and 4.4%. That puts Hitachi Vantara seventh in the IDC ranking of storage array players, level with IBM, with only “others” below them.

    Hitachi Vantara’s flagship storage platform is Virtual Storage Platform (VSP) One. VSP One – launched in late 2024 – offers a single data fabric across on-premise and cloud, with all-QLC and object storage products added at that time.

    Top of the line in block storage is the VSP 5000 series, which comes as the all-flash 5200 and 5600, with an H-suffix variant of each that indicates the possibility of spinning disk HDD capacity. Per node, each offers 65,280 LUNs at 256TB per LUN and a maximum of 1,024 snapshots per LUN. All of them allow for Fibre Channel, iSCSI and Ficon mainframe connectivity. The 5200 offers capacity up to just below 300PB. NVMe SCM (storage class memory) can be fitted. The possible ratio of NVMe flash to SAS HDDs is about 1:8, with the latter going up to drive sizes of 18TB. The 5600 model gains a much higher drive and port count, and throughput is also very much improved over the 5200 (312GBps vs 52GBps). VSP block storage arrays run a common operating system, Storage Virtualization OS (SVOS).

    The VSP One Block range is aimed at the midrange, and comes in a 2U form factor with options for TLC and QLC flash storage capacity. It can scale per appliance from 32TB effective to 1.8PB, and to 65 nodes. It also runs SVOS – which can manage other suppliers’ arrays – and can run file and S3 object, too.
    Read more on storage suppliers:
    • Huawei rises in the storage ranks despite sanctions and tariffs: Huawei has leapfrogged HPE in revenue and market share and broadened its storage offer towards AI, the cloud and as-a-service despite sanctions and tariffs.
    • HPE storage battles hard and smart in challenging market: We look at HPE, which has slipped in the storage supplier rankings, but brings a full range of AI-era storage over a mature cloud and consumption model offer.

    Vantara’s E series comprises the E590 and E790 – it seems to have lost the E1090 since we last surveyed the company. These also scale to petabytes but lack the mainframe access of the 5000s, and they retain spinning disk HDD capacity.

    VSP One SDS is a software-defined platform aimed at “distributed block applications” that potentially run to petabyte scale. It runs on third-party hardware and in the Amazon Web Services (AWS) cloud. A key idea is that customers have a great deal of flexibility in how they deploy it. SVOS allows for data flow across applications and locations, with Kubernetes APIs to allow containerised applications.

    Hitachi Vantara’s file storage offer comes in the form of the VSP One File series, in which models are given number suffixes, namely 32, 34 and 38. VSP One File 32 is aimed at entry-level and cost-sensitive deployments, while the 34 aims at customers that want all-flash – presumably the 32 uses HDDs – and the 38 is built for all-out performance. Each model is also distinguished by connectivity options – more Ethernet bandwidth, Fibre Channel choices and raw bandwidth – and node count (from 20 to 80). VSP One File arrays also have some S3 object storage capability, but only as a tier for connection to the cloud.

    Object storage is served by VSP One Object, providing S3-native storage that can scale from a few terabytes to 150TB per node in a minimum eight-node cluster. It is aimed at data lake and data warehouse, backup repository and AI/ML use cases.

    Hitachi Content Platform is the company’s unstructured data platform. It comes as a physical or virtual appliance or as storage software, can scale to exabytes, and is available as hybrid (with HDDs and flash, for example) and all-flash. Capacity can be extended from on-premise to the three major clouds – AWS, Azure and GCP – or any S3-compatible cloud.

    Hyper-converged infrastructure comes in the form of hardware appliances in its Hitachi Unified Compute Platform (UCP), which can be all-flash, all-NVMe flash or GPU-equipped. UCP uses VMware vSphere as its hypervisor, with node capacity that can go from a few terabytes to the low 100-plus TB range, with a maximum of 64 nodes.

    Hitachi Vantara storage can deal with workloads that range from entry-level and SME storage to the most demanding enterprise workloads, including transactional processing and AI. In 2024, Hitachi launched its iQ portfolio, which combines Nvidia technologies with Hitachi Vantara storage – in particular Hitachi Content Platform and file storage – and aims to provide reference architectures for AI use cases.

    Pentaho is a key plank of the Vantara brand. It provides AI-based data cataloguing across multiple on-premise and cloud data stores, data lakes, and so on. It allows for metadata tagging, as well as pipeline and workflow tools including ingest and cleanse, with access controls and data protection, plus migration to data warehouses. The company claims 85% of Fortune 500 companies are Hitachi Vantara customers. Hitachi Vantara put hybrid and multi-cloud operations at the heart of its plans when it launched VSP One.
    Using its SVOS storage operating system, it aims to make data available to customers across multiple datacentres and public cloud providers.

    Hitachi Storage Plug-in for Containers provides integration between Hitachi Vantara Virtual Storage Platform One (VSP One) and container orchestrators such as Kubernetes and Red Hat OpenShift. It addresses the complexities of storage management for stateful containerised applications, including provisioning and enabling persistent storage for containers, as well as advanced storage features and data protection (a toy manifest sketch follows at the end of this profile). The company’s Hitachi EverFlex service (see below) also provides containers as a service, centralising management of compute, storage, security and Kubernetes.

    Hitachi’s initial foray into container storage was Hitachi Kubernetes Service, which was built out of IP belonging to Containership in 2019, using CSI drivers to manage persistent volumes directly on Kubernetes nodes.

    Hitachi Vantara’s EverFlex infrastructure as a service allows customers to take advantage of flexible capacity and pay-per-use consumption, with management options that range from self-managed to fully managed by a third party across the full IT stack, which can be on-premise or hybrid cloud. EverFlex Consumption Level allows customers to scale up and down within a pre-agreed capacity range, while Foundation Level adds advanced management, observability and control over the infrastructure. Additional services at Foundation Level include integration capabilities aimed at existing heterogeneous environments, and provision of resources to plug skills gaps or management by Hitachi Vantara, all based on SLAs. Options that include third-party providers, such as Cisco for networking, are also offered.

    The company also offers Hitachi EverFlex Data Protection as a Service, which is based on VSP One infrastructure. Service management works through the cloud-based Hitachi Services Manager portal, with storage capacities that start at 50TB and go up to petabytes. Monthly billing consists of a committed amount and a flexible fee that covers excess usage costs.
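    To ground the container-storage point above, here is a minimal sketch of the kind of Kubernetes PersistentVolumeClaim a stateful application would submit for a CSI plug-in – such as Hitachi’s – to satisfy by provisioning a volume on the backing array. The storage class name “vsp-one-block” is a hypothetical placeholder, not a documented Hitachi identifier, and the manifest is modelled as a typed TypeScript object purely for illustration.

```typescript
// Minimal sketch of a Kubernetes PersistentVolumeClaim that a CSI
// plug-in would satisfy by provisioning a volume on the backing array.
// The storage class name "vsp-one-block" is a hypothetical example;
// the actual class is defined by the cluster administrator.

interface PersistentVolumeClaim {
  apiVersion: "v1";
  kind: "PersistentVolumeClaim";
  metadata: { name: string; namespace: string };
  spec: {
    accessModes: Array<"ReadWriteOnce" | "ReadOnlyMany" | "ReadWriteMany">;
    storageClassName: string;
    resources: { requests: { storage: string } };
  };
}

const claim: PersistentVolumeClaim = {
  apiVersion: "v1",
  kind: "PersistentVolumeClaim",
  metadata: { name: "orders-db-data", namespace: "prod" },
  spec: {
    accessModes: ["ReadWriteOnce"],      // one node mounts it read-write
    storageClassName: "vsp-one-block",   // hypothetical CSI-backed class
    resources: { requests: { storage: "100Gi" } },
  },
};

// Serialised and applied with kubectl, this is all the application
// declares; the CSI driver handles volume creation, attachment and
// mounting on the array side behind the scenes.
console.log(JSON.stringify(claim, null, 2));
```

    The appeal of the CSI model is that the application states only what it needs – size, access mode, class – while the array-side mechanics stay invisible to developers.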
  • WWW.COMPUTERWEEKLY.COM
    Amid uncertainty, Armis becomes newest CVE numbering authority
    Mitre’s Common Vulnerabilities and Exposures (CVE) Program – which last week came close to shutting down altogether amid a wide-ranging shake-up of the United States government – has designated cyber exposure management specialist Armis as a CVE Numbering Authority (CNA). This means it will be able to review and assign CVE identifiers to newly discovered vulnerabilities, in support of the programme’s mission to identify, define and catalogue as many security issues as possible.

    “We are focused on going beyond detection to provide real security – before an attack, not just after,” said Armis CTO and co-founder, Nadir Izrael. “It is our duty and goal to help raise the tide of cyber security awareness and action across all industries. This is key to effectively addressing the entire lifecycle of cyber threats and managing cyber risk exposure to keep society safe and secure.”

    Mitre currently draws on the expertise of 450 CNAs around the world – nearly 250 of them in the US, and 12 in the UK. The full list includes some of the largest tech firms in the world, such as Amazon, Apple, Google, Meta and Microsoft, as well as a litany of other suppliers, government agencies and computer emergency response teams (CERTs). All the organisations listed participate on a voluntary basis, and each has committed to maintaining a public vulnerability disclosure policy and a public source for new disclosures, and has agreed to the programme’s Ts&Cs.

    In return, says Mitre, participants are able to demonstrate a mature attitude to vulnerabilities to their customers and to communicate value-added vulnerability information; to control the CVE release process for vulnerabilities in the scope of their participation; to assign CVE IDs without having to share information with other CNAs; and to streamline the vulnerability disclosure process.

    The addition of Armis to this roster comes amid uncertainty over the programme’s wider future, given how close it came to cancellation. In the wake of the incident, many in the security community have argued that a shake-up of how CVEs are managed is long overdue.

    “This funding interruption underscores a crucial truth for your security strategy: CVE-based vulnerability management cannot serve as the cornerstone of effective security controls. At best, it’s a lagging indicator, underpinned by a programme with unreliable resources,” said Joe Silva, CEO of risk management specialist Spektion. “The future of vulnerability management should focus on identifying real exploitable paths in runtime, rather than merely cataloging potential vulnerabilities. Your organisation’s risk posture should not hinge on the renewal of a government contract.

    “Even though funding was provided, this further shakes confidence in the CVE system, which is a patchwork crowdsourced effort reliant on shaky government funding. The CVE programme was already not sufficiently comprehensive and timely, and now it’s also less stable.”

    Meanwhile, Armis is also today expanding its vulnerability management capabilities by making its proprietary Vulnerability Intelligence Database (VID) free to all-comers. The community-driven database, which is backed by the firm’s in-house Armis Labs unit, offers early warning services and asset intelligence, and is fed a constant stream of crowdsourced intelligence to enhance its users’ ability to prioritise emerging vulnerabilities likely to impact their vertical industries, and take action to shore up their defences before such issues are widely exploited.
    “As threat actors continue to amplify the scale and sophistication of cyberattacks, a proactive approach to reducing risk is essential,” said Izrael. “The Armis Vulnerability Intelligence Database is a critical, accessible resource built by the security community, for the security community. It translates vulnerability data into real-world impact so that businesses can adapt quickly and make more informed decisions to manage cyber threats.”

    Armis said that currently, 58% of cyber attack victims respond to threats only reactively, after the damage has been done, and nearly a quarter of IT decision-makers say a lack of continuous vulnerability assessment is a significant gap in their security operations – making it imperative to do more to address problems more quickly.

    Read more about the CVE Program’s future:
    • 15 April: Mitre, the operator of the world-renowned CVE repository, has warned of significant impacts to global cyber security standards, and increased risk from threat actors, as it emerges its US government contract will lapse in 24 hours.
    • 16 April: With news that Mitre’s contract to run the world-renowned CVE Programme is abruptly terminating, a breakaway group is setting up a non-profit foundation to try to ensure the project’s continuity.
    • The US Cybersecurity and Infrastructure Security Agency has ridden to the rescue of the under-threat Mitre CVE Programme, approving a last-minute, 11-month contract extension to preserve the project’s vital work.
  • WWW.COMPUTERWEEKLY.COM
    Qualys goes to bat for US cricket side San Francisco Unicorns
    Cloud security specialist Qualys partners with US T20 cricket squad San Francisco Unicorns and its Sparkle Army fan club as the team prepares for its summer 2025 campaign. By Alex Scroxton, Security Editor. Published: 23 Apr 2025.

    California-based Twenty20 (T20) cricket side the San Francisco Unicorns has enlisted cloud security and compliance technology specialist Qualys as its inaugural cyber security partner for the upcoming summer 2025 Major League Cricket season in the United States.

    In exchange for providing its suite of IT security solutions – including its security intelligence platform Enterprise TruRisk, which automates vulnerability detection, compliance and protection across the organisation – Qualys will be placed front and centre on the team’s matchday and training strips, as well as on signage and merchandise. It will also be able to take advantage of other branding opportunities, including placement in matchday broadcasts and media. The team’s fan club, the so-called Sparkle Army, will also incorporate Qualys’s shield logo into its branding.

    The two organisations said their partnership reflected Qualys’s commitment to safeguarding digital organisations and supporting local communities.

    “This season marks a significant milestone for the Unicorns as we come to play in the Bay Area for the first time, and we’re thrilled to deliver world-class cricket via an elite partnership with local cyber security pioneer Qualys,” said team CEO David White. “Qualys stands out as an organisation for its commitment to excellence; a quality we strive for in all aspects of our own setup. Having their logo prominently displayed on our playing jerseys will be a reminder of our own high standards and values, while also resonating with our fans proudly sporting the matchday kit in the stands.”

    The Unicorns are one of six teams – the others hailing from Los Angeles, New York, Seattle, Dallas-Fort Worth and Washington DC – that inaugurated America’s Major League Cricket franchise just two years ago with the aim of broadening the sport’s appeal in the US. Although it is true the first ever cricket international was between the US and Canada, the sport never really caught on across the Atlantic, as it was largely formalised after the War of Independence.

    Backed by venture capitalists Anand Rajaraman and Venky Harinarayan, who co-founded Junglee, an early internet shopping comparison site acquired by Amazon in the late 1990s, and coached by Australian all-rounder Shane Watson, the Unicorns may be well placed to capitalise on the existing cricket fandom among the many thousands of Indian and Pakistani nationals employed in Silicon Valley.

    Its growing pool of players draws heavily on Australian talent, including the likes of former Under-19s captain Cooper Connolly, who received his first Test cap this year against Sri Lanka; Jake Fraser-McGurk, fresh from his maiden international half-century against the England T20 squad last September; and Matt Short, who scored his maiden One Day International half-century – also against England – off a mere 23 balls.

    “Qualys is proud to sponsor the San Francisco Unicorns, and we’re honoured to have the opportunity to support a team that mirrors our values of innovation and determination,” said Sumedh Thakar, president and CEO of Qualys. “This partnership reflects our dedication to building strong community connections and celebrating excellence across all fields,” he added.
    The team will begin this year’s campaign at the Oakland Coliseum on 12 June 2025, when it meets Washington Freedom.

    Read more about IT in cricket:
    • Cricket Australia was able to keep digital services up and running during a period of unprecedented customer demand.
    • With the cloud-connected ball and machine learning, amateur cricket players will soon be able to analyse their bowls and improve their game.
    • Cricketers’ medical images will be stored securely in an independent clinical archive, enabling clinicians to access medical data and make treatment decisions more quickly.
  • WWW.COMPUTERWEEKLY.COM
    Podcast: Quantum lacks profitability but it will come, says CEO
    Computer Weekly talks to Quantum CEO Jamie Lerner about the company’s expertise in handling massive volumes of data, and a roadmap that includes Myriad, a new file system for “forever flash” in the AI era
  • WWW.COMPUTERWEEKLY.COM
    The tech investment gap: bridging the divide for underrepresented entrepreneurs
    Despite the UK’s position as one of the world’s most dynamic technology hubs, the industry still faces a persistent and well-documented challenge: equitable access to investment. Underrepresented entrepreneurs – particularly women, ethnic minorities, LGBTQ+ founders and individuals from non-traditional backgrounds – remain significantly disadvantaged when it comes to securing the capital needed to start and scale high-growth technology businesses. This disparity is particularly striking in a sector built on innovation and disruption.

    According to the British Business Bank, in 2023 all-female founding teams received just 2% of UK venture capital (VC) funding, while mixed-gender teams secured only 12% – leaving most capital flowing to all-male teams. These imbalances extend into emerging tech sectors such as artificial intelligence (AI) – research from the Alan Turing Institute reveals that female-founded AI startups accounted for only 2.1% of UK VC deals between 2012 and 2022, receiving a mere 0.3% of the total investment during that time. In addition, the Social Mobility Commission found that people from working-class backgrounds represent only 19% of tech sector professionals – despite making up almost 40% of the UK’s working population.

    For an industry that prides itself on solving complex challenges, the exclusion of so many potential innovators is more than a contradiction – it points to a failure of imagination, inclusion and long-term value creation. While action is being taken within a select group, more is required for an industry-wide shift.

    While there are a growing number of initiatives aimed at addressing the investment gap, structural inequities remain deeply embedded. The UK’s tech and investment ecosystems – like those in Silicon Valley – are still heavily network-based and often risk-averse when evaluating so-called “non-traditional” founders. Despite the vocabulary of innovation, much of the capital allocation still relies on pattern recognition – investing in those who look like, sound like, or were educated alongside previous success stories. This creates a double bind: underrepresented founders are undercapitalised, and their lack of capital is used as a proxy for a lack of potential.

    There are three key systemic barriers:
    • Access to networks – Early-stage fundraising is still dominated by warm introductions. If you’re not in the room, you’re unlikely to be invited to pitch.
    • Bias in risk perception – Founders who don’t fit the typical mould – often white, male and Oxbridge-educated – are perceived as riskier investments, despite research showing that diverse teams often outperform on several metrics.
    • Structural gaps in support – Programmes such as accelerators, incubators and grant schemes often don’t reach non-traditional communities early enough to make a meaningful difference.

    Ironically, while technology contributes to these inequities, it also offers part of the solution, and early examples of disruption are emerging. Data-driven VC platforms are beginning to use machine learning to identify overlooked founders based on fundamentals, not personal networks. Decentralised finance (DeFi) and crowdfunding are creating alternative routes to capital, bypassing traditional gatekeepers. And corporate innovation funds are incorporating environmental, social and governance (ESG) practices and diversity, equity and inclusion (DEI) principles into their portfolios, incentivising broader inclusion.
    However, without intentional design, AI and algorithmic systems risk replicating – or even exacerbating – the biases found in historical data. If we train decision-making tools on skewed investment histories, we risk encoding today’s inequality into tomorrow’s automation.

    Having held senior technology and operational leadership roles across global banks and fintech scaleups – and now advising boards and startups across the UK, US and EU – I’ve seen the issue from multiple vantage points. Technology doesn’t just benefit from diverse talent – it requires it. Whether developing scalable architectures, building responsible AI models or designing secure digital ecosystems, the strength of the solution is often a reflection of the diversity behind it. Underrepresented founders often bring differentiated market insights, a deep understanding of overlooked customer segments, and resilience forged through navigating structural challenges. These are exactly the attributes needed to succeed in a fast-changing tech landscape.

    Bridging this gap demands more than good intentions – it requires deliberate, coordinated action. Here are five ways the tech and investment communities can lead:
    • Rebuild the discovery process – Leverage AI to surface strong founding teams from outside traditional clusters. Standardise pitch evaluation criteria to prioritise traction and potential, not just pedigree.
    • Diversify investment committees – Diverse decision-making teams lead to broader deal flow and better investment performance. Review who’s sitting at the table – and who’s missing.
    • Support beyond seed stage – Many underrepresented founders secure micro-funding but are left behind at Series A and beyond. Investors must design follow-on strategies that support sustainable scale.
    • Enterprise buyers as catalysts – Large tech companies can reshape ecosystems by diversifying procurement. A single enterprise contract can define the trajectory of a startup. CIOs and CTOs should assess the diversity of their vendor ecosystems.
    • Measure, report, act – Track where your money is going. Publish diversity metrics. Set tangible goals. Change doesn’t happen without accountability.

    As stewards of the digital future, those of us in the tech sector must use our influence to reshape the system. Bridging the investment gap is not just a moral imperative – it’s a strategic one. Underrepresented founders are already building the future – from ethical AI and climate solutions to inclusive digital platforms and secure fintech infrastructure. But many are doing so without the capital needed to truly scale.

    For investors, this is the moment to look beyond the familiar and recognise untapped potential. For technologists, it’s about building inclusive tools by design, not by accident. And for the entire ecosystem, we must expand our definition of what a “tech founder” looks like. Let’s change that – together.

    Read more about diversity in tech startups:
    • European fintech must take different path to Trump’s US on diversity – Leading light in women in fintech says Europe must take separate path to Trump’s US on diversity, equity and inclusion.
    • Is diversity even more under threat in tech? The promotion of diversity in tech is going backwards – and it’s a terrible moment for that to happen. Here’s why diversity is even more important than it’s ever been.
    • Tech workers say diversity and inclusion efforts are working – The status of diversity in the technology sector remains up for debate, but some tech industry workers are seeing improvements.
  • WWW.COMPUTERWEEKLY.COM
    Digital ID sector calls for changes to government data legislation
    Suppliers urge the technology secretary to work more collaboratively with the private sector over concerns the government’s digital wallet will gain a monopoly in the market. By Bryan Glick, Editor in Chief. Published: 23 Apr 2025.

    The digital identity sector is calling on the government to amend its forthcoming data legislation and to change policy around use of the Gov.uk Wallet – which was technology secretary Peter Kyle’s flagship announcement as part of his new digital strategy.

    In an open letter to Kyle, secretary of state for science, innovation and technology, a group representing suppliers of online safety and age verification technology said the launch in January of the government’s digital wallet “sent shockwaves through the sector”. The Association of Digital Verification Professionals, the Age Verification Providers Association and the Online Safety Tech Industry Association outlined a number of concerns arising from Kyle’s plans to allow the Gov.uk Wallet to be used for commercial purposes, such as proving a shopper’s age when purchasing restricted goods like alcohol.

    “The news has triggered widespread uncertainty among suppliers and investors,” said the groups in their letter. “We are concerned about the inadvertent creation of a government monopoly in digital identity – one that could stifle innovation, limit consumer choice and impose billions of new costs on the taxpayer for functions the private sector currently provides, such as customer service and integration support.”

    A recent independent study suggests the industry’s fears are well founded. A report from Juniper Research into the UK digital ID market predicted a 267% annual growth in the number of people using digital identity apps, reaching 25 million by 2029. Juniper forecast that more than 45% of UK adults will use the government app, whereas private sector providers will see just 9% growth over the same period.

    The Data (Use and Access) Bill, which is making its way through Parliament, includes legislation that will help to enable the widespread use of digital identity tools supported by government data. The signatories of the open letter want to see amendments to the bill that will avoid “distortions caused by exclusivity or unfair state-subsidised pricing”. Specifically, they are calling for:
    • The Gov.uk Wallet and the government’s One Login single sign-on tool to be statutorily limited to authentication for public services, to avoid competing with private sector alternatives.
    • Digital identity software that is certified as compliant with the government’s trust framework to be accepted for authentication with public services, rather than One Login having a monopoly on online government access.
    • Government-issued credentials, such as the digital driving licence that Kyle announced alongside the Gov.uk Wallet, to be allowed to be held in any certified wallet, not just the government’s own product.

    The industry groups have proposed a joint technical working group to ensure collaboration between government and certified identity and wallet providers. “Investors will be further reassured if these equality, portability and interoperability principles are enshrined in the Data (Use and Access) Bill,” they wrote. Computer Weekly reported in February about significant concerns among digital ID suppliers and their investors about the government’s plans to compete against them for commercial uses.
    Iain Corby, executive director of the Age Verification Providers Association, said that better collaboration would also avoid accusations that the government is attempting to introduce a national digital identity scheme.

    “This is not an area where government by press release is wise – the sudden and very public announcement, made without any consultation, gave many in the industry the impression that digital identity was being nationalised, and that could easily translate to the wider public as a threat of a national ID card by the back door,” he said. “If that happens, the benefits of digital ID will be lost for another decade.

    “We have had to act now, rather than wait for a consultative meeting scheduled for next month after Computer Weekly first reported this story, because there is still time to amend the Data (Use and Access) Bill to guarantee equality and consumer choice between private and government-issued digital IDs.”

    A government spokesperson said: “Citizens have dealt with sluggish processes for too long. Our Gov.uk Wallet will give millions of Britons access to all their existing government credentials – like their driving licence – from their phones and save them hours of wasted time. We will work with the UK digital identity sector to provide the best possible experience to people who choose to use digital identity technology.”

    Read more about digital identity:
    • UK digital identity turns to drama (or farce?) over industry fears and security doubts: Government at loggerheads with industry. Warnings of serious security and data protection problems undermining a vital public service. A burgeoning political campaign seeking greater influence.
    • What’s the government up to with digital verification services? Is the government really looking to compete with the private sector for provision of digital identity? Such a move risks fundamentally undermining public trust in critical digital verification services.
    • Distrust builds between digital ID sector and government amid speculation over ‘ID cards by stealth’: Wednesday 2 April saw the latest meeting of the All-Party Parliamentary Group on digital identity.
  • WWW.COMPUTERWEEKLY.COM
    Secure Future Initiative reveals Microsoft staff focus
    IT security is now a metric in the Microsoft employee appraisal process. By Cliff Saran, Managing Editor. Published: 22 Apr 2025.

    Every Microsoft employee now has a metric dubbed “Security Core Priority” tied directly to performance reviews. This is among the changes the software giant has put in place to enforce security internally.

    In a blog post outlining the steps the company has taken to harden internal security, Charles Bell, executive vice-president of Microsoft Security, wrote: “We want every person at Microsoft to understand their role in keeping our customers safe and to have the tools to act on that responsibility.” He said 50,000 employees have participated in the Microsoft Security Academy to improve their security skills, and that 99% of employees have completed the company’s Security Foundations and Trust Code courses.

    In May 2024, Microsoft introduced a governance structure to improve risk visibility and accountability. Since then, Bell said, Microsoft has appointed a deputy chief information security officer (CISO) for business applications and consolidated responsibility across its Microsoft 365 and Experiences and Devices divisions. “All 14 Deputy CISOs across Microsoft have completed a risk inventory and prioritisation,” he said, adding that this creates a shared view of enterprise-wide security risk.

    Bell said new policies, behaviour-based detection models and investigation methods have helped to thwart $4bn in fraud attempts. One example of where modelling can be used is in lateral movement: preventing an attacker that has gained access to one system from moving on to other systems inside the company network. Microsoft said that modelling IT assets as a graph reveals unknown vulnerabilities and classes of known issues that need to be mitigated to reduce what it describes as “lateral movement vectors” (a toy sketch of the idea follows at the end of this item).

    According to its April 2025 progress report, Microsoft has made “significant” steps in adopting a standard software developer’s kit for identity and ensuring 100% of user accounts are protected with phishing-resistant multi-factor authentication (MFA). However, among the areas it is still working on are the protection of cryptographic signing keys and quantum-safe public key infrastructure (PKI).

    Read more about employee cyber security:
    • Nationwide Building Society to train people to think like cyber criminals: Nationwide wants to help bring more diversity into the UK cyber security skills base through a partnership with a training specialist.
    • NHS staff lack confidence in health service cyber measures: NHS staff understand their role in protecting the health service from cyber threats and the public backs them in this aim, but legacy tech and a lack of training are hindering efforts, according to BT.

    To protect high-risk production systems, Microsoft said that in November 2024 it moved 28,000 high-risk users, working on sensitive workflows, to a locked-down Azure Virtual Desktop infrastructure, and is working to improve the user experience for these endpoints. Regarding network protection, the report shows that the company is working on implementing network micro-segmentation by reimplementing access control lists.
    “Currently, 20% of first-party IPs [internet protocol addresses] are tagged and 93% of first-party services have established plans for allocating IPs from tagged ranges and provisioning IP capacity,” Microsoft said. It added that it is also introducing new capabilities to help customers isolate and secure their network resources. These include Network Security Perimeter, DNS Security Extensions and Azure Bastion Premium private-only mode.

    In terms of its internal software development practices, Microsoft said it has been driving four standards to help ensure open source software (OSS) used in its production environments is sourced from governed internal feeds and free of known critical and high-severity public vulnerabilities. In the report, Microsoft said Component Governance, a software composition analysis tool that tracks OSS usage and vulnerabilities in OSS, has achieved broad adoption and is enabled by default. It also has an offering called Centralized Feed Service, which provides governed feeds for consuming open source software; according to Microsoft, this has also reached broad adoption.
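    As a rough sketch of the graph idea described above – with entirely invented assets that bear no relation to Microsoft’s internal tooling – the snippet below models systems as nodes, access relationships as directed edges, and uses a breadth-first search to surface a lateral movement path from a compromised laptop to a sensitive service.

```typescript
// Toy asset graph for spotting lateral-movement paths.
// Nodes are systems; a directed edge means "credentials or access on A
// can reach B". All names are invented for illustration.

type AssetGraph = Map<string, string[]>;

const graph: AssetGraph = new Map([
  ["laptop-042", ["file-server", "build-agent"]],
  ["file-server", ["backup-host"]],
  ["build-agent", ["signing-service"]], // risky edge worth severing
  ["backup-host", []],
  ["signing-service", []],
]);

// Breadth-first search: return one shortest path from a compromised
// asset to a crown-jewel asset, or null if no path exists.
function lateralPath(g: AssetGraph, from: string, to: string): string[] | null {
  const queue: string[][] = [[from]];
  const seen = new Set<string>([from]);
  while (queue.length > 0) {
    const path = queue.shift()!;
    const node = path[path.length - 1];
    if (node === to) return path;
    for (const next of g.get(node) ?? []) {
      if (!seen.has(next)) {
        seen.add(next);
        queue.push([...path, next]);
      }
    }
  }
  return null;
}

// If this prints a path, a single compromised laptop can reach the
// signing service.
console.log(lateralPath(graph, "laptop-042", "signing-service"));
```

    Severing the risky edge (here, build-agent to signing-service), or segmenting the network so the hop is impossible, is exactly the kind of mitigation the access-control-list and micro-segmentation work described above is aimed at.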
  • WWW.COMPUTERWEEKLY.COM
    Cyber ‘agony aunts’ launch guidebook for women in security
    Two of the UK’s leading female cyber practitioners – Secureworks threat intelligence knowledge manager Rebecca Taylor and CybAid founder and Hewitt Partnerships managing director Amelia Hewitt – are to launch a book for women starting and navigating careers in the cyber security sector.

    The duo describe their co-authored book, Securely Yours, as a practical, agony aunt-style guide, drawing on both women’s lived experience in the still male-dominated security industry. They aim to tackle a range of topics, including active allyship, the ever-present spectre of burnout, and building a professional brand. Many of these topics are drawn from questions posed by others whom the women have mentored during their careers. “No topic is too taboo,” they said.

    Although Securely Yours reflects on the experiences of, and questions raised by, women specifically, the authors hope its practical advice will be helpful to anybody looking to advance their security career, from whatever background, as well as serving as a resource for those in positions of privilege to better support inclusion in security.

    “It’s been an immense privilege to share not only our experiences and the advice we’ve gained across our careers, but the insights of a range of incredible individuals who have each had their own journeys within cyber, to create a resource we hope has something for every reader,” said Hewitt.

    “I am fortunate to have had an amazing cyber security career, and I want others to have the same,” added Taylor. “I feel an accountability within me to elevate, support and mentor underrepresented groups to own their platforms, voices and opportunities. This book is a manifestation of that. I truly hope it makes a difference and helps those wanting to thrive in cyber know that they can do it, that we have their back and that they’re not alone on their journey,” she said.

    Taylor began her career at Secureworks – now part of Sophos – in an administrative role, before taking advantage of a forward-thinking internal culture at the business to develop her career and security expertise. In a 2024 interview with Computer Weekly, she said: “I started doing lots of studying and learning in the background, and through mentorship and exposure around the business, really focused on progressing my career.” Taylor now works on multiple diversity initiatives – not just gender – both within and outside the business. Over the years, she has supported many other women in the industry with mentoring and other guidance.

    As pushback against official diversity, equity and inclusion (DEI) initiatives becomes stronger, community support networks – which are often underpinned by mentorships – are becoming particularly valuable for underrepresented groups in security and the wider technology industry. However, recent research conducted by US-based non-profit Women in CyberSecurity (WiCyS) found that 57% of women in the sector feel excluded from career and growth opportunities, and 56% do not feel they are respected. This is leading to a situation where almost 40% feel they are unable to be their authentic selves in the workplace, and given that security suffers from high rates of workforce attrition and burnout, Hewitt and Taylor said it is imperative that more is done to help people feel supported and respected at all stages of their careers.

    Securely Yours will be available from 1 May 2025 on Amazon, and its launch will be accompanied by a podcast series to expand on the conversations started in the book and offer additional support.
    Read more about women in cyber:
    • Women make up a higher percentage of new entrants to the cyber security profession, particularly among younger age groups, and are increasingly taking up leadership positions and hiring roles, but challenges persist.
    • IBM signs on to a partnership deal in support of the popular NCSC CyberFirst Girls scheme, designed to foster gender diversity in the cyber security profession.
    • ISC2’s Clar Rosso talks about diversifying the cyber workforce and supporting cyber professionals as they keep up with growing compliance and security policy demands.
  • WWW.COMPUTERWEEKLY.COM
    Cyber attack downs systems at Marks & Spencer
    Veteran UK retailer Marks & Spencer (M&S) has apologised to customers after a cyber incident of a currently undisclosed nature forced multiple public-facing services offline, with shoppers predictably taking to social media in their droves to lament the outages.

    In a note published on the afternoon of 22 April, the company revealed it had been “managing a cyber incident” affecting contactless payments and online click-and-collect services over the Easter bank holiday. According to reports, a second technical problem occurred at the weekend, affecting only contactless payments.

    “As soon as we became aware of the incident, it was necessary to make some minor, temporary changes to our store operations to protect customers and the business and we are sorry for any inconvenience experienced,” a spokesperson said. “Importantly, our stores remain open and our website and app are operating as normal.

    “Customer trust is incredibly important to us, and if the situation changes an update will be provided as appropriate,” they added.

    M&S additionally said it has enlisted third-party cyber forensics specialists to assist with incident management, and is taking further actions to protect its network and ensure it can continue to maintain its customer services. Computer Weekly also understands the cyber attack has been reported to the Information Commissioner’s Office (ICO) and the National Cyber Security Centre (NCSC).

    “The incident at Marks & Spencer serves as a reminder of the interdependencies in modern retail operations. The disruption to click-and-collect services and contactless payments underscores how any technical issue can have far-reaching consequences across an entire organisation,” said Javvad Malik, lead security awareness advocate at KnowBe4. “M&S’s prompt communication and engagement with the ICO demonstrate a commendable level of transparency and regulatory compliance. However, the event also reveals potential gaps in cyber resilience and crisis management strategies.”

    Although unconfirmed at this stage, the nature of the attack’s impact, and the language deployed by M&S, suggests the retailer may be dealing with the effects of a ransomware attack on certain systems. But regardless of the precise nature of the incident, it is by no means an isolated one, with retailers frequently in the crosshairs of threat actors.

    For example, retailers have high public brand awareness, upon which cyber criminals like to capitalise for their own fame and notoriety. Added to this, cyber criminals can use the seasonal nature of the retail sector to ramp up pressure on the victim by disrupting their business at a critical point, making them more likely to cave to extortion demands – the timing of the M&S incident over the long Easter weekend may bear this out. Meanwhile, the growth of omnichannel approaches to retail increases the exposed attack surface, as does the adoption of new technologies, such as AI-powered recommendation engines.

    According to NCC Group, the consumer cyclicals (non-essential purchases) and non-cyclicals (essential purchases) sectors, which both encompass retailers in general, were the second and fifth most targeted verticals by cyber criminal ransomware gangs in the first half of 2024. “There is an urgent need for all sectors to respond to this increased targeting from threat actors, but especially those storing huge amounts of data,” said Matt Hull, global head of threat intelligence at NCC Group.
“Now more than ever businesses should expect to be a target for cyber criminals and take a proactive approach to security rather than waiting for potential threats to strike.” Read more about retail technology:
• An Ericsson report finds retailers identified networking and IT as the biggest frustration, with two-fifths suffering loss of revenue at remote branch locations because of poor connectivity.
• When Tesco Clubcard was developed 30 years ago, using the technology of the time to analyse data was a long shot, but it grew into a scheme that birthed retail loyalty as we know it today.
• UK supermarkets continue to deal with the impact of a ransomware attack on the systems of supply chain software supplier Blue Yonder, which is disrupting multiple aspects of their businesses including deliveries.
    0 Comments 0 Shares 80 Views
  • WWW.COMPUTERWEEKLY.COM
    Beyond baselines - getting real about security and resilience
In 2024, the National Cyber Security Centre (NCSC) celebrated a decade of its baseline cyber security certification, Cyber Essentials (CE). While the NCSC has touted the scheme’s benefits, CEO Richard Horne has nonetheless been explicit about the “widening gap” between the UK’s cyber defences and the threats faced. This comes amid a heightened level of physical threat from state actors, including via sabotage and espionage, as well as greater awareness of state threats to research and innovation. This changing threat picture has cast greater attention on the work of the National Protective Security Authority (NPSA), the UK’s national technical authority for physical and personnel protective security. The elevated threat raises the question of whether the NPSA should follow the NCSC’s lead and develop its own baseline protective security certification as an equivalent to CE. However, to address the threat and build genuine resilience, we believe the UK needs an approach that goes beyond baselines and is informed by risk. The CE certification was launched in 2014. It outlines a baseline level of security that is intended to be universally applicable and risk agnostic. The NCSC asserts that CE is “suitable for all organisations, of any size, in any sector”. CE is assessed without reference to the organisation or its risk profile because the CE controls are aimed at commodity attacks that are ubiquitous for internet-connected organisations. After 10 years, the number of organisations certified under CE continues to increase year-on-year. The NCSC also has plans to expand the scheme further to better address supply chain risks. These achievements notwithstanding, there have been suggestions that the adoption of CE has been lower than expected, with one report stating that uptake remains below 1% of eligible organisations. The argument for a baseline cyber security certification is a good one; strengthening the cyber security of individual organisations leads to a more resilient ecosystem and is a public good. The controls involved in CE are sufficiently universal that there is no need for the application to refer to an organisation’s specific risk assessment. However, there are reasons to question whether a CE-equivalent baseline security certification for protective security could be effective. First, it is harder to identify a single shared ‘baseline’ level of protective security. CE is focused on five core security controls applicable to any organisation. It is not clear that a similar baseline set of controls could be constructed to simultaneously address areas as diverse as physical security, insider threat, or the security of research collaboration. Second, the CE controls would almost certainly be duplicated in any protective security certification. This might deter organisations that already have CE from seeking the new certification – at a time when relatively few organisations have CE. Third, the creation of separate NCSC and NPSA baseline certifications would reinforce silos between different aspects of security. We should be moving towards an approach in which organisations adopt a proportionate approach to security that addresses threats regardless of their means of realisation. An attempt to mirror CE in the protective security space therefore risks falling between two stools: being overly strenuous for most organisations, while insufficient to tackle genuine threats. At the same time, it risks reinforcing an unhelpful physical-cyber divide in many organisations’ approach to security.
CE remains relevant at a technical level, but the way it is framed increasingly appears as a holdover from an earlier geopolitical age. The cyber security industry often portrays its work as primarily technical and unobjectionable. Cyber threats can be presented as impersonal – an inevitable consequence of being online. The NCSC refers to CE as “basic cyber hygiene”, and similar metaphors from public health or ecology are regularly deployed to ‘de-securitise’ these security controls. In contrast, the UK has become increasingly explicit about the deteriorating threat environment and the necessity of a concerted response. That messaging is likely to accelerate as the UK government builds the public case necessary for a significant increase in defence spending. This would also align with the UK’s widening national conversation on resilience across domains and sectors. The forthcoming Cyber Security and Resilience Bill (CSRB) is an example of this trend. Although the CSRB is primarily targeted at bolstering cyber defences for critical services, it is part of a set of parallel efforts on physical security, economic stability, and community preparedness that aim at a holistic approach to threats. The UK government’s Resilience Framework outlines an all-hazards approach, covering everything from extreme weather and pandemics to supply chain disruptions and critical national infrastructure (CNI) failures, and emphasises preparation and prevention across society. A new National Security Council on resilience has also been created, chaired by the Chancellor of the Duchy of Lancaster and made up of the Secretaries of State for a wide range of sectors. A separate ‘physical security’ certification scheme would run contrary to the trend towards a holistic approach to resilience. Read more about the NCSC’s work:
• IBM signs on to a partnership deal in support of the popular NCSC CyberFirst Girls scheme designed to foster gender diversity in the cyber security profession.
• The NCSC urges service providers, large organisations and critical sectors to start thinking today about how they will migrate to post-quantum cryptography over the next decade.
• NCSC CEO Richard Horne is to echo wider warnings about the growing number and severity of cyber threats facing the UK as he launches the security body’s eighth annual report.
Rather than developing separate certifications, a better option would be a unified security resilience certification for at-risk organisations. This model would complement established baselines like CE. Unlike the baseline approach of CE, the starting point for the new certification would be a credible organisational security risk assessment. This assessment would be integrated, bridging security domains such as cyber, physical and personnel security. Beyond this, the framework would be modular, reflecting the absence of a single organisation-agnostic baseline in protective security. The scheme would certify that the organisation had taken proportionate protective security measures in response to its own risk assessment. Achieving this standard would require substantial effort and would not be appropriate for most organisations. The certification process would necessarily be more in-depth than the process for CE. Nonetheless, by leveraging unified risk profiling and cross-sector collaboration between the NCSC and NPSA, this approach would enable organisations to go beyond compliance checklists to achieve genuine, outcome-focused resilience.
This certification would be accompanied by an awareness campaign that is frank about the geopolitical threat faced by at-risk organisations. It would be important to make clear that this is not ‘business as usual’. This approach would reduce certification fatigue while delivering a robust, adaptive defence posture. It aligns with forthcoming resilience legislation, and with a broader national view of resilience as a desirable achievement in an increasingly turbulent geopolitical landscape. Neil Ashdown is head of research for Tyburn St Raphael, a security consultancy. Tash Buckley is a former research analyst at RUSI and a security educator and lecturer, researching cyber power and the intersection of science, technology innovation, and national security.
    0 Comments 0 Shares 47 Views
  • WWW.COMPUTERWEEKLY.COM
    AI-powered APIs proving highly vulnerable to attack
More than 150 billion application programming interface (API) attacks were observed in the wild during 2023 and 2024, according to data released this week by cloud security specialist Akamai, with the growth of artificial intelligence (AI)-powered APIs and AI-enabled attacks compounding to create a steadily expanding attack surface. In its latest State of apps and API security 2025 report, Akamai also said it observed web-based cyber attack volumes rise by a third over the course of 2024 to 311 billion all told, a pronounced surge that appears to correlate closely with an expansion in the scope of threats arising from AI. “AI is transforming web and API security, enhancing threat detection but also creating new challenges,” said Rupesh Chokshi, senior vice-president and general manager of Akamai’s Application Security Portfolio. “This report is a must read to understand what’s driving the shift and how defenders can stay ahead with the right mitigation strategies.” Akamai said the integration of AI tools with core platforms via APIs is “substantially” expanding the attack surface because the vast majority of AI-powered APIs are not only publicly accessible, but tend to rely on inadequate protections, lacking, for example, authentication mechanisms. This problem is now also compounded by a growing number of AI-driven attacks. For end-users, this means that while security teams are able to strengthen web application and API security by augmenting their defensive capabilities with AI-powered automation – for example, by helping to find threats, predict possible breaches and bring down incident response times – AI also helps attackers improve the effectiveness of their attacks by automating web scraping and bringing more dynamic attack methodologies to bear. Looking ahead, Akamai said that although AI-driven API management would doubtless continue to evolve, AI-driven attacks would likely remain a significant concern, meaning organisations need to adopt more robust, defence-in-depth security strategies. Turning to web attacks, Akamai said that it observed a dramatic rise in application layer (aka Layer 7) distributed denial-of-service (DDoS) attacks targeting both web apps and APIs, with monthly volumes growing from over 500 billion at the start of 2023 to more than a trillion at the end of 2024 – bad bots and the persistence of HTTP flooding as an attack vector seem to have driven this. The technology sector was the most frequently targeted vertical for such attacks, suffering more than seven trillion during the period covered by the survey. Broken out by geography, EMEA was on the receiving end of 2.7 trillion Layer 7 DDoS attacks, with 306 billion hitting targets in the UK and 369 billion in Germany. Akamai said that safeguarding web apps and APIs would become an ever more pressing need for organisations.
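The baseline controls Akamai describes as missing – an authentication check in front of every call, plus per-client rate limiting – can be pictured in a few lines of code. The sketch below is purely illustrative and not drawn from the report: the TokenBucket class, the per-API-key store and the limits chosen are hypothetical, and a production service would validate credentials properly and use a shared store rather than an in-memory dictionary.

```python
import time

class TokenBucket:
    """Simple token-bucket rate limiter: refills at `rate` tokens/sec, bursts up to `capacity`."""
    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per API key (hypothetical in-memory store, for illustration only)
buckets: dict[str, TokenBucket] = {}

def handle_request(api_key: str | None) -> int:
    """Return an HTTP-style status code: reject unauthenticated and over-limit callers."""
    if api_key is None:
        return 401  # no authentication mechanism at all is the gap Akamai highlights
    bucket = buckets.setdefault(api_key, TokenBucket(rate=5, capacity=10))
    return 200 if bucket.allow() else 429
```

A token bucket is a common choice for this job because it tolerates short bursts while capping the sustained request rates that automated scraping and brute-force tooling tend to produce.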
Akamai laid out a number of key actions that security leaders should consider taking:
• Lay down an API security plan incorporating shift-left and DevSecOps techniques to integrate security from initial API design through post-production, paying particular attention to continuous discovery and visibility, authentication, rate limiting and bot mitigation;
• Implement more robust core security measures, such as continuous threat monitoring and response, and use API testing tools such as dynamic application security testing (DAST);
• Be proactive against threats, using specialised DDoS protection tools, for example, and paying attention to patch management, access control and network segmentation;
• Act early to mitigate API vulnerabilities, following established guidelines, such as OWASP’s, to help ensure more robust security, and address risks associated with bad coding practice or misconfigurations;
• Pay more attention to ransomware threats, taking advantage of zero-trust architectures, microsegmentation and the Mitre ATT&CK framework;
• Finally, prepare for AI with defence strategies that include bot defences, AI-powered cyber tools, specialist firewalls and more proactive measures such as continuous assessment and zero trust.
Read more about API security:
• Lax API protections make it easier for threat actors to steal data, inject malware and perform account takeovers. An API security strategy helps combat this.
• APIs are the backbone of most modern applications, and companies must build in API security from the start. Follow these guidelines to design, deploy and protect your APIs.
• External API integrations are critical, but so is managing third-party API risks to maintain customer trust, remain compliant and ensure long-term operational resilience.
    0 Comments 0 Shares 54 Views
  • WWW.COMPUTERWEEKLY.COM
Ofcom bans leasing of Global Titles to crack down on spoofing
Telco regulator Ofcom is cracking down on a loophole being exploited by cyber criminals to access sensitive mobile data. By Cliff Saran, Managing Editor. Published: 22 Apr 2025 12:06. Ofcom has said it is closing a technical loophole that poses a risk to mobile users’ privacy and security, announcing a crackdown on the leasing of phone numbers known as Global Titles. Global Titles (GT) are used by mobile networks to send and receive signalling messages, which help to ensure a call or text message is received by the intended recipient. According to Ofcom, these signalling messages are used in the background, supporting billions of calls and text messages sent worldwide, and are never seen by mobile phone users. However, Ofcom said criminals can use Global Titles to intercept and divert calls and messages, which enables them to capture information held by mobile networks. As an example, a hacker could intercept security codes sent by banks to a customer via a text message. Global Titles are sometimes leased out by mobile networks – largely to legitimate businesses that use them to offer mobile services. However, they can fall into the wrong hands. This can lead to the security and privacy of ordinary mobile users being compromised, as their personal data may be directly or indirectly accessed by criminals. The National Cyber Security Centre (NCSC) has recognised the risk presented by Global Titles in telecommunications. Following the failure of industry-led efforts to address these issues, Ofcom said it has taken the step of banning the leasing of Global Titles with immediate effect. Discussing the leasing of Global Titles, NCSC chief technical officer Ollie Whitehouse said: “This technique, which is actively used by unregulated commercial companies, poses privacy and security risks to everyday users, and we urge our international partners to follow suit in addressing it.” Among the Global Titles-based services that Ofcom regards as higher risk is Home Location Register (HLR) lookup; others include authentication services, least cost routing and number authentication services. Ofcom said HLR lookup is an example of a higher-risk service because it facilitates access to operational data held by mobile networks, some of which may be personal data and/or location data, which is subject to legal requirements under relevant data protection legislation. “We expect range holders [the original telco operator holding the GT] providing, or indirectly facilitating provision of, HLR lookup services to be alert to the risk that such services may be facilitating access to operational data held by mobile networks which may be contrary to relevant data protection legislation,” Ofcom stated in its Guidance for number range holders to prevent misuse of Global Titles. Ofcom’s group director for networks and communications, Natalie Black, said: “We are taking world-leading action to tackle the threat posed by criminals gaining access to mobile networks. Leased Global Titles are one of the most significant and persistent sources of malicious signalling. Our ban will help prevent them from falling into the wrong hands – protecting mobile users and our critical telecoms infrastructure in the process.” “Alongside this, we have also published new guidance for mobile operators on their responsibilities to prevent the misuse of their Global Titles,” added Black.
The ban on entering new leasing arrangements is effective immediately. For leasing that is already in place, the ban will come into force on 22 April 2026, giving legitimate businesses that currently lease Global Titles from mobile networks time to make alternative arrangements. Read more about mobile network security:
• Building mobile security awareness training for end users: Do concerns of malware, social engineering and unpatched software on employee mobile devices have you up at night? One good place to start is mobile security awareness training.
• Why mobile security audits are important in the enterprise: Mobile devices bring their own set of challenges and risks to enterprise security. To handle mobile-specific threats, IT should conduct regular mobile security audits.
    0 Comments 0 Shares 33 Views
  • WWW.COMPUTERWEEKLY.COM
    Investigatory Powers Tribunal has no power to award costs against PSNI over evidence failures
The Investigatory Powers Tribunal, the court that rules on the lawfulness of surveillance by police and intelligence agencies, has no power to award costs against government bodies when they deliberately withhold or delay the disclosure of relevant evidence or fail to follow court orders. A panel of five judges has found that the tribunal has no statutory powers to impose sanctions against police forces or intelligence agencies if they delay or fail to follow orders from the tribunal to disclose relevant evidence. The ruling comes after the Investigatory Powers Tribunal found that two UK police forces had unlawfully spied on investigative journalists Barry McCaffrey and Trevor Birney, including harvesting phone data, following their investigations into police corruption. The Police Service of Northern Ireland (PSNI) targeted Birney and McCaffrey after they produced a documentary exposing police collusion in the murders of six innocent Catholics watching a football match in Loughinisland in 1994. Although the people alleged to be behind the killings are known to police, none have been prosecuted. The tribunal acknowledged in a judgment on 18 April that the PSNI repeatedly withheld and delayed the disclosure of important evidence, in some cases until the night before a court hearing. However, five tribunal judges concluded they had no statutory powers to award costs against the police force. The judges have called for the Secretary of State to intervene to address the matter by introducing rules for the tribunal or passing primary legislation. “We do not regard the outcome as entirely satisfactory … the facts of the present case illustrate why it would be helpful at least in principle for this tribunal to have the power to award costs,” the judges said. They added that they “see force” in the journalists’ submissions “that there is a need for the tribunal to have the power to award costs, in particular against respondents, where there has been expenditure wasted as a result of their conduct and where, in particular, orders of the tribunal are persistently breached”. However, the five judges found they had no powers to award costs under existing legislation or the tribunal rules; it “would be a matter for the Secretary of State or Parliament” to intervene, they said in a 19-page ruling. Birney and McCaffrey had claimed reimbursement of part of their legal costs, after the PSNI allegedly misled the tribunal by obfuscating critical evidence of PSNI and Metropolitan Police surveillance operations against them, leading to two court hearings having to be abandoned. Ben Jaffey KC, representing the journalists, told a tribunal hearing in March 2024 that the PSNI had failed to disclose surveillance operations against the two journalists until the night before scheduled court hearings, in breach of the tribunal’s orders. In one case, the PSNI served key evidence at 11:19pm the night before a court hearing, forcing the journalists’ lawyers to work through the night and leaving no time to properly consider the evidence the next day. On another occasion, the PSNI failed to disclose a Directed Surveillance Order against the two journalists until the morning of a court hearing, when the journalists’ legal representative was allowed to take notes from it but not allowed a copy. Commenting on the verdict, Trevor Birney said the tribunal’s conclusion that it lacked power to order costs was deeply disturbing.
“The tribunal has effectively said that public bodies can behave badly – delay, obstruct, conceal – and face no consequence,” he said. “That’s not justice; it’s a reward for wrongdoing.” Barry McCaffrey added: “The tribunal recognised the delays and failures in disclosure but effectively said its hands were tied. That leaves us with a system where transparency and accountability can be deliberately undermined without fear of reprisal.” How the PSNI delayed disclosure of key evidence:
• 16 February 2024: Durham Police incorrectly told the journalists’ lawyers there had been no Directed Surveillance Authorisation against them.
• 16 February 2024: The PSNI served a “skeleton argument” making additional disclosures but neglected to disclose the Directed Surveillance Authorisation.
• 23 February 2024: The PSNI disclosed the existence of a Directed Surveillance Authorisation issued to authorise spying against the journalists “only two clear days” before a court hearing: “No clear explanation has ever been offered for that exceptionally late disclosure.”
• 25 February 2024: The PSNI disclosed further evidence at 11:19pm on the evening before a court hearing.
• 26 February 2024: On the day of the scheduled court hearing, a lawyer for the two journalists was allowed to view and take limited notes of the Directed Surveillance Authorisation only that morning. Lawyers for the journalists attempted to make sense of large volumes of additional material disclosed just before the hearing, including a disclosure from the PSNI that it had received extensive phone communications data from the Metropolitan Police Service, which was later added as a respondent to the case. The hearing was adjourned late on the first day as it could “not fairly go ahead”.
• 8 May 2024: The PSNI disclosed large volumes of evidence raising new issues, including a “defensive operation” by the PSNI to monitor police phone calls to journalists, and details of an attempt by Durham Constabulary to preserve journalist Trevor Birney’s emails stored on Apple’s iCloud. The tribunal ordered further searches of evidence and written explanations for the late disclosure.
The journalists warned that the ruling risks eroding public confidence in legal safeguards and sets a dangerous precedent that could embolden further misconduct by public authorities. The chief constable of the PSNI, Jon Boutcher, has appointed Angus McCullogh KC to conduct a review into PSNI surveillance of lawyers and journalists. Birney and McCaffrey have called for a full public inquiry into the unlawful surveillance and institutional failures surrounding their case. Read more about police surveillance of journalists in Northern Ireland:
• Investigative reporter Dónal MacIntyre has asked the Investigatory Powers Tribunal to look into allegations that he was placed under directed surveillance and had his social media posts monitored by Northern Ireland police.
• Journalists seek legal costs after PSNI’s ‘ridiculous’ withholding of evidence in spying operation delayed court hearings.
• The Metropolitan Police monitored the phones of 16 BBC journalists on behalf of police in Northern Ireland, a cross-party group of MPs heard.
• Over 40 journalists and lawyers submit evidence to PSNI surveillance inquiry.
• Conservative MP adds to calls for public inquiry over PSNI police spying.
• Tribunal criticises PSNI and Met Police for spying operation to identify journalists’ sources.
• Detective wrongly claimed journalist’s solicitor attempted to buy gun, surveillance tribunal hears.
• Ex-PSNI officer ‘deeply angered’ by comments made by a former detective at a tribunal investigating allegations of unlawful surveillance against journalists.
• Detective reported journalist’s lawyers to regulator in ‘unlawful’ PSNI surveillance case.
• Lawyers and journalists seeking ‘payback’ over police phone surveillance, claims former detective.
• We need a judge-led inquiry into police spying on journalists and lawyers.
• Former assistant chief constable, Alan McQuillan, claims the PSNI used a dedicated laptop to access the phone communications data of hundreds of lawyers and journalists.
• Northern Irish police used covert powers to monitor over 300 journalists.
• Police chief commissions ‘independent review’ of surveillance against journalists and lawyers.
• Police accessed phone records of ‘trouble-making journalists’.
• BBC instructs lawyers over allegations of police surveillance of journalist.
• The Policing Board of Northern Ireland has asked the Police Service of Northern Ireland to produce a public report on its use of covert surveillance powers against journalists and lawyers after it gave ‘utterly vague’ answers.
• PSNI chief constable Jon Boutcher has agreed to provide a report on police surveillance of journalists and lawyers to Northern Ireland’s policing watchdog but denies industrial use of surveillance powers.
• Report reveals Northern Ireland police put up to 18 journalists and lawyers under surveillance.
• Three police forces took part in surveillance operations between 2011 and 2018 to identify sources that leaked information to journalists Trevor Birney and Barry McCaffrey, the Investigatory Powers Tribunal hears.
• Amnesty International and the Committee on the Administration of Justice have asked Northern Ireland’s policing watchdog to open an inquiry into the Police Service of Northern Ireland’s use of surveillance powers against journalists.
• Britain’s most secret court is to hear claims that UK authorities unlawfully targeted two journalists in a ‘covert surveillance’ operation after they exposed the failure of police in Northern Ireland to investigate paramilitary killings.
• The Police Service of Northern Ireland is unable to delete terabytes of unlawfully seized data taken from journalists who exposed police failings in the investigation of the Loughinisland sectarian murders.
• The Investigatory Powers Tribunal has agreed to investigate complaints by Northern Ireland investigative journalists Trevor Birney and Barry McCaffrey that they were unlawfully placed under surveillance.
    0 Comments 0 Shares 58 Views
  • WWW.COMPUTERWEEKLY.COM
    Collaboration is the best defence against nation-state threats
Opinion. The rise of DeepSeek has prompted the usual well-documented concerns around AI, but also raised worries about its potential links to the Chinese state. The Security Think Tank considers the steps security leaders can take to counter the threat posed by nation-state industrial espionage. By Stephen McDermid, Okta. Published: 17 Apr 2025. Businesses are under attack from all corners of the globe, and while many organisations may think that nation-state threat actors would never target or be interested in them, the reality is that no one is exempt from security threats. Security leaders need to ensure they are staying up to speed on the latest threat intelligence, whether through an in-house capability or via third-party threat intel providers. Once they understand the tactics, techniques and procedures (TTPs) deployed by these threat actors, organisations can then ensure they have robust mechanisms in place to digest and act on this information to implement appropriate controls. Organisational culture plays a key role in ensuring everyone is aware of the threats and risks posed to the business. It is vital that leaders educate users on what the most prevalent threats may look like and how to respond; this is a primary defence in protecting their business. Social engineering remains one of the most widely used methods of attack, and so implementing processes that are resistant to individual compromise is key. Using phishing-resistant authentication methods, ensuring strict identity governance and control, and having a well-tested incident response capability are all crucial steps to preventing and mitigating these types of attacks. Unfortunately, securing your own organisation is not enough, and historically nation-state threat actors have taken advantage of weak third-party suppliers and supply chain governance. Having strong supply chain governance and assurance is now one of the top trends across industries, and it is critical that businesses understand the dependencies and access that suppliers have. The Security Think Tank on nation state espionage:
• Mike Gillespie and Ellie Hurst, Advent IM: Will DeepSeek force us to take application security seriously?
• Elisabeth Mackay, PA Consulting: How CISOs can counter the threat of nation state espionage.
• Andrew Hodges, Quorum Cyber: Countering nation-state cyber espionage: A CISO field guide.
• Nick New, Optalysys: DeepSeek will help evolve the conversation around privacy.
If prevention fails, lateral movement post-compromise is one of the first actions threat actors will attempt, and so endpoint detection and response, and zero-trust solutions that can prevent and detect unauthorised access, are also vital. In 2023, 1.9 billion session cookies were stolen from Fortune 1000 employees. With a stolen session token, attackers can bypass MFA, making such attacks much harder to detect and respond to. Having solutions in place as part of a zero-trust architecture to detect session token replay attempts can stop these attacks and alert to possible credential or endpoint compromise. Ultimately, collaboration and partnership across organisations and industry will help organisations understand these threats and the risks posed by nation-state actors, and more importantly allow them to work together to prevent them. Stephen McDermid is EMEA CSO at Okta.
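As a rough illustration of the session token replay detection McDermid describes, the sketch below binds each issued token to a fingerprint of the client it was issued to, and revokes the token the moment it is presented from somewhere else. This is a hypothetical, minimal example – the names, the in-memory store and the IP-plus-user-agent fingerprint are all assumptions made for illustration; commercial zero-trust tooling draws on much richer device and behavioural signals.

```python
import hashlib
import secrets

# token -> fingerprint of the client the session was issued to (illustrative in-memory store)
sessions: dict[str, str] = {}

def fingerprint(ip: str, user_agent: str) -> str:
    # Coarse client binding for demonstration; real deployments use richer device posture signals
    return hashlib.sha256(f"{ip}|{user_agent}".encode()).hexdigest()

def issue_token(ip: str, user_agent: str) -> str:
    token = secrets.token_urlsafe(32)
    sessions[token] = fingerprint(ip, user_agent)
    return token

def validate(token: str, ip: str, user_agent: str) -> bool:
    expected = sessions.get(token)
    if expected is None:
        return False  # unknown or already-revoked token
    if expected != fingerprint(ip, user_agent):
        del sessions[token]  # possible replay of a stolen cookie: revoke and alert
        return False
    return True
```

The point of the binding is that a cookie lifted from one machine stops working the moment it is replayed from another, turning a silent MFA bypass into a detectable, alertable event.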
    0 Comments 0 Shares 67 Views
  • WWW.COMPUTERWEEKLY.COM
    AI's power play: the high-stakes race for energy capacity
The use of artificial intelligence (AI), particularly generative AI, relies on a lot of energy. As its adoption grows and people become more adept at harnessing its power, increasingly strong ties are being created between the tech and energy industries. Whilst this may be a good thing, it also brings about new challenges and legal considerations. After all, the long-term success of digital infrastructure depends on two core issues. Technical and operational constraints naturally need to be considered but, at the same time, a significant emphasis should be placed on stakeholders establishing clear legal contracts and investment safeguards from the start of a project. It is understandable why individuals might get caught up in the hype and excitement of a new idea, when a proactive approach to identifying and clearly allocating project risks and rewards upfront is crucial for successfully navigating the complex legal environments over the years and decades in which these projects come online. Training a single large language model can consume as much electricity as a small town. Data centres currently make up around 1.5% of global electricity demand. The International Energy Agency (IEA) forecasts that electricity demand from data centres will more than double by 2030, a hunger primarily driven by AI. This surge could require new global energy capacity equivalent to roughly four times the United Kingdom’s current total electricity consumption. This increasing energy demand is concentrated primarily in the areas where data centres are or will be located, straining local power grids and requiring either substantial and rapid grid infrastructure upgrades or, more commonly, a race between data centre owners and operators to secure reliable and sustainable energy sources dedicated to their operations. While AI demands significant power, it also holds promise for improving energy management. AI can potentially optimise power grids, integrate renewable energy sources more effectively, predict equipment failures and enhance energy efficiency across various industries and buildings. This could help offset some of the overall impact on global energy demand. However, the energy sector has been slower in adopting AI compared to the tech and financial services industries. Further integration is expected here too. The legal and contractual framework for AI-energy projects is intricate and, in many areas, novel. It involves navigating diverse regulatory systems, supply chain complexities and geopolitical uncertainties. This leads to complex negotiations concerning risk allocation, pricing mechanisms and responsibilities for avoiding downtime. Furthermore, the regulatory landscape for both AI and energy is constantly evolving, making compliance and contractual certainty a moving target. In this dynamic and complex environment, it is crucial to anticipate, during the contract drafting phase, how disputes could arise and what mechanisms are needed to avoid them, or resolve them early and quickly if they cannot be avoided. Contracts should be meticulously written to foresee potential issues while maintaining enough flexibility to allow for an inevitable degree of unpredictability during a project that will last for decades. That means parties need to clearly define their responsibilities, establish performance metrics (and how those will be tracked) and allocate risks effectively. Importantly, once a contract is signed, parties need to immediately and consistently apply and enforce it.
It should go without saying – you could argue that the fact it needs saying tells its own story – but incorporating robust governance and dispute resolution methods is essential, with international arbitration recommended for these multi-party, multi-contract projects given advantages such as neutrality, privacy and enforceability in cross-border contexts. It is also prudent to proactively consider investment protections (including through investment agreements with host country governments and under public international law treaties) as well as potential restructuring scenarios, including upon events like force majeure, changes in law or financial distress. This foresight can help protect investments and ensure the continuity and long-term success of these critical projects in the face of unwelcome challenges. This is important not only for the participants in the particular projects, but also for the wider energy and tech sectors, which will be impacted significantly by the availability of this important technology and the speed at which its adoption can grow. Charlie Morgan is a partner in Herbert Smith Freehills’ disputes practice with a focus on tech, energy and venture capital.
    0 Comments 0 Shares 72 Views
  • WWW.COMPUTERWEEKLY.COM
    Tariff turmoil is making supply chain security riskier
Cyber security remained the most pressing challenge facing those in supply chain management roles during the first three months of 2025, but since the inauguration of Donald Trump in January, uncertainty over the president’s approach to tariffs has caused chaos for supply chains not just in the US, but around the world. And these two areas of risk are closely entwined. This is according to a report from cyber and risk management consultancy West Monroe, which found that while security remains top of mind for 23% of respondents to a recent polling exercise, the impact of tariffs has surged to become the top issue for 20%, in a matter of weeks edging out factors such as geopolitical tension, material costs, the climate crisis and labour costs. Although its fieldwork was conducted in March, prior to Trump’s so-called Liberation Day tariff announcement, West Monroe’s data shows that during Q1, a significant number of organisations in the US started making changes to their supply chains in advance. A total of 58% said they altered their product, materials or sourcing mix, 56% altered their transportation mix, 45% altered their production schedule, 31% updated their pricing to pass increased costs to customers, and 28% altered their geographic presence. “I don’t think these are necessarily quick changes to make, but there is cyber risk if and when those changes are made,” said Christina Powers, cyber security partner at West Monroe. Broadly, she said the need to move quickly to replace lost revenues, shifts in the supplier ecosystem and other impacts arising from the tariffs may create gaps in best practice when it comes to supply chain management. “For example, if you’re starting to work with a different supplier – maybe they were already on your list but they weren’t a tier one supplier, you’re tapping into tier two suppliers – so maybe they went through less due diligence and less scrutiny when you were initially onboarding them,” said Powers. “Or if you’re looking to change suppliers now, there could be a little more of a rushed diligence process being done to try to make that change more quickly,” she said. “There could be less visibility into what potential access these companies may have. From another angle, if you’re not working with a familiar contact, or not working with familiar processes, there’s a higher risk of things like impersonation attacks, whether or not that’s for financial gain or to get access to sensitive data.” Read more about the impact of US tariffs:
• IT buyers appear to have spent the past few months refreshing PCs in preparation for the new US tariffs.
• Moore’s Law predicts that every 18 months, IT buyers can get more for the same outlay. But US tariffs may mean they end up paying a higher price.
• Delivering excellent customer experience is a tough job on regular days. Now add rising prices because of tariffs.
Finally, with goods potentially priced higher thanks to the tariffs, some organisations may also look to offset costs in rather more creative ways than simply passing them onto their customers. In some instances, however ill-advised this may be, this could see IT and cyber security budgets taking a hit. “There is a risk around cyber security, which is often viewed as a cost centre,” said Powers. “It is focused on value preservation and risk reduction, but it’s not necessarily value creation per se.
So, there could be pushes to offset some of what organisations are having to deal with.” But the story doesn’t end here, she said, for there are other ways in which cyber security and tariffs are coupled together. “With a lot of the uncertainty that’s happening right now, there’s a very volatile market,” she said. “From a cyber security perspective, that could lead to incentives for individuals or groups or nation-states to look to exploit vulnerabilities or go after certain companies. “You may see that nations that were historically friendly [to the US] have different feelings now, so there could be an increase in exploitation. “On the data side, there could be an increase in potential espionage looking for trade secrets, intellectual property and things of that nature,” said Powers. “There are some Chinese manufacturers exposing luxury brands and where their goods are being made, and what it takes to produce them.” If there’s a core message for security leaders to hold onto during this time of intense economic uncertainty and volatility, it would be not to allow the organisation to lose focus on the integrity of its supply chain arrangements. “Now is the time to be more vigilant, not only to hold the line, but actually to increase supply chain scrutiny from a cyber perspective, because there is so much uncertainty, change, volatility and, I think, anger associated with this,” said Powers.
    0 Comments 0 Shares 64 Views
  • WWW.COMPUTERWEEKLY.COM
    UK class action sets stage for Google showdown
Another class action has been filed against search engine giant Google for anti-competitive practices that negatively affect small businesses. By Cliff Saran, Managing Editor. Published: 17 Apr 2025 11:10. UK-based law professor Or Brook has filed a class action against Google worth approximately £5bn in the UK Competition Appeal Tribunal (CAT). The class action, brought on behalf of hundreds of thousands of UK-based organisations that used Google’s search advertising services, accuses Google of abusing its near-total dominance in the general search market to drive up prices. This latest class action follows on from one filed by Nikki Stopford, co-founder of Consumer Voice, and legal firm Hausfeld & Co LLP, and appears to focus on Google’s anti-competitive practices. Stopford’s case looks at the cost to consumers arising from the increased advertising costs that businesses using Google Search pay as a result of anti-competitive practices. In November last year, Google’s attempt to throw out Stopford’s case was dismissed, paving the way for the case to be heard at the CAT. Along with Stopford’s case, in January, the Competition and Markets Authority (CMA) began an investigation seeking to determine if Google has strategic market status in search and search advertising activities, and whether these services are delivering good outcomes for people and businesses in the UK. The Brook case appears to be looking specifically at the cost to business arising from Google business practices that stipulate its Chrome browser and search engine are configured as the default options on Android devices, and from Google’s payments to Apple to ensure Google search is the default on the Safari browser. The class action also covers Google’s Search Engine Management Platform (SA360). Brook alleges that this offers better functionality and more features for Google’s own advertising offering than for those of its competitors. Damien Geradin, founding partner of Geradin Partners, the legal firm representing Brook, said: “This is the first claim of its kind in the UK that seeks redress for the harm caused specifically to businesses who have been forced to pay inflated prices for advertising space on Google pages.” In the claim, Brook argues that Google has been shutting out competition in the general search and search advertising markets. The claim argues that Google’s conduct has prevented competitors in the general search market from distributing their own search engines, which has enabled Google to maintain its dominance, leading to restricted competition in general search. Brook contends that Google has ensured its own search platform is the only viable means of advertising to the vast majority of consumers, cementing its dominance in search advertising. She said: “Today, UK businesses and organisations, big or small, have almost no choice but to use Google ads to advertise their products and services. Regulators around the world have described Google as a monopoly and securing a spot on Google’s top pages is essential for visibility. “Google has been leveraging its dominance in the general search and search advertising market to overcharge advertisers.
This class action is about holding Google accountable for its unlawful practices and seeking compensation on behalf of UK advertisers who have been overcharged.” On top of the class actions, Google is also being investigated by the CMA, which is looking at whether its Play Store requires app developers to sign up to unfair terms and conditions as a condition of distributing their apps. Read more stories about Google’s legal wrangles:
• Apple and Google app stores come under CMA scrutiny: The Competition and Markets Authority in the UK is looking at whether the Play Store and App Store support innovation and are pro-competition.
• Google slams US government’s proposal to split company up on anti-competitive grounds: The US Department of Justice has incurred the wrath of Google for suggesting a series of remedies, after the US court ruled back in August that Google had an illegal monopoly over the internet search market.
    0 Comments 0 Shares 71 Views
  • WWW.COMPUTERWEEKLY.COM
    Interview: Markus Schümmelfeder, CIO, Boehringer Ingelheim
Markus Schümmelfeder has spent more than a decade looking for ways to help biopharmaceutical giant Boehringer Ingelheim exploit digital and data. He joined the company in February 2014 as corporate vice-president in IT and became CIO in April 2018. “It was a natural evolution,” he says. “Over time, you see what can be done as a CIO and have an ambition to make things happen. This job opportunity came around and it was when digitisation began. I saw many possibilities arising that were not there before.” Schümmelfeder says the opportunity to become CIO was terrific timing: “It was a chance to bring technology into the company, to make more use of data, and evolve the IT organisation from being a service deliverer into a real enabler. My aim for all the years I’ve been with Boehringer is to integrate IT into the business community.” Now, as the company’s 54,000 employees use more data than ever before across the value chain, including research, manufacturing, marketing and sales, Schümmelfeder’s aim is being realised. He says professionals across the business understand technology is crucial to effective operational processes: “It’s about bringing us close together to make magic happen.” Schümmelfeder says one of his key achievements since becoming CIO is leading the company on a data journey. His vision supported the company’s progress along this pathway. “I went to the board and said, ‘This is what we should do, what we want to do, what makes sense, and what we perceive will be necessary for the future’,” he says. “We started that process roughly five years ago and everyone knows how important data is today.” Making the transition to a data-enabled organisation is far from straightforward. Rather than being focused on creating reports, Schümmelfeder says his vision aimed to show people across the organisation how they could exploit information assets effectively. One of the key tenets for success has been standardisation. “This is a fundamental force, and the team has done good work here,” he says. “Ten years ago, we had between 4,500 and 5,000 systems across the organisation. Today, we have below 1,000. So, we reduced our footprint by 80%, which is a great accomplishment.” Standardisation has allowed the IT team to deliver another part of Schümmelfeder’s vision – a platform-based approach to digitisation. Rather than investing in point solutions to solve specific business challenges, the platform approach uses cloud-based services to help people “jump start topics” as the business need arises. The crucial technological foundation for this shift to standardisation has been the cloud, particularly Amazon Web Services (AWS), Microsoft Azure and a range of consolidated enterprise services, such as Red Hat OpenShift, Kubernetes, Atlassian Jira and Confluence, Databricks, and Snowflake. Schümmelfeder says the result is a flexible, scalable IT resource across all business activities. “You can create a cloud environment in minutes,” he says. “You can have an automated test environment that is directly attached and ready to use. You can create APIs immediately on the platform. We want people to deliver solutions at a faster pace, rather than creating individual solutions again and again.” Boehringer recently announced the launch of its One Medicine Platform, powered by the Veeva Development Cloud. The unified platform combines data and processes, enabling Boehringer to streamline its product development. Schümmelfeder says the technology plays a crucial enabling role.
The One Medicine Platform is integrated with Boehringer’s data ecosystem, Dataland, which helps employees make data-driven decisions that boost organisational performance. Dataland has been running since 2022. The ecosystem collates data from across the company and makes it available securely for professionals to run simulations and data analyses. “In the research and development space for medicine, there was nothing like a solid enterprise platform,” says Schümmelfeder, referring to his company’s relationship with Veeva. “We had about 50, maybe even more, tools that were often not interconnected. If you wanted to replicate data from one service to another, you’d have to download the data, copy and paste, and so on. That approach is tedious.” The One Medicine Platform allows Boehringer to connect data across functions, optimise trial efficiency around its research sites, and accelerate the delivery of new medicines to treat currently incurable diseases. Schümmelfeder says the Veeva technology gives the business the edge it requires. “We saw we were slower than our competitors in executing clinical trials. We thought we could be much better. We wanted to look for a new way of executing clinical trials, and we needed to discuss our processes and potentially redefine and change them based on the platform approach,” he says. “We chose Veeva because it was the most capable technology to help us deliver the spirit of a platform. It’s also an evolving technology with good future potential.” Schümmelfeder says the data platform he’s pioneered is helping Boehringer explore emerging technologies. One key element is Apollo, a specialist approach to artificial intelligence (AI) that allows employees to select from 40 large language models (LLMs) to explore their use cases and exploit data safely. He says this large number of LLMs allows Boehringer employees to select the best model for a specific use case. Alongside mainstream models like Google Gemini and OpenAI’s ChatGPT, the company uses niche models dedicated to research that can deliver more appropriate answers than general models. Schümmelfeder says Boehringer does not develop models internally, as the rapid pace of AI development makes it more sensible to dedicate IT resources to other areas. The company’s staff can use approved models and tools to undertake data-led research in several key areas: “We have a toolbox staff can dip into when they realise an idea or use case.” He outlines three specific AI-enabled use cases: Genomic Lens, which generates new insights that enable scientists to discover new disease mechanisms in human DNA; the use of algorithms and historical data to identify the right populations for clinical trials quickly and effectively; and Smart Process Development, which applies machine learning and genetic algorithms to create productivity boosts in biopharmaceutical processes. Another key area of research and development is assessing the potential power of quantum computing. Schümmelfeder suggests Boehringer has one of the strongest quantum teams in Europe. He recognises that other digital and business leaders might feel the company’s commitment is ahead of the adoption curve. “And I would say, ‘Yes, you’re right’, but then you need to understand how this technology works. We are helping to make breakthroughs, to bring code to the industry and to discover how we will use quantum.
So, we have a strong team that brings a lot to the table to help this area evolve,” he says. “I’m convinced quantum computing will be a huge gamechanger for the pharma industry once the technology can be used and set into operations. That situation is why I believe you have to be involved in quantum early to understand how it works. You need to bring knowledge into the organisation and be part of making quantum work.” While Schümmelfeder acknowledges Boehringer isn’t pursuing true quantum research yet, the company has built relationships with other technology specialists, such as Google Research. He says these developments are the foundations for future success in key areas, such as understanding product toxicity: “It’s relatively early, but you can see the investment. I hope we can see the first real use cases by the end of this decade.” Schümmelfeder considers the type of data-enabled organisation he’d like to create during the next few years and suggests the good news is that the technological foundations for further transformation are now in place. “We don’t need a technology revolution, I think we’ve done that,” he says. “We’ve done our homework, and we’ve standardised and harmonised. The next stage is not about more standardisation, it’s more about looking specifically at where we need to be successful. That focus is on research and development, medicine, our end-customers and how to improve the lives of patients and animals. That work is at the core of what we want to do.” With the technology systems and services in place, Schümmelfeder says he’ll concentrate on ensuring the right culture exists to exploit digitisation. That focus will require a concerted effort to evolve the skills across the organisation. The aim here will be to ensure many people in all parts of the business have the right capabilities. “When you talk about data, you don’t need 10 people able to do things, you need thousands of people who can execute,” he says. “You need to bring this knowledge to the business. That means business and IT must integrate deeply to make things happen. The IT team has to go to the business community and ask big questions like, ‘What do you need? Tell me the one thing that can make you truly successful?’” Schümmelfeder says that finding the answers to these questions shouldn’t be straightforward. Sometimes, he expects the search to be uncomfortable. IT can’t sit back – the company’s 2,000 technology professionals must drive the identification of digital solutions to business problems. Line-of-business professionals must also feel comfortable and confident using emerging technologies and data. He says the company’s Data X Academy plays a crucial role. Boehringer worked with Capgemini to develop this in-house data science training academy. Data X Academy has already trained 4,000 people across IT and the business. Schümmelfeder hopes this number will reach 15,000 people during the next 24 months and allow data-savvy people across the organisation to work together to develop solutions to intractable challenges. “We want to ask the right questions on the business side and create lighthouse use cases in IT that show people what we can do,” he says. 
“We can drive change together with the business and create an impact for the organisation, our customers and patients.” Read more data and digital interviews with IT leaders:
• The importance of building a data foundation: We speak to Terren Peterson, Capital One’s vice-president of engineering, about how data pipelines and platforms are essential for AI success.
• Interview: James Fleming, CIO, Francis Crick Institute: Helping to cure cancer with computers puts digital leadership on another level – and the world-leading research institute is turning to data science and artificial intelligence to achieve its groundbreaking goals.
• Interview: Wendy Redshaw, chief digital information officer, NatWest Retail Bank: The retail bank is moving at pace to introduce generative AI into key customer-facing services as part of a wider digital transformation across the organisation.
  • WWW.COMPUTERWEEKLY.COM
    CVE Foundation pledges continuity after Mitre funding cut
In the wake of the abrupt termination of the Mitre contract to run the CVE Programme, a group of vulnerability experts and members of Mitre’s existing CVE Board have launched a new non-profit with the intention of safeguarding the programme’s future.

The CVE Foundation’s founders want to ensure the continuity, viability and stability of the 25-year-old CVE Programme, which up to today (16 April) has been operated as a US government-funded initiative, with oversight and management provided by Mitre under contract.

Even setting aside the impact of Mitre’s loss of the CVE Programme contract – one of a number of Mitre-held government contracts axed in recent weeks, and one that has already led to layoffs at the DC-area contractor – the CVE Board members say they had longstanding concerns about the sustainability and neutrality of such a globally relied-upon resource being tied to a single government. Those concerns became suddenly heightened this week when a letter from Mitre’s Yosry Barsoum warning that the CVE Programme was under threat began to circulate.

“CVE, as a cornerstone of the global cyber security ecosystem, is too important to be vulnerable itself,” said Kent Landfield, an officer of the foundation. “Cyber security professionals around the globe rely on CVE identifiers and data as part of their daily work – from security tools and advisories to threat intelligence and response. Without CVE, defenders are at a massive disadvantage against global cyber threats.”

The founders said that while they hoped this day would never come, they have spent the past year working diligently in the background on a strategy to transition the CVE system into a dedicated, independent non-profit. Unlike Mitre – originally a computer research spin-out of MIT that now operates multiple R&D efforts – the CVE Foundation will be solely dedicated to delivering high-quality vulnerability identification, and to maintaining the integrity and availability of the existing CVE Programme database on behalf of security professionals worldwide.

The foundation says its official launch marks a “major step toward eliminating a single point of failure in the vulnerability management ecosystems” and safeguarding the programme’s reputation as a trusted, community-driven resource. “For the international cyber security community, this move represents an opportunity to establish governance that reflects the global nature of today’s threat landscape,” the founders said.

Although at the time of writing the CVE Programme remains up and running, with new commits made to its GitHub repository in the past few hours, reaction to the contract’s cancellation has been swift and scathing.

“With 25 years of consistent public funding, the CVE framework is embedded into security programmes, vendor feeds and risk assessment workflows,” said Tim Grieveson, CSO and executive vice-president at ThingsRecon, an attack surface discovery specialist. “Without it, we risk breaking the common language that keeps security teams aligned to identify and address vulnerabilities effectively.

“Delays in sharing vulnerability data would increase response times and give threat actors the upper hand,” he added.
“With regulations like SEC, NIS2, and Dora demanding real-time risk visibility, a lack of understanding of risk exposure and any delayed response could seriously hinder the ability to react effectively.”

To maintain existing levels of resilience in the face of the shutdown, it’s important for security leaders to ensure organisations have a clear understanding of their attack surface and their suppliers, said Grieveson. Added to this, collaboration and information sharing in the security community will become even more essential than it already is.

Read more on this story

• Mitre, the operator of the world-renowned CVE repository, has warned of significant impacts to global cyber security standards, and increased risk from threat actors, as it emerges its US government contract will lapse imminently.

Chris Burton, head of professional services at Yorkshire-based penetration testing and security services provider Pentest People, said he hoped cooler heads would prevail. “It’s completely understandable there are concerns about the government pulling funding for the Mitre CVE Programme; it’s a troubling development for the security industry,” he said.

“If the issue is purely financial, crowdfunding could offer a viable path forward, rallying public support for a project many believe in,” added Burton. “If it’s operational, there may be an opportunity for a dedicated community board to step in and lead.

“Either way, this isn’t the end, it’s a chance to rethink and reimagine. Let’s not panic just yet; there are still options on the table, as a global community. I think we should see how this unfolds.”

At a more practical level, Grieveson shared some additional steps for security teams to take right now (the first two are illustrated in the sketch after this list):

• Map internal tooling dependencies on CVE feeds and APIs to know what breaks should the database go dark;
• Identify alternative sources to maintain vulnerability intelligence, focusing on context, business impact and proximity to ensure comprehensive coverage of threats, whether they be current, emerging or historic;
• Accelerate cross-industry intelligence sharing to proactively leverage tactics, tools and threat actor data.
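By way of illustration only – an assumption-laden sketch, not something Grieveson prescribes – the Python snippet below pulls a single record from the US National Vulnerability Database’s public CVE API, one alternative source teams might poll alongside the CVE Programme’s own feeds. Rate limiting, retries and API keys are omitted, and the response structure shown reflects NVD’s documented 2.0 schema.

import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def fetch_cve(cve_id):
    """Fetch a single CVE record from the NVD; returns None if absent."""
    resp = requests.get(NVD_API, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    items = resp.json().get("vulnerabilities", [])
    return items[0]["cve"] if items else None

record = fetch_cve("CVE-2024-55956")  # one of the Cleo flaws covered elsewhere in this issue
if record:
    # NVD descriptions are a list of {lang, value} entries
    english = next(d["value"] for d in record["descriptions"] if d["lang"] == "en")
    print(record["id"], "-", english[:120])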
  • WWW.COMPUTERWEEKLY.COM
    CISA extends Mitre CVE contract at last moment
The US Cybersecurity and Infrastructure Security Agency has ridden to the rescue of the under-threat Mitre CVE Programme, approving a last-minute, 11-month contract extension to preserve the project’s vital security vulnerability work

By Alex Scroxton, Security Editor
Published: 16 Apr 2025 16:16

In a last-minute intervention, the US Cybersecurity and Infrastructure Security Agency (CISA) has extended its contract for the Mitre-operated Common Vulnerabilities and Exposures (CVE) Programme, relied on by security professionals around the world to keep up to date on the latest publicly disclosed security vulnerabilities.

The future of the CVE Programme came into doubt earlier this week when a leaked letter from Mitre’s Yosry Barsoum warned that the contract pathway for the non-profit to run the programme was set to lapse within 24 hours. Barsoum said that should a break in service occur, the programme would experience multiple impacts, including “deterioration of national vulnerability databases and advisories, tool vendors, incident response operations, and all manner of critical infrastructure”.

The revelation caused consternation around the world, with security professionals bracing for massive change in the industry as a result of the removal of what Mitre describes as a “foundational pillar” for the sector.

Agreement to extend the contract under which Mitre oversees the vital CVE Programme was reached late on Tuesday 15 April, but news of this only began to trickle out on Wednesday morning.

A CISA spokesperson said: “The CVE Program is invaluable to the cyber community and a priority of CISA. Last night, CISA executed the option period on the contract to ensure there will be no lapse in critical CVE services. We appreciate our partners’ and stakeholders’ patience.”

CISA additionally confirmed that the contract extension will last for 11 months. Computer Weekly reached out to Mitre for further comment, but the organisation had not responded at press time.

The narrowly averted disruption comes at a difficult time for the cyber security community as it works flat out to ward off a vast array of threats from financially motivated and nation-state threat actors. At the same time, the industry must reckon with the impact of massive cuts being made across the US government by Elon Musk’s Department of Government Efficiency (DOGE).

These cuts are now hitting America’s state cyber security apparatus, including at the Department of Homeland Security (DHS) and CISA itself, which sits within the DHS. According to reports, CISA may be looking at a reduction in its workforce of between a third and 90%, which would have a significant impact on the agency’s ability to protect US government bodies and critical infrastructure from cyber threats and, internationally, on its ability to collaborate with partner agencies such as the UK’s National Cyber Security Centre (NCSC).

CISA is also facing a comprehensive review of its activities over the past six years, focusing on instances in which its conduct may have run contrary to the purposes and policies established in Executive Order 14149, signed by president Trump on 20 January and titled Restoring freedom of speech and ending federal censorship.
This review comes alongside a deeper probe into former CISA leader Chris Krebs, whose federal security clearance – along with those held at his current employer, SentinelOne – was revoked by Trump last week, to the consternation of many. Krebs was fired from CISA at the end of 2020 after he disputed Trump’s narrative that the presidential election had been rigged in favour of Joe Biden. Krebs and CISA had maintained there was absolutely no evidence of any interference.

Read more on this story

• Mitre, the operator of the world-renowned CVE repository, has warned of significant impacts to global cyber security standards, and increased risk from threat actors, as it emerges its US government contract will lapse imminently.
• A group of vulnerability experts and members of Mitre’s existing CVE Board have launched a new non-profit with the intention of safeguarding the CVE Programme’s future and ensuring its independence.
  • WWW.COMPUTERWEEKLY.COM
    Footballers object to processing of performance data
Football players are issuing “stop processing” requests to gaming, betting and data-processing firms over the use of their performance, health and injury data, citing ethical concerns about how the information distributed about them can affect their career prospects.

Under Article 21 of the UK’s General Data Protection Regulation (GDPR), individuals have the right to object to the processing of their personal data.

Submitted on behalf of players by the Global Sports Data and Technology Group (GSDT) – an enterprise co-founded by former Cardiff City and Leyton Orient manager Russell Slade and technologist Jason Dunlop – the “stop processing” requests ask the companies involved to cease their processing of all tracking and performance data, as well as other personal information such as health or injury data.

The requests, sent by GSDT as part of its Project Red Card initiative to give footballers and other sportspeople more control over the collection and use of their performance data, follow its extensive engagement with companies in the gaming, betting and sports data industries over the past five years. While GSDT is unable to disclose which companies the requests have been sent to, they include some of the largest betting, gaming and sports data consultancy companies in the world.

Speaking with Computer Weekly, Dunlop and Slade said they must now take action via Article 21 of the UK GDPR because the ethical concerns GSDT has raised about the use of football players’ data have been ignored by firms throughout the sports data ecosystem.

“We’re still in correspondence with them, but I just don’t think we’re moving forward,” said Dunlop. “It is disappointing that, despite ongoing engagement with these companies for the past five years, we have reached a point where we must take action to protect our players and the processing of their data.

“This issue could easily be resolved if players were recognised as stakeholders in the sports data ecosystem. While we have seen some attempts by others to address this, three critical questions remain: Do players have the right to object to industries outside sport using their data? Who should decide who uses a player’s data? And what is the true value of this data to the athlete?”

Slade added that the concerns about player data extend to every level of the game: “Many people still don’t understand how this industry works and don’t realise how important accurate data is to a player’s value and career. It is crucial that we get this right for all players.”

According to Dunlop, the processing of player information by companies in the gaming, betting and sports data industries takes place without informed consent or financial compensation. “There is an exemption for commercial processing, but let’s think about the ethics of this,” he said, highlighting that footballers are also people in a workplace trying to do a job.

“If you were in work, and every time you went to the toilet, your manager measured the time that you went, left, came back and what you did, you’d probably have some major concerns about that.
“Here we’ve got people in work who are being tracked 25 times a second for everything that they do, and that data is going to all sorts of people who they have no idea about.”

Dunlop added that while most of the data collection is facilitated by clubs themselves for the purposes of player improvement, the information gathered by companies allowed into stadiums is then distributed to a wide range of third parties, which in turn use the data for their own profit without the input or involvement of players themselves. Slade emphasised that while GSDT has no problem with football clubs using player data internally for performance improvement purposes, the issue is that it then “goes everywhere”.

Dunlop also highlighted a range of ethical concerns around, for example, the accuracy of player data, which can negatively affect career prospects if inaccurate information is distributed, and the use of people’s data in the context of gambling if they are personally opposed to it.

Ultimately, Dunlop said GSDT’s goal is to strike a fair balance between players’ rights and the commercial interests of firms that benefit from the data processing. “The difficulty is, how do you move from where we are to a position where all the stakeholders are comfortable with it?” he said, adding that players also have the right not to make it work, and to say no to the data processing completely, if that is what they decide.

They told Computer Weekly that, because of the relatively short career spans of footballers, giving them more control over how their data is used can help to support them financially when they hang up their boots. “There’s a lot of life left after football, and that’s if players are lucky enough to see out a career to the age of 33 or 34,” said Slade, adding that this also affects “bread-and-butter” players in lower leagues. “All these bets and this money is being exchanged on the basis of your data, and you aren’t seeing a penny of that.”

Depending on the response of firms to the Article 21 requests, Dunlop and Slade said next steps would involve approaching the data regulator, before considering legal action. Their hope is that, because football plays such a central role in so many people’s lives, it can set a precedent for people becoming more aware of their data rights and how to exercise them. “I think this is a really good example where sport can lead the way for general data subjects,” said Dunlop.

Read more about data rights

• High Court: Sky Betting ‘parasitic’ in targeting problem gambler: UK High Court rules that Sky Betting acted unlawfully after breaching a customer’s data protection rights when it obtained his personal data through cookies and used it to profile him for the purposes of direct marketing, despite his ‘impaired’ ability to provide meaningful consent.
• Meta settles lawsuit over surveillance business model: Meta settles lawsuit over use of personal data in targeted advertising, opening up the possibility of other UK users raising legal objections to its processing.
• Open Rights Group accuses LiveRamp of ‘unlawful’ data processing: Privacy campaigners at Open Rights Group have submitted formal complaints to UK and French data regulators about allegedly unlawful data processing by online advertising firm LiveRamp.
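To give a sense of the “25 times a second” tracking Dunlop describes – purely as an invented illustration, not how any tracking vendor actually works – the sketch below reduces one second of 25Hz positional samples to distance covered, one of the simplest metrics derived from such feeds. The coordinates are made up; real optical-tracking data is far richer.

import math

SAMPLE_RATE_HZ = 25  # positions captured per second, as described above

def distance_covered(positions):
    """Total distance in metres from a sequence of (x, y) pitch coordinates."""
    return sum(math.dist(a, b) for a, b in zip(positions, positions[1:]))

# One second of invented samples for a player jogging in a straight line
samples = [(i * 0.1, 0.0) for i in range(SAMPLE_RATE_HZ)]
print(f"{distance_covered(samples):.2f} m")  # 2.40 m over 24 intervals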
  • WWW.COMPUTERWEEKLY.COM
    AI chip restrictions limit Nvidia H20 China exports
The US government’s export controls have come into effect, limiting Nvidia’s ability to sell its H20 chip in China

By Cliff Saran, Managing Editor
Published: 16 Apr 2025 14:04

Nvidia is expecting a big hit to its business as reports emerge of the White House imposing export restrictions on sales of its H20 GPU (graphics processing unit) in China.

The new restrictions appear to have been carried over from the previous administration’s policy of restricting access to AI chips and advanced AI models. The Framework for Artificial Intelligence Diffusion from the US Bureau of Industry and Security (BIS), published on 15 January 2025, puts in place export administration regulation controls on advanced semiconductors. By imposing controls that allow exports, re-exports and transfers (in-country) of large quantities of advanced computing integrated circuits only to certain destinations and end users, BIS said the export controls could reduce the risk that malicious state and non-state actors gain access to advanced AI models.

At the time, Ned Finkle, vice-president for government affairs at Nvidia, posted a damning indictment of the former US administration’s attempts to curb semiconductor exports. In a blog post, he said: “In its last days in office, the Biden Administration seeks to undermine America’s leadership with a 200+ page regulatory morass, drafted in secret and without proper legislative review. This sweeping overreach would impose bureaucratic control over how America’s leading semiconductors, computers, systems and even software are designed and marketed globally.”

Finkle described the Biden Administration’s approach as “attempting to rig market outcomes and stifle competition”, adding: “The Biden Administration’s new rule threatens to squander America’s hard-won technological advantage”, and said the controls “would do nothing to enhance US security”.

The new rules were set to come into effect on 15 April, and it appears the Trump administration is not rescinding the restrictions the Biden Administration had put in place. According to a BBC news story, Nvidia has now said the Trump administration has informed it that a licence will be required to export the H20 chip to China.

In the transcript of the company’s Q4 2025 earnings call, posted on Motley Fool, Nvidia chief financial officer Colette Kress noted that, as a percentage of its total datacentre revenue, datacentre sales in China remained well below levels seen at the onset of export controls. “Absent of any change in regulations, we believe that China shipments will remain roughly at the current percentage. The market in China for datacentre solutions remains very competitive,” she said.

While the company said it would continue to comply with export controls while serving its customers, its share price took a hit as a result of the controls coming into effect.

The H20 is a less powerful Nvidia AI accelerator, designed for the Chinese market. According to Antonia Hmaidi, senior analyst at the Mercator Institute for China Studies, Nvidia sold a million H20s to Chinese customers in 2024. While the Financial Times recently reported that Chinese rival Huawei has been ramping up production of its home-grown AI offering, the Ascend chip, Hmaidi noted that in 2024 it shipped only 200,000 units, which “reveals structural issues in China’s semiconductor industry”.
Hmaidi also noted that Huawei’s software lags behind Nvidia’s, with developers in China reluctant to adopt the chip for training most models.

The export changes affecting the H20 come just a day after the Trump administration announced Nvidia was leading what it described as an “American-made chips boom”. Nvidia said that within the next four years, it plans to produce up to half a trillion dollars of AI infrastructure in the US through partnerships with TSMC, Foxconn, Wistron, Amkor and SPIL.

The company said it has started production of its Blackwell chips at TSMC’s chip plants in Phoenix, Arizona. It is also building supercomputer manufacturing plants in Texas, with Foxconn in Houston and with Wistron in Dallas. According to Nvidia, production at both plants is expected to ramp up in the next 12 to 15 months.

Read more about US tariffs and restrictions

• Trump puts stamp on CHIPS Act deals with new office: President Donald Trump tasks a new US Investment Accelerator office with managing the CHIPS Program Office and reducing businesses’ regulatory burdens.
• Biden’s AI diffusion rule met with heavy backlash: A newly proposed rule from the Biden administration targeting AI and chip exports has been met with intense criticism.
  • WWW.COMPUTERWEEKLY.COM
    Microsoft remains committed to AI in France
With a large ecosystem of partners in France in both the public and private sectors, Microsoft already has a big stake in the country. But last May, the company announced it would be upping the ante with an investment of €4bn to accelerate the adoption of artificial intelligence (AI) and cloud technologies.

The company said that much of the money will go towards developing a datacentre using the latest generation of technology and training citizens on AI. Both improved infrastructure and enhanced AI skills figure prominently in France’s National Strategy for AI and the recommendations of the French Commission for Artificial Intelligence, which aim to position France as a leader in both the development and use of AI.

In addition to building a new datacentre near Mulhouse, Microsoft will use some of the funding to expand its datacentre capacity in Paris and Marseilles. The company announced in May 2024 that it plans to have a total of 25,000 GPUs available for AI workloads by the end of 2025. The expanded datacentre capacity should provide a boost across the economy, as AI and cloud are being used in all industries in France.

In her keynote at a Microsoft event in March, Corine De Bilbao, president of Microsoft France, said that if AI is applied the right way, it can double France’s economic growth between now and 2030. Not only will AI enable faster innovation, but it will also help organisations in the country face the talent shortage and reinvent manufacturing processes.

Infrastructure alone is not enough – a skilled population and a healthy ecosystem are also needed. This is why, according to De Bilbao, Microsoft will train one million French people by 2027 and will help 2,500 startups during the same timeframe.

The recommendations of the French Artificial Intelligence Commission include training in different forms, such as holding ongoing public debates on the economic and societal impacts of AI, adding AI to higher education programmes in many areas of study, and training people on specific AI tools. Microsoft intends to help in these areas and train office workers, so they know how to prompt AI tools to get the best results, and so they understand what happens with their data and how it’s processed. The company will also train developers and make sure companies of all sizes have the skills they need to use Microsoft’s latest tools.

Microsoft is already involved in the startup community – for example, it’s one of the partners of Station F, which claims to be the world’s largest startup campus. A thousand startups are hosted in Station F, which offers more than 30 programmes to help entrepreneurs. Philippe Limantour, CTO of Microsoft France, told Computer Weekly: “We have a dedicated programme in Station F called Microsoft GenAI Studio that supports select startups. And we help startups with our technology and by providing training.”

AI comes with a new set of security threats. But it also delivers some new tools that can be used to protect organisations and individuals. According to Vasu Jakkal, corporate vice-president of Microsoft Security, business and technology leaders are particularly concerned with leakage of sensitive data, and indirect prompt injection attacks. Jakkal said in her keynote that all datacentres will be protected with new measures to counter attacks specific to AI – attacks on prompts and models, for example.

Jakkal also spoke about how GenAI can be used to boost cyber security.
For example, Microsoft Security Copilot, which was launched last year, helps not only to detect security incidents and respond to them, but also to find the source. She said during her keynote that Microsoft detected more than 30 billion phishing emails targeting customers between January and December 2024, a volume of attacks that far surpasses what teams can handle manually. She said a brand new set of phishing triage agents in Microsoft Security Copilot can now handle some of the work, freeing teams to focus on more complex cyber threats and take proactive measures.

Scientific research and engineering were also big topics of conversation during the event, with Antoine Petit, CEO of the French National Centre for Scientific Research (CNRS), saying during a panel discussion that CNRS has opened a group called AI for Science and Science for AI. Petit said the centre recognises the importance not only of conducting more research in AI, but also of applying AI to help scientists in other research. But he said the technology is still in its infancy, so nobody knows exactly how it will affect science.

Alain Bécoulet, deputy director general of ITER, who was on the same panel, said that scientific organisations need to free researchers from some of the more mundane tasks so they can play their role as creators. AI may offer a way of providing the information that is both necessary and sufficient, so that researchers and engineers can fulfil their roles.

A topic that permeated all discussions at the event was the ethical use of AI in France. Limantour told Computer Weekly that Microsoft has been focused on responsible AI for a long time. This is not only for reasons of compliance; the company also thinks responsible use of AI is the best way to get value out of the technology. “The future is bright for people who are trained to use AI safely,” Limantour said.

Read more about AI in France

• L’Oréal: Making AI worth it.
• TCS to inject AI and quantum computing into aerospace through French delivery centre.
• AI Action Summit: Two major AI initiatives launched.
  • WWW.COMPUTERWEEKLY.COM
    Saudi Arabia struggling to reach global leadership in deeptech
Petrostate monarchy is trying to build a surrogate industry made of foreign startups because its own ecosystem is too immature

By Mark Ballard
Published: 16 Apr 2025 11:15

Saudi Arabia is investing in foreign deeptech startups to get them to come to the country and help it build an industry, because its own sci-tech ecosystem is too immature to meet its ambition of becoming a global leader in state-of-the-art fields of technology.

The yawning gap to the goal it set for deeptech – to help it transform from monolithic petrostate to diverse hi-tech economy – was apparent in a 2019 study by Boston Consulting Group (BCG), which found that of 8,600 deeptech firms worldwide, the Middle East had just three in the UAE and two in Iran. The US, the leader, had 4,198 firms. The UK, almost joint-third with Germany and behind China, had 435.

By 2022, Saudi Arabia (KSA) had 43, its ministry of communications and information technology (MCIT) said upon launching its roadmap to achieving global leadership in January. That was six years after it embarked on its Vision 2030 strategy to build tech industries to drive its transformation.

But it has made some notable investments in foreign firms that will help it establish a surrogate deeptech industry while it pursues the long, hard work it has begun in building a substrate of scientists and investors from which an indigenous deeptech industry can grow. This, along with reform of burdensome regulations, might remove obstacles it has found deter not only its own aspiring entrepreneurs, but foreign startups as well.

It promotes some success in spinning deeptech startups out of King Abdullah University of Science and Technology (Kaust), where it has concentrated efforts to build advanced research facilities to attract foreign scientists, and given them startup funding to turn their breakthroughs into ventures that might one day make money.

Investing in education

It has meanwhile been investing in STEM education in schools, channelling two million students into university in the hope that its young population demographic will give it an edge over other countries. Last year, it counted 20,000 deeptech researchers in its universities, though it does not say how many of those are foreign. Their numbers increased 75% in five years, according to its roadmap report, and it declared a goal to increase them by another 700% in the next five years. Those researchers it does have rank among the best in the world.

To support them, it pledged a tenfold increase in public spending on research, development and innovation. It’s cutting red tape to make it easier to start and run a business, and creating special economic zones with laxer rules.

Its problems, however, are systemic. The scientists that deeptech firms need are leaving the country in a brain drain that by last count amounted to 81%. Though startup funding has boomed in Saudi Arabia, little of it has gone to deeptech because its “infant” sector is so underdeveloped, the roadmap report says. Most of it came from public budgets, via Wa’ed Ventures, the investment arm of state oil firm Aramco, and Kaust. A growing network of startup investors, incubators and accelerators is not equipped to handle deeptech. Deeptech startups are defined by their great need for scientists, guidance and lots of capital to see them through lengthy, R&D-laden startup periods.
Given that, it might take KSA a decade to build an industry, Alizée Blanchin, research director of influential deeptech dealmaker Hello Tomorrow – whose team co-authored the roadmap report with Kaust and MCIT – told Computer Weekly. “If you look at the number of startups that are spinouts from universities or research centres, that is very nascent still,” she said.

Hence, the Saudi monarchy has made getting help from foreign startups a key element of its strategy. However, its sci-tech business ecosystem is still so immature that foreign startups do not want to come. KSA has therefore settled on bringing foreign deeptech startups into the country by investing in them instead, and using them to build its ecosystem. Blanchin said that with that strategy, it might even achieve its ambition.

It’s also making up for its immaturity with its speed and decisiveness. “Contrary to other ecosystems, they make things happen,” she said. “They put the resource in, and they put things together, and find the means and the partners very quickly. In Europe, we take three years to think about something and then a year-and-a-half to put it together.”

The strategy will be exemplified later this year, when Saudi state oil firm Aramco will crank up the world’s largest industrial quantum computer, with intimate help from Pasqal, the pioneering French startup supplying it. Aramco investment arm Wa’ed Ventures sealed the deal with two investments in Pasqal, the latter of which saw Wa’ed join a consortium investment of €108m when Pasqal was just three years old. Pasqal now has a regional headquarters at Kaust.

“We got involved in Saudi thanks to Aramco,” Pasqal CEO Georges-Olivier Reymond told Computer Weekly. “[In] 2022, they had the idea to be a customer and investor. It was a fantastic opportunity for a company like Pasqal. This was the first time ever a private company was buying a quantum computer. It was a big deal.”

Pasqal also got a commitment from Aramco to build and demonstrate the first use cases for quantum’s application in industry, the crucial ingredient quantum computing lacks as its startups strive to commercialise their costly R&D and turn a profit.

Other foreign deals KSA cited in a recent report intended to form the basis of a roadmap to global leadership in quantum similarly involve it securing help developing systems that use quantum computers, not the scientific expertise to build them.

Asked what chance KSA had of achieving its ambition, Jean-François Bobier, vice-president of deeptech at BCG, said: “Their corporations have a strong willingness to experiment and invest in new technology. There are not many companies willing to deploy or be early adopters. On that point, it’s attractive.”

MCIT, Kaust and Aramco were not prepared to comment.
  • WWW.COMPUTERWEEKLY.COM
    MITRE warns over lapse in CVE coverage
One of the cyber security world’s most significant assets, the common vulnerabilities and exposures (CVE) system operated by US-based non-profit MITRE, appears to be heading for trouble after it emerged that the contract pathway for MITRE to continue to run the project on behalf of the US authorities is set to lapse on Wednesday 16 April with no replacement ready.

In a letter to MITRE board members circulated today, a copy of which has been reviewed by Computer Weekly, Yosry Barsoum, vice-president and director at the Centre for Securing the Homeland (CSH) at MITRE, said the US government was currently making “considerable efforts” to continue MITRE’s longstanding role in the CVE programme.

“If a break in service were to occur, we anticipate multiple impacts to CVE, including deterioration of national vulnerability databases and advisories, tool vendors, incident response operations, and all manner of critical infrastructure,” wrote Barsoum. “MITRE continues to be committed to CVE as a global resource. We thank you as a member of the CVE Board for your continued partnership,” he added.

A spokesperson for MITRE confirmed the legitimacy of Barsoum’s statement to Computer Weekly. They described the CVE programme as a “foundational pillar” of the cyber sector, anchoring a global industry worth close to $40bn (£30bn).

The 25-year-old CVE system is designed to serve as a reference and repository for disclosed cyber security vulnerabilities, and has been maintained by MITRE since its inception at the end of the 1990s, with funding drawn from the National Cyber Security Division of the Department of Homeland Security.

Over the years, its impact on the world of security research has been of immense significance, providing cyber defenders with data on emerging vulnerabilities and threats, some of which have been implicated in some of the largest cyber incidents ever seen – WannaCry, SolarWinds Sunburst, Log4j and MOVEit, to name but a few. Its continuing work will be familiar to most thanks to the sheer volume of CVEs – recognisable by their unique identifiers comprising the letters CVE, the year, and a numeric code – released on the second Tuesday of every month by Microsoft in its Patch Tuesday update.

If it were to cease operations, even temporarily pending a contract renewal, the impact would be keenly felt across the entire technology industry. Patch Tuesday aside, the number of CVEs of all types being discovered and disclosed is running at record highs and shows no signs of slowing. Disruption to the CVE system would be a gift to financially motivated cyber criminals and nation-state actors alike, who would be able to swiftly take advantage of any downtime as they continue to seek out, develop and weaponise new vulnerabilities, while security professionals would be left fumbling in the dark.

Coming amid deep and painful government cuts being made in the US, the potential risk to the national security postures of the US and its allies from states such as China and Russia is also extremely serious – a fact not lost on many members of the security community, who took to social media late on 15 April to spread the word.
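To make the identifier scheme described above concrete, here is a minimal sketch of validating CVE IDs: the letters CVE, a four-digit year, then a sequence number of four or more digits (the scheme was widened beyond four digits in 2014).

import re

CVE_PATTERN = re.compile(r"CVE-\d{4}-\d{4,}")

def is_valid_cve_id(identifier):
    """Return True if the string matches the CVE naming scheme."""
    return CVE_PATTERN.fullmatch(identifier) is not None

assert is_valid_cve_id("CVE-2021-44228")   # Log4Shell, the Log4j flaw mentioned above
assert not is_valid_cve_id("CVE-24-1")     # year and sequence number too short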
Writing on LinkedIn, one observer speculated that the lapsing of MITRE’s contract was by design, and that taken alongside cuts to the likes of the Cybersecurity and Infrastructure Security Agency (CISA) and the National Institute of Standards and Technology (NIST), the US was tearing down core security institutions amid a significant ongoing cyber crisis.

But with customary community spirit, many cyber professionals are already stepping up to address the looming shutdown. Patrick Garrity, a security researcher at VulnCheck, said: “We want to take a moment to thank MITRE for its decades of contributions to the CVE programme. Given the current uncertainty surrounding which services at MITRE or within the CVE programme may be affected, VulnCheck has proactively reserved 1,000 CVEs for 2025.”

Garrity added that VulnCheck’s reporting service would continue to assign CVE numbers for as long as it could do so. “VulnCheck is closely monitoring the situation to ensure that both the community and our customers continue to receive timely, accurate vulnerability data,” he said.

MITRE added that historical CVE records will continue to be available on GitHub.

Read more about MITRE’s work

• A MITRE collaboration with industry looks to strengthen AI defences, with trusted contributors set to receive protected, anonymised data about AI cyber incidents.
• The Mitre ATT&CK framework may seem daunting at first, but it is a key tool that helps SOC teams conduct threat modelling. Learn how to get started.
• An FDA MITRE playbook highlights key medical device security considerations and a resource appendix to help healthcare organisations navigate incident preparedness and response.
  • WWW.COMPUTERWEEKLY.COM
    UK challenger bank targets US’s mid-tier banking sector with tech platform
Starling Bank expands its banking as a service platform sales operation into the North American market

By Karl Flinders, Chief reporter and senior editor EMEA
Published: 15 Apr 2025 15:45

Starling Bank has created a US subsidiary where it will target mid-sized banks with its banking software as a service (BaaS) offering.

The tech operation of Starling Bank has already won customers in Europe and Australia for its platform, which was originally launched in the UK in 2018 and rebadged as Engine in 2022. Its first BaaS customer was savings fintech Raisin. In 2021, it expanded its BaaS sales operation into continental Europe, and has since launched in Australia. The US operation will be the business’s first overseas subsidiary, with a regional headquarters planned for the US East Coast.

It’s not just banks taking up BaaS platforms. Non-banking businesses such as retailers, ecommerce companies and distributors are increasingly looking to offer financial products to their customers. This could be credit, loans or even debit cards. To provide these services, however, they need to be regulated and have access to expensive banking tech, so businesses are instead using financial services offered by banks and fintechs. The application programming interface (API)-driven services are regulated through the bank, which also provides the tech infrastructure.

In the past, financial services firms and retail businesses have partnered with banks to offer financial services, whereby they brand the front end, but the banking service, which includes the systems and regulatory approval, is provided by a traditional bank. However, demand for financial services to be embedded into other services has changed this.

According to Starling, the US presents significant opportunities for Engine sales due to the large number of mid-tier banks, community banks and credit unions, “many of which are keen to launch new digital services for their customers”.

Starling was founded in 2014 by Anne Boden, a banker with an IT background, and received its banking licence in 2016, launching the following year. It aims to use modern technology to make banking as convenient as possible, while enabling customers to benefit from the data they generate in their everyday lives. It reached unicorn status, a value of more than $1bn, by 2021.

As a company that leads on tech, with a tech-heavy workforce, the banking-as-a-service model is a natural fit alongside the more traditional services of a bank.

“This is a significant step as we take the technology that has enabled Starling’s success in the UK to more financial institutions around the world,” said Raman Bhatia, CEO at Starling Bank. “In Engine, we have a world-class software as a service business that delivers the technology and the expertise that banks need to succeed in the digital age.”

According to research from Research and Markets, the global market for BaaS was valued at $29.5bn in 2024, and is projected to reach $74.8bn by 2030, growing 16.8% a year.
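To illustrate the embedded-finance pattern described above, here is a deliberately hypothetical sketch: a retailer’s backend asks a BaaS provider to open an account over a REST API, while the banking licence and ledger sit with the provider. The endpoint, fields and token are invented for illustration and do not describe Engine’s, or any vendor’s, actual API.

import requests

BAAS_API = "https://api.baas-provider.example/v1"  # hypothetical base URL
API_TOKEN = "sk_test_placeholder"                  # credential issued by the provider

def open_customer_account(customer_id, currency="GBP"):
    """Open an account for a retail customer via the (hypothetical) BaaS API."""
    resp = requests.post(
        f"{BAAS_API}/accounts",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"customer_id": customer_id, "currency": currency},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. an account identifier the retailer brands as its own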
  • WWW.COMPUTERWEEKLY.COM
    Hertz warns UK customers of Cleo-linked data breach
Car hire giant Hertz reveals UK customer data was affected in a cyber incident orchestrated via a series of vulnerabilities in Cleo managed file transfer products

By Alex Scroxton, Security Editor
Published: 15 Apr 2025 15:48

Car hire giant Hertz has disclosed a worldwide data breach affecting the UK and other major markets, after becoming embroiled in a serious compromise of Cleo Communications’ suite of managed file transfer (MFT) products by the Clop (aka Cl0p) ransomware gang.

Although parent Hertz Corporation – which besides the eponymous rental firm operates the Dollar and Thrifty brands – was earlier named by Clop on its leak site, the organisation had previously said there was no evidence of an intrusion. In its latest notice, it did not name Clop or officially disclose an extortion or ransomware attack, but revealed that the incident appeared to have affected the personal information of certain individuals.

A spokesperson said: “On 10 February 2025, we confirmed that Hertz data was acquired by an unauthorised third party that we understand exploited zero-day vulnerabilities within Cleo’s platform in October 2024 and December 2024. Hertz immediately began analysing the data to determine the scope of the event and to identify individuals whose personal information may have been impacted.

“We completed this data analysis on 2 April 2025, and concluded that the personal information involved in this event may include the following regarding UK individuals: name, contact information, date of birth, driver’s license information and payment card information.”

Hertz has reported the incident to law enforcement and is in the process of engaging relevant national regulators. It is also working with Kroll to provide two years of free identity monitoring services to potentially affected individuals. This offer is also being made available to affected customers in the US – where other data, including social security numbers, as well as Medicare and Medicaid identification, has also been affected. Customers in Australia, Canada, the European Union (EU) and New Zealand can also consult localised notices for further guidance.

US-based Cleo has become the latest in a long line of file transfer services and tools to have been targeted by Clop – probably the most notable of these being the compromise of Progress Software’s MOVEit tool in the spring of 2023.

The Clop attacks on Cleo arose through two common vulnerabilities and exposures (CVEs), tracked as CVE-2024-50623 and CVE-2024-55956, in Cleo’s Harmony, VLTrader and LexiCom products. The first arises through improper handling of file uploads in the Autorun directory, which enables an attacker to upload malicious files to a server and execute them. The second enables remote code execution (RCE) through Autorun by allowing an unauthenticated user to import and execute arbitrary Bash or PowerShell commands on the host using default settings. It also lets an attacker deploy modular Java backdoors to steal data and conduct lateral movement.

Dray Agha, senior manager of security operations at Huntress, which has been at the forefront of tracking the Cleo incident since the vulnerabilities first surfaced, said: “The Hertz data breach underscores the significant risks posed by unpatched zero-day vulnerabilities in widely used third-party platforms like Cleo.
“This highlights the importance of maintaining robust vulnerability management programmes to identify and address security gaps in software promptly, especially those used for sensitive data transfer.

“The breach also reflects a growing trend of cyber criminals targeting secure file transfer platforms, which are integral to many organisations’ operations. The evolving tactics of ransomware groups, shifting focus from encryption to data theft and extortion, signal the need for comprehensive cyber security strategies, including encryption of sensitive data at rest and in transit, and heightened monitoring of external connections.”

Read more about Clop’s Cleo compromise

• The exploitation of two new vulnerabilities in a popular file transfer service saw the Clop ransomware gang soar in February, according to NCC.
• The new Cleo zero-day vulnerability, CVE-2024-55956, is separate from CVE-2024-50623, despite both vulnerabilities being used by threat actors to target the same endpoints.
• In December 2024, threat actors began exploiting a new zero-day vulnerability in Cleo’s managed file transfer products, but the details of the flaw remained unclear.
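Given that both flaws were abused via files planted in an Autorun directory, one crude compensating control – sketched below as an assumption-laden illustration, not vendor guidance; the install path and allowlist are invented – is to alert on unexpected files appearing there alongside prompt patching.

import time
from pathlib import Path

AUTORUN_DIR = Path("/opt/cleo/autorun")   # hypothetical install location
ALLOWLIST = {"housekeeping.xml"}          # files the team knows belong there

def watch(interval_seconds=30):
    """Poll the directory and flag any file not on the allowlist."""
    seen = set()
    while True:
        for entry in AUTORUN_DIR.iterdir():
            if entry.name not in ALLOWLIST and entry.name not in seen:
                seen.add(entry.name)
                print(f"ALERT: unexpected file in Autorun directory: {entry.name}")
        time.sleep(interval_seconds)

if __name__ == "__main__":
    watch()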
  • WWW.COMPUTERWEEKLY.COM
    Roadmap for commercial adoption of quantum computing gains clarity
Over the past few months, some significant breakthroughs in quantum computing technology have indicated how quickly the technology is evolving. While it remains very much in the domain of academia and researchers tackling error correction, the roadmaps of quantum computing businesses suggest that useful machines are on their way.

IBM’s roadmap shows that this year there will be a move away from its current Heron machine to a new device called Flamingo, which is effectively based on connecting two Heron devices together. During its first quantum computing developer conference in November 2024, IBM demonstrated the connectivity technology, called L-couplers, which connects two Heron R2 chips with four connectors measuring up to a metre long. Flamingo marks the start of a three-year programme to evolve the number of gates on a quantum device from 5,000 to 15,000 by 2028, using a modular quantum computing architecture.

In February, Microsoft published research on topological qubits built from quasiparticles called Majorana fermions, which the company anticipates will offer more stable qubits, requiring less error correction. It is also working on a device called Majorana 1, designed to detect and control these qubits so they can be used to run quantum computing calculations.

Will Ashford-Brown, director of strategic insights at Heligan Group, said: “Every day we inch closer to realising commercial quantum usage for real applications. Size, cooling, price, speed and impact are all part of the long tail of improvements, but it would seem we are at the point where commercial application, investment and opportunity are knocking at the door.”

He anticipates that the availability of a new generation of quantum computing platforms will result in a surge in customer demand. “Presently, the market has been mostly limited to national research laboratories and supercomputing labs,” said Ashford-Brown. “But commercial adoption is getting started, beginning with the tech giants. Microsoft, Amazon, Google and IBM have all partnered with quantum computing startups to provide quantum-based cloud services or are developing their own machines.”

While quantum computing evolves, there’s also a lot of interest in hybrid approaches that can take advantage of the technology to speed up computationally complex calculations. D-Wave, for instance, recently expanded its quantum-optimisation offering, with several initiatives aimed at boosting adoption. It said Ford Otosan, a joint venture between Ford Motor Company and Koç Holding in Turkey, has deployed a hybrid-quantum application in production based on D-Wave technology, which streamlines manufacturing processes for the Ford Transit.

The US has an eight-year plan to make quantum computing commercially useful. Alice & Bob, Quantinuum and Rigetti are among 10 quantum computing businesses selected by the US Department of Defense to participate in the first stage of its Quantum Benchmarking Initiative (QBI), which aims to assess the feasibility of building an industrially useful quantum computer by 2033.

These developments represent a small snapshot of the immense work taking place across the tech sector to develop quantum computing and hybrid architectures that use quantum technology to accelerate computationally difficult tasks. Graeme Malcolm, founder and CEO of M Squared Lasers, believes there is now a need for a decisive commercial push.
“The industry is on the cusp of crossing the so-called ‘quantum valley of death’ – a pivotal transition from research excellence to commercial reality,” he said.

Read more about quantum computing

• Research team demonstrates certified quantum randomness: A 56-qubit trapped ion quantum computer from Quantinuum has demonstrated quantum supremacy as a random number generator.
• When will quantum computing be available? It depends: Quantum computing availability timelines depend on who is measuring and how they interpret ‘availability’. Varied definitions make for a complex market.

Given the government’s recent injection of funding, which will see £121m put up to drive development of quantum technology in the UK, he added: “Our collective focus must now shift to industrialisation. A nation without quantum will be a nation without critical advantage.”

However, in spite of the progress being made, a survey from Economist Impact recently reported that 57% of respondents believe misconceptions about quantum computing are actively hindering advancement. The findings suggest a disconnect between technological development and business readiness, reinforcing the need for better communication, education and alignment at the executive level to maintain the momentum of progress.

Helen Ponsford, head of trade, technology and industry events programming at Economist Impact, said: “With 80% of respondents stating that demonstrating industry-specific use cases is essential to accelerating adoption, and two-thirds highlighting the importance of proving return on investment, the message is clear: commercial relevance must closely follow scientific breakthroughs for quantum to sustain its growth.”

Although there has been plenty of progress in making quantum computing technology available to software developers through platforms and software development kits, no discussion of quantum computing is complete without addressing security concerns, which need to be in place well before mass adoption.

Looking at quantum-safe cryptography, Daniel Shiu, chief cryptographer at Arqit, said: “Even though the timeline for a viable quantum computer is uncertain, two things are clear – the industry is advancing and the threat is already here. Any systems compromised today could have their data decrypted once quantum machines arrive, unless adequately protected. Quantum security is a concern we need to address now.”
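A toy sketch gives a feel for how the hybrid quantum-optimisation work described above is expressed in practice, using D-Wave’s open-source dimod library. The problem and weights below are invented, and the model is solved classically here with ExactSolver; a real deployment would hand the same model to a quantum or hybrid sampler rather than brute force.

import dimod

# Toy problem: pick at most one of two conflicting jobs, each with a reward.
# Minimise  -2*x0 - 3*x1 + 5*x0*x1  (the quadratic term penalises overlap).
bqm = dimod.BinaryQuadraticModel(
    {"x0": -2.0, "x1": -3.0},    # linear terms: negated rewards
    {("x0", "x1"): 5.0},         # quadratic term: conflict penalty
    0.0,                         # constant offset
    dimod.BINARY,
)

# ExactSolver enumerates every assignment - fine for toy models only.
best = dimod.ExactSolver().sample(bqm).first
print(best.sample, best.energy)  # expect {'x0': 0, 'x1': 1} with energy -3.0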