Computer Weekly
Computer Weekly is the leading technology magazine and website for IT professionals in the UK, Europe and Asia-Pacific
  • Science & Technology
Latest Updates
  • WWW.COMPUTERWEEKLY.COM
    M&S suspends all online sales as cyber attack worsens
    M&S shuts down online sales as it works to contain and mitigate a severe cyber attack on its systems. By Alex Scroxton, Security Editor. Published: 25 Apr 2025 16:15. Marks and Spencer (M&S) has suspended all sales via its website and mobile application as it continues to work to contain an unspecified cyber security incident. “As part of our proactive management of a cyber incident, we have now made the decision to pause taking orders via our M&S.com websites and apps,” a spokesperson said in an update posted to social media platform X. “Our product range remains available to browse online. We are truly sorry for this inconvenience. Our stores are open to welcome customers.” Earlier in the week, M&S had said there was no need for customers to take any immediate action, and the spokesperson confirmed this remains the case; should it change, it will be communicated. “Our experienced team – supported by leading cyber experts – is working extremely hard to restart online and app shopping,” they said. The cyber security incident began over the long Easter weekend and resulted initially in the suspension of contactless payments and the click-and-collect online shopping service. The expansion of its scope lends weight to growing speculation that M&S is dealing with some form of ransomware or extortion incident, although this has not been confirmed. Read more about the M&S incident: 22 April 2025: A cyber attack at Marks & Spencer has caused significant disruption to customers, leaving them unable to make contactless payments or use click-and-collect services. 24 April: M&S is still unable to provide contactless payment or click-and-collect services amid a cyber attack that it says has forced it to move a number of processes offline to safeguard its customers, staff and business. M&S is known to be working with third-party security providers and the National Cyber Security Centre (NCSC) to establish precisely what has happened, but as is often the case in such scenarios, in-depth information is rarely released during the initial incident investigation, as to do so can cause further problems for the victims. “This latest update highlights that the incident is now having a material impact, with all online and app sales being paused,” said William Wright, CEO of security services provider Closed Door Security. “This will create a huge inconvenience for customers and will also significantly impact M&S financially. Data shows that almost a quarter of the store’s sales happen online, so no matter how long this pause is put in place, it will hurt M&S financially.” Wright observed that although M&S’s official line is that customer data has not yet been impacted, this could easily change at any moment as new forensic findings come to light. He reiterated general advice on preventing fraudsters and scammers from taking advantage of the crisis. “M&S customers should keep an eye on their online accounts and bank statements and also be on guard,” he said. “We don’t know if criminals have accessed any customer data, but it’s always safer to be on guard.” Attackers will also use the ongoing incident to conduct phishing campaigns, with lures designed to look like genuine communications from M&S – possibly even claiming to offer further information on the incident – aimed at tricking their recipients into handing over personal or financial information. 
“It is essential that online users take note of this threat and treat all communications with caution,” said Wright. “Avoid clicking on links and attachments from unknown senders and always check the address where an email is coming from. The best way to keep updated on information around the incident is to visit the M&S corporate website or monitor their official social channels.”
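Wright’s advice to “always check the address where an email is coming from” can be illustrated with a few lines of code. The TypeScript sketch below is purely a teaching aid – the domain list is hypothetical and makes no claim to reflect M&S’s real sending domains – and it simply extracts the domain from a sender address and accepts only exact matches, so look-alike domains fail the check. In practice, mail filtering relies on standards such as SPF, DKIM and DMARC rather than string comparison.

```typescript
// Illustrative only: check a sender address against domains the recipient trusts.
// The domain list is hypothetical; real anti-phishing controls use SPF/DKIM/DMARC.
const trustedDomains = ["marksandspencer.com", "email.marksandspencer.com"];

function senderLooksLegitimate(fromAddress: string): boolean {
  const match = fromAddress.toLowerCase().match(/@([a-z0-9.-]+)$/);
  if (!match) return false;                     // malformed address: treat as suspect
  const domain = match[1];
  // Exact match only, so "marksandspencer.com.evil.net" does not pass.
  return trustedDomains.includes(domain);
}

console.log(senderLooksLegitimate("offers@marksandspencer.com"));      // true
console.log(senderLooksLegitimate("support@marksandspencer.com.xyz")); // false
```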
  • WWW.COMPUTERWEEKLY.COM
    UK MoJ crime prediction algorithms raise serious concerns
    Data-based profiling tools are being used by the UK Ministry of Justice (MoJ) to algorithmically “predict” people’s risk of committing criminal offences, but pressure group Statewatch says the use of historically biased data will further entrench structural discrimination. Documents obtained by Statewatch via a Freedom of Information (FoI) campaign reveal the MoJ is already using one flawed algorithm to “predict” people’s risk of reoffending, and is actively developing another system to “predict” who will commit murder. While authorities deploying predictive policing tools say they can be used to more efficiently direct resources, critics argue that, in practice, they are used to repeatedly target poor and racialised communities, as these groups have historically been “over-policed” and are therefore over-represented in police datasets. This then creates a negative feedback loop, where these “so-called predictions” lead to further over-policing of certain groups and areas, thereby reinforcing and exacerbating the pre-existing discrimination as increasing amounts of data are collected. Tracing the historical proliferation of predictive policing systems in their 2018 book Police: A field guide, authors David Correia and Tyler Wall argue that such tools provide “seemingly objective data” for law enforcement authorities to continue engaging in discriminatory policing practices, “but in a manner that appears free from racial profiling”. They added it therefore “shouldn’t be a surprise that predictive policing locates the violence of the future in the poor of the present”. Computer Weekly contacted the MoJ about how it is dealing with the propensity of predictive policing systems to further entrench structural discrimination, but received no response on this point. Known as the Offender Assessment System (OASys), the first crime prediction tool was initially developed by the Home Office over three pilot studies before being rolled out across the prison and probation system of England and Wales between 2001 and 2005. According to His Majesty’s Prison and Probation Service (HMPPS), OASys “identifies and classifies offending-related needs” and assesses “the risk of harm offenders pose to themselves and others”, using machine learning techniques so the system “learns” from the data inputs to adapt the way it functions. Structural racism and other forms of systemic bias may be coded into OASys risk scores – both directly and indirectly Sobanan Narenthiran, Breakthrough Social Enterprise The risk scores generated by the algorithms are then used to make a wide range of decisions that can severely affect people’s lives. This includes decisions about their bail and sentencing, the type of prison they’ll be sent to, and whether they’ll be able to access education or rehabilitation programmes while incarcerated. The documents obtained by Statewatch show the OASys tool is being used to profile thousands of prisoners in England and Wales every week. In just one week, between 6 and 12 January 2025, for example, the tool was used to complete a total of 9,420 reoffending risk assessments – a rate of more than 1,300 per day. As of January this year, the system’s database holds over seven million risk scores setting out people’s alleged risk of reoffending, which includes completed assessments and those in progress. 
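The feedback loop critics describe can be made concrete with a deliberately simplified sketch. The TypeScript below is not based on OASys or any MoJ data – the populations, offending rates and policing intensities are invented – but it shows how a score derived from recorded offences, rather than actual behaviour, inflates the apparent risk of whichever group is policed more heavily.

```typescript
// Toy model: two groups with identical underlying behaviour, different policing levels.
interface Group {
  name: string;
  population: number;
  offendingRate: number;       // true underlying rate (identical for both groups)
  policingIntensity: number;   // how heavily the group is policed
}

const groups: Group[] = [
  { name: "Group A", population: 10_000, offendingRate: 0.05, policingIntensity: 1.0 },
  { name: "Group B", population: 10_000, offendingRate: 0.05, policingIntensity: 2.0 },
];

for (const g of groups) {
  const actualOffences = g.population * g.offendingRate;
  // Recorded offences scale with policing intensity, not with behaviour.
  const recordedOffences = actualOffences * g.policingIntensity;
  const dataDrivenRisk = recordedOffences / g.population;
  console.log(`${g.name}: true rate ${g.offendingRate}, data-driven "risk" ${dataDrivenRisk}`);
}
// Group A: true rate 0.05, data-driven "risk" 0.05
// Group B: true rate 0.05, data-driven "risk" 0.1 – same behaviour, double the score
```

If more policing is then directed at the group with the higher score, its recorded offences rise again on the next pass – the “garbage in, garbage out” dynamic described below.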
Commenting on OASys, Sobanan Narenthiran – a former prisoner and now co-CEO of Breakthrough Social Enterprise, an organisation that “supports people at risk or with experience of the criminal justice system to enter the world of technology” – told Statewatch that “structural racism and other forms of systemic bias may be coded into OASys risk scores – both directly and indirectly”. He further argued that information entered in OASys is likely to be “heavily influenced by systemic issues like biased policing and over-surveillance of certain communities”, noting, for example, that: “Black and other racialised individuals may be more frequently stopped, searched, arrested and charged due to structural inequalities in law enforcement.  “As a result, they may appear ‘higher risk’ in the system, not because of any greater actual risk, but because the data reflects these inequalities. This is a classic case of ‘garbage in, garbage out’.” Computer Weekly contacted the MoJ about how the department is ensuring accuracy in its decision-making, given the sheer volume of algorithmic assessments it is making every day, but received no direct response on this point. A spokesperson said that practitioners verify information and follow detailed scoring guidance for consistency. While the second crime prediction tool is currently in development, the intention is to algorithmically identify those most at risk of committing murder by pulling a wide variety of data about them from different sources, such as the probation service and specific police forces involved in the project. Statewatch says the types of information processed could include names, dates of birth, gender and ethnicity, and a number that identifies people on the Police National Computer (PNC). Originally called the “homicide prediction project”, the initiative has since been renamed to “sharing data to improve risk assessment”, and could be used to profile convicted and non-convicted people alike. According to a data sharing agreement between the MoJ and Greater Manchester Police (GMP) obtained by Statewatch, for example, the types of data being shared can include the age a person had their first contact with the police, and the age they were first the victim of a crime, including for domestic violence. Listed under “special categories of personal data”, the agreement also envisages the sharing of “health markers which are expected to have significant predictive power”. This can include data related to mental health, addiction, suicide, vulnerability, self-harm and disability. Statewatch highlighted how data from people not convicted of any criminal offence will be used as part of the project. In both cases, Statewatch says using data from “institutionally racist” organisations like police forces and the MoJ will only work to “reinforce and magnify” the structural discrimination that underpins the UK’s criminal justice system. Time and again, research shows that algorithmic systems for ‘predicting’ crime are inherently flawed Sofia Lyall, Statewatch “The Ministry of Justice’s attempt to build this murder prediction system is the latest chilling and dystopian example of the government’s intent to develop so-called crime ‘prediction’ systems,” said Statewatch researcher Sofia Lyall. “Like other systems of its kind, it will code in bias towards racialised and low-income communities. 
Building an automated tool to profile people as violent criminals is deeply wrong, and using such sensitive data on mental health, addiction and disability is highly intrusive and alarming.” Lyall added: “Time and again, research shows that algorithmic systems for ‘predicting’ crime are inherently flawed.” Statewatch also noted that Black people in particular are significantly over-represented in the data held by the MoJ, as are people of all ethnicities from more deprived areas. According to an official evaluation of the risk scores produced by OASys from 2015, the system has discrepancies in accuracy based on gender, age and ethnicity, with the risk scores generated being disproportionately less accurate for racialised people than white people, and especially so for Black and mixed-race people. “Relative predictive validity was greater for female than male offenders, for White offenders than offenders of Asian, Black and Mixed ethnicity, and for older than younger offenders,” it said. “After controlling for differences in risk profiles, lower validity for all Black, Asian and Minority Ethnic (BME) groups (non-violent reoffending) and Black and Mixed ethnicity offenders (violent reoffending) was the greatest concern.” A number of prisoners affected by the OASys algorithm have also told Statewatch about the impacts of biased or inaccurate data. Several minoritised ethnic prisoners, for example, said their assessors entered a discriminatory and false “gangs” label in their OASys reports without evidence, a decision they say was based on racist assumptions. Speaking with a researcher from the University of Birmingham about the impact of inaccurate data in OASys, another man serving a life sentence likened it to “a small snowball running downhill”. The prisoner said: “Each turn it picks up more and more snow (inaccurate entries) until eventually you are left with this massive snowball which bears no semblance to the original small ball of snow. In other words, I no longer exist. I have become a construct of their imagination. It is the ultimate act of dehumanisation.” Narenthiran also described how, despite known issues with the system’s accuracy, it is difficult to challenge any incorrect data contained in OASys reports: “To do this, I needed to modify information recorded in an OASys assessment, and it’s a frustrating and often opaque process. “In many cases, individuals are either unaware of what’s been written about them or are not given meaningful opportunities to review and respond to the assessment before it’s finalised. Even when concerns are raised, they’re frequently dismissed or ignored unless there is strong legal advocacy involved.” While the murder prediction tool is still in development, Computer Weekly contacted the MoJ for further information about both systems – including what means of redress the department envisages people being able to use to challenge decisions made about them when, for example, information is inaccurate. A spokesperson for the department said that continuous improvement, research and validation ensure the integrity and quality of these tools, and that ethical implications such as fairness and potential data bias are considered whenever new tools or research projects are developed. They added that neither the murder prediction tool nor OASys uses ethnicity as a direct predictor, and that if individuals are not satisfied with the outcome of a formal complaint to HMPPS, they can write to the Prison and Probation Ombudsman. 
Regarding OASys, they added there are five risk predictor tools that make up the system, which are revalidated to effectively predict reoffending risk. Commenting on the murder prediction tool specifically, the MoJ said: “This project is being conducted for research purposes only. It has been designed using existing data held by HM Prison and Probation Service and police forces on convicted offenders to help us better understand the risk of people on probation going on to commit serious violence. A report will be published in due course.” It added the project aims to improve risk assessment of serious crime and keep the public safe through better analysis of existing crime and risk assessment data, and that while a specific predictive tool will not be developed for operational use, the findings of the project may inform future work on other tools. The MoJ also insisted that only data about people with at least one criminal conviction has been used so far. Despite serious concerns around the system, the MoJ continues to use OASys assessments across the prison and probation services. In response to Statewatch’s FoI campaign, the MoJ confirmed that “the HMPPS Assess Risks, Needs and Strengths (ARNS) project is developing a new digital tool to replace the OASys tool”. An early prototype of the new system has been in the pilot phase since December 2024, “with a view to a national roll-out in 2026”. ARNS is “being built in-house by a team from [Ministry of] Justice Digital who are liaising with Capita, who currently provide technical support for OASys”. The government has also launched an “independent sentencing review” looking at how to “harness new technology to manage offenders outside prison”, including the use of “predictive” and profiling risk assessment tools, as well as electronic tagging. Statewatch has also called for a halt to the development of the crime prediction tool. “Instead of throwing money towards developing dodgy and racist AI and algorithms, the government must invest in genuinely supportive welfare services. Making welfare cuts while investing in techno-solutionist ‘quick fixes’ will only further undermine people’s safety and well-being,” said Lyall. Read more about law enforcement technology  Met Police challenged on claim LFR supported by ‘majority of Lewisham residents’: A community impact assessment for the Met Police’s deployment of live facial-recognition tech in Lewisham brings into question the force’s previous claims to Computer Weekly that its use of the technology is supported by ‘the majority of residents’. AI surveillance towers place migrants in ‘even greater jeopardy’: The use of autonomous surveillance towers throughout the English coast forces migrants into increasingly dangerous routes and contributes to their criminalisation. Ban predictive policing and facial recognition, says civil society: A coalition of civil society groups is calling for an outright ban on predictive policing and biometric surveillance in the UK.
  • WWW.COMPUTERWEEKLY.COM
    Podcast: RSA 2025 to grapple with AI compliance, US and EU regulation
    We preview RSA 2025 with Vigitrust CEO Mathieu Gorge who looks forward to learning lots around compliance and regulation as CIOs wrestle with artificial intelligence and geopolitical upheavals
  • WWW.COMPUTERWEEKLY.COM
    Data breach class action costs mount up
    Organisations holding data on US citizens must do more to address gaps in their cyber security posture and respond to incidents in a timelier fashion if they are to avoid falling victim to rising legal costs. An analysis of the past six months of data breach filings Stateside, conducted by continuous controls monitoring (CCM) specialist Panaseer, found that organisations are paying out millions of dollars in regulatory fines, class action settlements and individual payouts. From August 2024 to February 2025, the data – drawn from third-party sources – revealed that 43 lawsuits were filed and 73 settlements reached. Panaseer found US organisations have paid a total of $154,557,000 (£116,195,000) in class action costs since last August, with settlements averaging $3m and the largest hitting $21m. Individual payouts to affected employees or customers ranged from $150 a head to $12,000, money that many can ill-afford to add when other costs, such as engaging third-party forensics and remediation services, are taken into account. “While people – and the courts – can be understanding when a company falls victim to an attack, they’re far less forgiving when it looks like the organisation failed in its duty of care around data,” says Jonathan Gill, CEO at Panaseer. “But most breaches don’t happen because companies wilfully ignore security. Instead, they will set a target risk position, then over time slide back and take on more exposure than intended because well-intentioned people don’t have information they can trust, presented in a language they understand, to do the important work. It’s a process problem, not a people problem.”  Read more about data breach costs An effective risk management policy can help companies determine the best ways to offset the costs associated with a data breach and avoid reputational damage. In addition to the $5 million, Deloitte is covering the cost of the RIBridges data breach call centre, credit monitoring and identity protection for affected individuals. IBM publishes data on the spiralling costs of cyber attacks and data breaches, while researchers identify what appears to be the largest ransomware payment ever made. Gill said that without a system of record in place covering incident preparedness, the gap between where businesses think they are and where they actually are can widen until organisations believe they are doing everything right, when the reality is much different. “Assumptions about coverage can mask critical blind spots: unpatched systems, misconfigurations and unnoticed gaps that persist beneath the surface,” he said. “And as our analysis shows, these ‘unknown unknowns’ can be incredibly costly, not just in fines and legal fees, but in reputational damage and loss of customer trust.” The most common failings leading to costly payouts were inadequate cyber security measures, noted in 50% of filings and 97% of settlements; failure to encrypt data, noted in 40% of filings but just 1% of settlements; and delays to breach notifications, noted in 10% of filings and 3% of settlements. Overall, the data show US data breach litigation reached record levels in 2024, with filings doubling over 2023. Notably, states with tougher privacy laws, such as California, Florida, Illinois and New Jersey, unsurprisingly saw the most class action activity. 
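For readers curious how headline figures of this kind are produced, the TypeScript sketch below derives a total, an average and a largest settlement from a list of per-case amounts. The amounts are placeholders rather than Panaseer’s underlying data; the report’s own figures were a $154,557,000 total, an average of around $3m and a $21m maximum.

```typescript
// Hypothetical per-settlement amounts in US dollars (not the real dataset).
const settlementsUsd: number[] = [3_000_000, 21_000_000, 450_000, 1_200_000];

const total = settlementsUsd.reduce((sum, value) => sum + value, 0);
const average = total / settlementsUsd.length;
const largest = Math.max(...settlementsUsd);

console.log(`Total:   $${total.toLocaleString()}`);
console.log(`Average: $${Math.round(average).toLocaleString()}`);
console.log(`Largest: $${largest.toLocaleString()}`);
```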
Gill said organisations needed to recognise that the best defence against winding up in a US court is to be able to demonstrate and prove that they have conducted appropriate and effective due diligence around their security – starting by painting a clear and accurate picture of their core data and IT assets, and the measures that are in place to protect them. “Demonstrating a good faith effort is one of the strongest defences against legal action,” he said. “Yet the root cause of today’s cyber security challenges isn’t just threats, it’s the way we manage them. “The attack surface is expanding, visibility is shrinking and security teams are juggling an ever-growing stack of siloed solutions – 83 on average, from 29 different vendors,” said Gill. “This lack of visibility creates a ripple effect. Security teams struggle to track assets, decision-makers lack the right insights and stakeholders can’t translate technical complexity into business risk. Over time, controls drift, alert fatigue sets in and preventable breaches occur.” To break this cycle, he urged chief information security officers to bring security back to three foundational basics – visibility, alignment and clarity – with a system of record that functions similarly to how Workday works for HR leaders, or Salesforce for sales. “[A] trusted, truthful source gives teams a single, validated view of security data, understandable by all stakeholders,” said Gill. “This in turn allows teams to report on cyber security and drive action based on data-driven insights, mapped to business priorities. “This way, organisations can prevent problems before they escalate, streamline operations and move from reactive firefighting to proactive resilience. And then, even if the worst happens, they can show they did the right things.” 
  • WWW.COMPUTERWEEKLY.COM
    VMware patches put spotlight on support
    Organisations using VMware now have no choice but to buy an annual subscription for a bundled product if they plan to continue using the hypervisor. As Computer Weekly has previously reported, Broadcom has simplified the VMware product family, which is now only available as a subscription, licensed on a per-core basis. Some organisations, like Telefónica Germany, have managed to remain on perpetual licences by purchasing second-hand VMware licences and using a third-party support provider. But a recent security alert has brought into focus the difficulty of keeping licensed copies of VMware running without upgrading to a VMware subscription. Last month, Broadcom published a critical security advisory that covered three new zero-day vulnerabilities affecting multiple VMware products, including ESXi, Workstation and Fusion. The most severe of these was a critical vulnerability in ESXi and Workstation. According to Rapid7, these are not remotely exploitable vulnerabilities – they require an attacker to have existing privileged access on a virtual machine (VM) that is running on an affected VMware hypervisor. In a blog, Rapid7 noted that it may be possible to chain together the three vulnerabilities: “This is a situation where an attacker who has already compromised a virtual machine’s guest OS and gained privileged access (administrator or root) could move into the hypervisor itself.” Broadcom said administrators should assume that all versions of ESXi, vSphere and VCF are affected, apart from versions listed as “fixed”. “If there is any uncertainty about whether a system is affected, it should be presumed vulnerable, and immediate action should be taken,” the Broadcom advisory warned, adding that exploitation of the vulnerabilities has occurred “in the wild”. In terms of VMware users running older versions of ESXi, Broadcom has issued a patch for ESX 6.7, which is available via the Support Portal to all customers. ESX 6.5 customers, meanwhile, need to use the extended support process for access to patches, said Broadcom. It said products that are past their end of general support dates are not evaluated, and urged organisations using vSphere 6.5 and 6.7 to update to vSphere 8. To apply the patches issued by Broadcom, IT decision-makers will need to upgrade to a Broadcom subscription for VMware – unless they are prepared to source second-user licences covering a supported version of vSphere. This provides patches and updates for the latest supported VMware releases.  If managed carefully, moving to a VMware subscription could be the right approach, especially in organisations that can use the full VMware Cloud Foundation (VCF) suite and need a platform that can manage both virtualisation and containerisation. As Holland Barry, field chief technology officer for cloud and infrastructure at DXC Technology, pointed out in a recent Computer Weekly article, organisations adapting to VMware’s evolving licensing models are finding opportunities to optimise costs and enhance efficiencies. “Many have successfully streamlined their IT estates by replacing redundant functionalities – such as logging, observability, automation, software-defined networking, microsegmentation and hyperconverged infrastructure – with integrated solutions now available within their VMware Cloud Foundation model,” he said. For Bola Rotibi, principal analyst at CCS Insight, VCF’s architectural principle is based on building for interoperability. 
For hybrid and multicloud deployment scenarios, VCF provides what Rotibi regards as a consistent, enterprise-grade cloud experience. However, one of VCF’s biggest advantages, according to Rotibi, is its ability to support VMs and Kubernetes-based workloads on a single platform. “Many enterprises are still running legacy applications that rely on virtual machines,” she said. However, they also want to modernise with cloud-native, containerised applications. “Instead of forcing businesses to choose between two separate architectures, VCF seamlessly integrates both.” Barry recommends IT leaders align their hardware footprints to VMware’s new 16-core-per-CPU socket minimum, which, in his experience, is crucial for maximising performance and value. “By carefully recalibrating memory-to-CPU ratios, businesses have ensured that workloads run optimally without unnecessary overhead,” he added. Many IT leaders will not want to take a risk by running IT systems unpatched, but VMware is a mature product, which implies that best practices for maintaining a secure VMware environment are well understood.  According to third-party support provider Spinnaker Support, VMware customers are having to figure out for themselves whether older, unsupported products are impacted by newly discovered vulnerabilities. Looking at a recent vulnerability affecting version 6.7 of VMware, Spinnaker Support said the feature that needed patches was not something built into version 5.5, making the risk irrelevant in organisations using the older version of the VMware product. While Broadcom’s bundling of VMware products simplifies the product family, in Spinnaker’s experience, this means VMware patches are being released for products that many organisations do not use.  Craig Savage, vice-president of cyber security at Spinnaker Support, said: “Broadcom’s bundling strategy is making it harder for customers to separate genuine security risks from noise. When everything is wrapped into large, expensive packages, understanding what truly needs protection – and what doesn’t – becomes far more difficult.” Read more VMware strategy stories Nutanix event shows massive interest in VMware migration: A recent event held by VMware rival Nutanix attracted many people new to the hyperconverged infrastructure provider. What are the options when migrating from VMware: Broadcom’s changes to VMware licensing means some people are facing big price increases – we look at how these can be avoided.
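Returning to the patching advice earlier in this piece, Broadcom’s position that administrators should presume systems vulnerable unless they are listed as fixed can be expressed as a simple inventory check. The TypeScript sketch below compares each host’s build number against a table of first-fixed builds per major version; the host names and build numbers are placeholders, and the authoritative values must come from Broadcom’s advisory rather than this example.

```typescript
// Sketch of a "presume vulnerable unless listed as fixed" inventory check.
interface EsxiHost { name: string; version: string; build: number; }

// Placeholder values only – take real fixed-build numbers from the vendor advisory.
const firstFixedBuild: Record<string, number> = {
  "8.0": 24_000_000,
  "7.0": 23_000_000,
  // 6.x: fixes only via the support or extended-support routes, so nothing listed here.
};

function assessHost(host: EsxiHost): string {
  const fixed = firstFixedBuild[host.version];
  if (fixed === undefined) return `${host.name}: no fixed build listed – presume vulnerable`;
  return host.build >= fixed
    ? `${host.name}: at or above the fixed build`
    : `${host.name}: below the fixed build – patch or mitigate`;
}

const inventory: EsxiHost[] = [
  { name: "esx01", version: "8.0", build: 23_500_000 },
  { name: "esx02", version: "6.7", build: 17_700_523 },
];
inventory.forEach(host => console.log(assessHost(host)));
```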
  • WWW.COMPUTERWEEKLY.COM
    M&S systems remain offline days after cyber incident
    Contactless payments and click-and-collect at Marks and Spencer (M&S) remain unavailable 72 hours after a cyber security incident at the retailer forced it to take the services offline. Further details of the incident, which began on Monday 21 April – although a separate issue had dogged contactless payments earlier in the Easter weekend – remain unavailable, but M&S has enlisted third-party cyber forensics, as well as working alongside the National Cyber Security Centre (NCSC), to establish the facts. In a further update published to its website late on 23 April, M&S said that in the course of its incident management activities, it continued to be necessary to alter some of its operations to preserve the security of both its customers, and the wider business. “We have made the proactive decision to move some of our processes offline to protect our colleagues, partners, suppliers and our business,” said a spokesperson. “Our stores remain open and customers can continue to shop on our website and our app. “However, we are not currently processing contactless payments, we have paused the collection of click-and-collect orders in stores, and there may be some delays to online order delivery times. We are incredibly grateful for the understanding and support that our customers, colleagues, partners and suppliers have shown. “We are working hard to restore our services and minimise disruption and are being supported by industry-leading experts. We will continue to update as appropriate as we work to resolve these issues.” M&S has already won some praise from cyber security professionals for playing a relatively straight bat when it comes to its incident disclosure and customer messaging. However, as it has still been unable to confirm the precise nature of the cyber attack – a set of circumstances that inevitably leads to speculation about ransomware – customers may still be concerned about whether or not their financial and other personal data has been compromised. For now, M&S is maintaining the line that there is no reason for consumers to take action. However, according to McAfee EMEA head Vonny Gamot, there are still some steps it might be wise to take. “First, it’s important to know that high-profile attacks like this provide fresh opportunities for scammers,” she said. “Unfortunately, fraudsters looking to capitalise on the situation will launch further rounds of phishing attacks, usually via email or text, that direct people to bogus sites designed to steal sensitive information. “Whether it’s an email requesting an alternate payment method due to missed transactions or a text asking you to reset your login details, it’s always wise to keep a cautious eye open.” Fraudsters and scammers will frequently play on emotions by creating a sense of urgency in their messaging in an attempt to get potential victims to let their guard down. Messages exploiting the M&S incident may, for example, imply that your data or money have been stolen and urge you to click on links to secure your accounts. If in doubt, said Gamot, best practice is to stop and question any unexpected or unsolicited contacts relating to the incident, and verify them with M&S. Customers may also wish to update their passwords and keep an eye on their bank and credit card accounts. If any changes appear that you did not action, these need to be reported, and if you believe your data may have been taken, place a fraud alert on your credit cards to take advantage of additional scrutiny. 
Read more about recent cyber attacks Car hire giant Hertz reveals UK customer data was affected in a cyber incident orchestrated via a series of vulnerabilities in Cleo managed file transfer products. TfL provides more detail on the financial impact of the September 2024 cyber attack that crippled several of its online systems. The Post Office offered a short extension to enable it to assess the impact of the MoneyGram cyber incident, but the contract has now expired and MoneyGram services are no longer available in Post Office branches.
  • WWW.COMPUTERWEEKLY.COM
    Challenges persist as UK’s Cyber Security and Resilience Bill moves forward
    Elements of the proposed Cyber Security and Resilience Bill are welcome, but questions remain about how best to act in the face of persistent challenges like geopolitical chaos, threats to critical infrastructure, and technological advances, writes CSBR chief exec James Morris. By James Morris, the CSBR. Published: 24 Apr 2025. Since the government announced in the King’s Speech last year that it would bring forward a Cyber Security and Resilience Bill, much has changed. The geopolitical context has become more chaotic, with the new Trump administration testing long-held norms of the rules-based international order, the economy continues to struggle, and new advances in AI complicate our understanding of the evolving threat landscape. In such a fast-moving world, what should drive the government’s thinking around this much-awaited legislation? On 1 April 2025, the Department for Science, Innovation and Technology (DSIT) published a ‘policy statement’ on the proposed bill. The proposals centre on a significant evolution of the current regulatory regime to align the UK with the NIS2 framework adopted by the EU. The policy statement says that the bill ‘will address specific cyber security challenges faced by the UK while aligning, where appropriate, with the approach taken by the EU NIS 2 Directive.’ The policy statement acknowledges that the UK faces ‘specific cyber security challenges’ but doesn’t specify what these challenges are; it is a critical acknowledgement, nonetheless. The UK does face particular cyber security challenges. We face vulnerabilities in our NHS and across other areas of government, as was outlined in a recent National Audit Office report. Our critical national infrastructure (CNI) is also likely to be exposed to more sophisticated threats as the landscape of global geopolitical rivalry – particularly with China and Russia – continues to evolve. The challenge for the bill is how it can provide a comprehensive cyber and national security framework across critical national infrastructure in the UK to address these ‘specific’ challenges. The policy statement does not make reference to our financial services industry, which is a critical part of our economy. UK transposition of the original NIS regulations specifically excluded financial services. Will this still be the case for the Cyber Security and Resilience Bill? Financial services has some of the strongest sector-specific security standards, and there is a strong argument that these standards should be used as the model for other sectors. There are elements of the proposals which are to be welcomed. The focus on the resilience of supply chains, the bringing of managed service providers (MSPs) under the umbrella of regulation, the recognition that datacentres are now part of our CNI, and a new, more transparent incident reporting regime are important and urgent requirements. The proposed approach is one of ‘sectoral regulation’, with existing industry regulators given more powers. The danger of such an approach is that the regulatory landscape could become fragmented, with different approaches applied and no overarching strategy adopted across the piece. The government’s proposed solution is that the Secretary of State will produce a periodic ‘statement of strategic priorities’, which it hopes would bring consistency and coherence across sectors. 
The key question is how such a statement of priorities would be developed. It will require in-depth consultation both with the regulators and with industry itself to make it meaningful and to ensure it is relevant and can be operationalised. Read more about the Cyber Security and Resilience Bill: July 2024: In the Cyber Security and Resilience Bill introduced in the King's Speech, the UK's new government pledges to give regulators more teeth to ensure compliance with security best practice and to mandate incident reporting. October 2024: The UK government says that enforced cyber incident and ransomware reporting for critical sectors of the economy will help to build a better picture of the threat landscape and enable more proactive and preventative responses. March 2025: The government’s proposed Cyber Security and Resilience Bill is set to include regulatory provisions covering both datacentre operators and larger IT service providers. April 2025: The government’s recent policy statement around the Security and Resilience Bill will have implications for hundreds of managed service providers. The policy statement also envisages a new role for the Information Commissioner's Office (ICO). It says, ‘the primary intent of this measure is to enhance the ICO’s capability to identify and mitigate cyber risks before they materialise, thus preventing attacks and strengthening the digital services sector against future threats.’ In order for the ICO to take on these new responsibilities, it will need significant new resources, skills and capacity. In addition, its remit will need to be tightly defined to avoid duplication with the NCSC and to ensure it has the necessary teeth with regard to the sectoral regulators. One of the more controversial proposals in the statement is the proposed approach for dealing with emerging trends in the threat landscape. The government’s proposed solution is to grant the Secretary of State what are commonly known as ‘Henry VIII’ powers to change the regulations and to bring more industry sectors into the remit of the regulatory framework. It is unclear how any proposed changes would be scrutinised, as they would not require an Act of Parliament to be enforced. This top-down approach is often adopted by governments when they are faced with fast-moving sectors, but it is vital that these directive powers are given proper scrutiny. The challenge is to ensure that regulation seeking better cyber security and resilience doesn’t become obsolete or outdated before it has even reached the statute book. It is also the case that the regulatory framework needs to balance the need for better cyber security and resilience against the risk of snuffing out innovation in our business ecosystem. Businesses – large and small – must be brought into this process from the bottom up to encourage compliance and understanding. It also needs to be recognised that legislation and regulation will not, in isolation, solve all our problems. Alongside the legislation there needs to be an intensified effort to embed cyber security and resilience awareness, processes and practice into the heart of our society, with a shared understanding of the threat and a shared determination to resist it. James Morris is chief executive of the CSBR, a non-profit think tank exploring policy and solutions for security and resilience in the UK. A former MP, he served as chair of the All-Party Parliamentary Group for Cyber Security and Business Resilience. 
  • WWW.COMPUTERWEEKLY.COM
    Interview: Daniele Tonella, CTO, ING
    Daniele Tonella, global head of IT at ING Bank, tells Computer Weekly about his first nine months in the job, which has so far seen him navigate four layers of tech. The chief technology officer (CTO), who describes himself as “a mechanical engineer by mistake” with a passion for tech, started coding when he was 10 years old. At 18, on finishing high school and “with the self-confidence of that age”, he felt he knew technology and decided to study mechanical engineering. “My passion was in tech, but it was too late to change course, so I finished it,” he says. But he adds he “never stopped tinkering and coding on the side”, and when he completed his studies, he moved into technology. Following his graduation, he worked in consulting on tech strategy before taking IT roles at finance firms in his native Switzerland, as well as France and Italy. He left Italian bank UniCredit, where he was head of IT, three years ago, and embarked on “a classic portfolio” career, with board and advisory roles, including working with tech startups. Tonella joined ING, which has about 39 million customers and operates in retail and wholesale banking, as global CTO in August 2024. “If anything at ING has a chip or line of code in it, it falls within my broader responsibility,” he tells Computer Weekly. He leads about 15,000 IT staff internally – about a quarter of the bank’s total workforce – and about 4,000 external IT professionals. Numerous projects are ongoing at ING, including those to support entering new markets, regulatory compliance and the use of artificial intelligence (AI). But Tonella’s broad focus has been on what he describes as the “four layers of tech”. “In general terms, [our] tech is layers made up of four big activities, with each one building on the one below.” This has seen him strive to make IT reliable, control its quality, introduce scalability and innovate. He says the first layer is reliability, and if that is not provided, it is difficult to contribute anything else. “All I got was, ‘We’ve got these incidents here and there, can you fix it?’ So, we’ve created a whole initiative to stabilise the underlying fundamentals.” Tonella’s work here was aided by a decision made at ING in 2016, when, unlike many traditional banks, it opted to move away from using mainframes, completing their decommissioning in 2020. “ING has a quite modern infrastructure. The group decided many years ago to move away from mainframes and into an application stack based on microservices and highly modular,” says Tonella. “That was a very wise decision.” He says work to stabilise the IT infrastructure is never complete, but ING ranks high among its peers for availability. “We believe it is stable enough to look at what is above.” One of the layers above involves creating a scalable tech platform with shared global services for the entire business across all its countries of operation. ING operates across the world, whether as a market leader in the Benelux region or a challenger in countries such as Italy, Spain and Australia. For example, in Italy, it has just rolled out its mobile banking app, OneApp, which Tonella says is a global standard. Before this, the Italian business had its own separate mobile banking app. “Globally, we have a common set of assets and now we are substituting the local ones with them,” he adds. 
“With the global standard, [regional operations] can enrich their offering to clients with services that other banks in the group have developed on the same framework,” explains Tonella. “You’re building once, and it can scale out.” The OneApp mobile app was developed by the bank’s IT team in the Netherlands, but it is the global standard. But ING has IT development teams “everywhere”, says Tonella, with hubs in Poland, Romania and the Philippines, where it has “lots of engineers”, as well as engineers in countries such as Germany, the Netherlands, Italy and Spain, where it operates. Each country can develop for its own consumption, as well as for global consumption. ING also has a central development team for the wider group. The scalable tech platform is an ongoing project, and one that triggered another major challenge. Tonella says when working on the scalable tech platform, he “noticed quickly” that some of the numbers needed to demonstrate scalable tech were missing. This led to a project Tonella describes as “control”. “It is everything around the cost accounting, around productivity. It is the measurements of everything, which is numbers related,” he says. “I would like to see numbers shown in a different way than what we have today. This is my control tower.” Tonella says understanding the numbers is not just about running the business, but also changing it. “When you have hardware or an application, it’s like having a car. At some point, it gets old and you need to buy a new one. These investments emerge once in a while, so how do I know that we are dedicating enough of the budget to this?” He adds: “Also, how do we ensure that every initiative the bank wants to run is clear in terms of how much it’s going to cost, and what is the revenue benefit that we expect to have?” The fourth layer is everything related to innovation, including reaching new markets and introducing tech such as generative AI (GenAI) – “anything that was not there before”, says Tonella. “Innovation at ING tends to happen pretty much by itself,” he says. “Our employees tend to feel empowered to explore and strive too.” He describes the innovation layer as where the bank is providing customer value through technology. Tonella says that nearly a decade ago, ING took up agile development methods, at a time when the Spotify model was all the rage. ING reorganised around squads and tribes, which are small teams of fewer than 10 people allocated budgets to work on their ideas. He describes them as being like little startups within ING. Tonella says ING has a history of being innovative, and although its customer base of 39 million means it’s not as big as many competitors, he says the bank is ahead of most when it comes to digital banking. “We were a neobank before neobanking was a thing,” he says. In the 1990s, it introduced online retail bank ING Direct, which had no branches. Tonella says the bank is continuously innovating, with GenAI a major focus at present, which the bank is taking a “conservatively aggressive” approach to. “ING has set the foundation for avoiding GenAI becoming a ‘tech toy’ conversation,” he says. “The owner of all GenAI initiatives is the chief analytics officer, who reports to the chief operating officer (COO). “GenAI is more than technology, of course. It is technology, but at its core it’s a transformation force for the way we do banking,” he adds. 
The bank is enabling development around GenAI in five areas: know your customer (KYC), call centres, wholesale banking to improve customer due diligence, retail for the hyper-personalisation of offerings, and inside tech for engineering. “We brought in strict governance that focused all exploration on GenAI on five areas, and only under the control of the COO. This is important because AI has gained a lot of attention and traction. Without this governance, and due to the entrepreneurial nature of our bank, we might have seen bits of GenAI all over the place.” Read more CIO interviews Skills gaps, electrification and customisation driving need for change, says Aston Martin CIO Steve O’Connor. NatWest Retail Bank chief digital information officer Wendy Redshaw on how it is moving at pace to introduce GenAI into key customer-facing services as part of a wider digital transformation across the organisation. Currys CIO Andy Gamble on the four pillars of the retailer’s artificial intelligence strategy and how GenAI can enable staff to be the best versions of themselves.
  • WWW.COMPUTERWEEKLY.COM
    March ransomware slowdown probably a red herring
    On a month-by-month basis, recorded ransomware attacks dropped by 32% in March 2025, to 600 in total, according to NCC Group’s latest monthly Threat Pulse data, but the decline appears to be very much a red herring, and likely the result of large, one-off events in previous months that yielded multiple victims, such as Clop/Cl0p’s attacks on Cleo. Indeed, according to NCC, ransomware incidents are in fact up by 46% compared with March 2024. Note, as always, that these data are drawn from NCC’s own telemetry, and do not necessarily reflect the true scale of the problem. “The slight decline in attacks in February is a bit of a red herring given the unprecedented levels we have seen over the past months, with the volume of incidents year-on-year increasing 46% in March,” said NCC threat intelligence head Matt Hull. “As ever, we are seeing threat actors diversifying, and leveraging increasingly complex and sophisticated attack methods to stay ahead, not only to cause mass disruption, but to gain attention in the ransomware world.”  Last month, Babuk 2.0 appeared to be the most active threat group, accounting for 84, about 20% of recorded attacks, up 33% on January. Second place was shared by Akira and RansomHub, which both scored 62 victims, slightly down on February. In fourth place was the Safepay crew, which conducted 42 observed attacks after experiencing something of a fallow period. However, there may be a second red herring in the barrel, observed Hull, as the emergence of Babuk 2.0 in particular is raising questions as to the legitimacy of their alleged attacks. The original Babuk gang has claimed no connection to the new operation, and security researchers are generally united in the belief that Babuk 2.0 is fraudulent – more fraudulent than usual, at least – and is possibly recycling old leaked data and trying to use it to scare victims into paying out. Such tactics were similarly observed following the 2024 disruption to LockBit. Read more about ransomware Perimeter security appliances and devices, particularly VPNs, prove to be the most popular entry points into victim networks for financially motivated ransomware gangs, according to reports. Car hire giant Hertz reveals UK customer data was affected in a cyber incident orchestrated via a series of vulnerabilities in Cleo managed file transfer products. In February, leaked internal exchanges within the Black Basta group offered a new opportunity to investigate one of its leaders: Tramp. He may have been arrested in Armenia in June 2024, before being released. Broken down by sector, industrials was the most targeted last month, with 150 attacks – 27% of the total – observed. Consumer discretionary came in second with 124 attacks, down 55% on February. By geography, North America remained the top target, with almost half of all observed attacks taking place in the region – more than double the number seen in EMEA, which saw 26% of attacks. APAC saw 14% of attacks, and South America 7%. Hull said North America would likely remain a key focus for cyber criminal gangs in the coming months, given rising geopolitical tensions, and division stoked between the US and Canada, which may make Canadian organisations more likely to be victimised. This month’s Threat Pulse also includes insight into malvertising and its increasing importance in the cyber threat ecosystem. Malvertising is best described as when malware, even ransomware, hides behind online ads that seem harmless at face value, or until clicked upon. 
This attack vector saw a notable surge last year, and apparently the momentum shows no sign of letting up. Indeed, recent statistics from Microsoft’s threat intel teams found nearly a million devices globally implicated in a large-scale malvertising campaign in March. Those behind it exploited GitHub repositories, Discord servers and Dropbox to run their operations. Hull said malvertising was becoming more complex, with cyber criminals using trusted platforms – as seen – and turning to generative artificial intelligence tools, like DeepSeek, to mount more sophisticated attacks even where they lack technical skills. This trend will make the need to get a firm grasp on threat intelligence particularly relevant to security decision-makers in the near term, said Hull, and proactive measures and collaboration with others will also be key to staying ahead. “It’s a unique and challenging time for organisations, facing evolving tactics, like AI-enabled malvertising, and a turbulent geopolitical landscape,” said Hull. “So, it’s more important than ever for organisations and individuals alike to remain vigilant and be adaptive to keep pace with these fast-changing threats.”
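A quick back-of-the-envelope check puts the percentages quoted above into absolute terms: a 32% month-on-month drop to 600 attacks implies roughly 880 recorded the month before, while a 46% year-on-year rise implies roughly 410 in March 2024. The TypeScript snippet below is nothing more than that arithmetic.

```typescript
// Reverse the quoted percentage changes to estimate the comparison months.
const marchCount = 600;
const impliedPreviousMonth = Math.round(marchCount / (1 - 0.32)); // ≈ 882
const impliedMarch2024 = Math.round(marchCount / 1.46);           // ≈ 411
console.log({ impliedPreviousMonth, impliedMarch2024 });
```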
  • WWW.COMPUTERWEEKLY.COM
    Financially-motivated cyber crime remains biggest threat source
    Financially-motivated threat actors – including ransomware crews – remain the single biggest source of cyber threat in the world, accounting for 55% of active threat groups tracked during 2024, up two percentage points on 2023 and 7% on 2022, demonstrating that cyber crime really does, to a certain extent, pay. At least, this is according to Google Cloud’s Mandiant, which has this week released its latest M-Trends report, an annual, in-depth deep dive into the cyber security world. The dominance of cyber crime is not in and of itself a surprise, and according to Mandiant, cyber criminals are becoming a more complex, diverse, and tooled up threat in the process. “Cyber threats continue to trend towards greater complexity and, as ever, are impacting a diverse set of targeted industries,” said Mandiant Consulting EMEA managing director, Stuart McKenzie. “Financially motivated attacks are still the leading category. While ransomware, data theft and multifaceted extortion are and will continue to be significant global cybercrime concerns, we are also tracking the rise in the adoption of infostealer malware and the developing exploitation of Web3 technologies, including cryptocurrencies.  McKenzie added: “The increasing sophistication and automation offered by artificial intelligence are further exacerbating these threats by enabling more targeted, evasive, and widespread attacks. Organisations need to proactively gather insights to stay ahead of these trends and implement processes and tools to continuously collect and analyse threat intelligence from diverse sources.” The most common means for threat actors to access their victim environments last year was by exploiting disclosed vulnerabilities – 33% of intrusions began in this way worldwide, and 39% in EMEA. In second place, using legitimate credentials obtained by deception or theft, seen in 16% of instances, followed by email phishing in 14% of incidents, web compromises in 9%, and revisiting prior compromises in 8%. The landscape in EMEA differed slightly to this, with email phishing opening the doors to 15% of cyber attacks, and brute force attacks representing 10%. Once ensconced within their target environments and able to get to work, threat actors took a global average of 11 days to establish the lay of the land, conduct lateral movement, and line up their final coup de grace. This period, known in the security world as dwell time, was up approximately 24 hours on 2023, but down significantly on 2022, when cyber criminals hung out for an average of 16 days. Anecdotal evidence suggests that technological factors including, possibly, the adoption of AI by cyber ne’er-do-wells, may have something to do with this drop. Interestingly, median dwell times in EMEA were significantly higher than the worldwide figure, clocking in at 27 days, five days longer than in 2022. When threat actors were discovered inside someone’s IT estate, the victims tended to learn about it from an external source – such as an ethical hacker, a penetration testing or red teaming exercise, a threat intelligence organisation like Mandiant, or in many instances an actual ransomware gang – in 57% of cases. The remaining 43% were discovered internally by security teams and so on. The EMEA figures differed little from this. 
Nation-state threat actors, or advanced persistent threat (APT) groups, generate a great deal of attention in the cyber security world, by dint of the lingering romance associated with spycraft and, in more practical terms, the fractious global geopolitical environment. However, compared with their cyber criminal counterparts, they represent just 8% of threat activity, a couple of percentage points lower than two years ago. Mandiant tracked four active APT groups in 2024, and 297 uncategorised (UNC) groups – meaning not enough information is yet available to make a firm call on who they are or what they are up to, so some could turn out to be APTs. Indeed, there is significant overlap in this regard, and Mandiant has on occasion upgraded some groups to fully fledged APTs – such as Sandworm, which now goes by APT44 in its threat actor classification scheme. APT44 is one of the four active APTs observed in 2024. Infamous for its attacks on Ukrainian infrastructure in support of Russia’s invasion, APT44 has long supported the Kremlin’s geopolitical goals and was involved in some of the largest and most devastating cyber attacks to date, including the NotPetya incident. Also newly designated in 2024 was APT45, which operates on behalf of the North Korean regime and is described by Mandiant as a “moderately sophisticated” operator active since about 2009. Read more about current security trends The growth of AI is proving a double-edged sword for API security, presenting opportunities for defenders to enhance their resilience, but also more risks from AI-powered attacks, according to a report. Many businesses around the world are taking the decision to alter their supplier mix in the face of tariff uncertainty, but in doing so are creating more cyber risks for themselves. As directors recognise the threats posed by increasingly sophisticated, AI-driven cyber attacks, risks are being mitigated by changes in physical infrastructure networks, research finds.
  • WWW.COMPUTERWEEKLY.COM
    Rethink authentication to remove the burden on users
Attackers exploit human nature, making authentication a prime target. The Snowflake data breach is a clear example – hackers used stolen customer credentials, many of which lacked multi-factor authentication (MFA), to breach several customer accounts, steal sensitive data and reportedly extort dozens of companies. This incident highlights how one seemingly small compromised credential can have severe consequences. Phishing scams, credential stuffing and account takeovers all succeed because authentication still depends on users making security decisions. But no amount of security training can completely stop people from being tricked into handing over their credentials, downloading malware that steals login information, or reusing passwords that can be easily exploited. The problem isn’t the user; it’s the system that requires them to be the last line of defence. With agentic AI set to introduce a surge of non-human identities (NHIs) – bringing an added layer of complexity to an already complicated IT environment – enterprises need to rethink authentication, removing users from the process as much, and as soon, as possible. The explosion of cloud applications, systems and data has made identity security more complex and critical than ever before. Today, the average enterprise manages multiple cloud environments and around 1,000 applications, creating a highly fragmented landscape that attackers are actively capitalising on. In fact, IBM’s 2025 Threat Intelligence Index found that most of the cyber attacks investigated last year were caused by cyber criminals using stolen employee credentials to breach corporate networks. With AI-driven attacks set to make this problem even worse, identity abuse shows no signs of slowing down. Large language models (LLMs) can automate spear-phishing campaigns and scrape billions of exposed credentials to fuel automated identity attacks. With AI enabling attackers to scale their tactics, the transition away from credential-based security must become a priority for businesses. The future of secure modern authentication requires reducing the user burden in the identity paradigm by moving away from passwords and knowledge-based authentication. Passwordless authentication based on the FIDO (Fast Identity Online) standards replaces traditional passwords with cryptographic keys bound to a user’s account on an application or website. Instead of choosing and remembering a password, users authenticate with biometrics or a hardware-backed credential, typically provided by their device (laptop or smartphone) and its operating system. These credentials (passkeys) are protected by operating systems, browsers and password managers, significantly reducing the risk of phishing attacks and stolen credentials. A modern way to authenticate, passkeys are phishing-resistant, offer a better user experience and improve security posture. While not a novel concept, passwordless authentication has been slow to gain traction because of perceived complexity and a lack of clear migration paths. However, in late 2024 the FIDO Alliance announced new resources that should help accelerate the adoption of passkeys by making them easier for organisations and consumers to use. For example, FIDO’s newly proposed specifications enable organisations to securely move passkeys and other credentials from one provider to another, providing flexibility by removing vendor lock-in. 
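To make the mechanics above concrete, here is a minimal sketch of passkey registration in a browser using the standard WebAuthn API (navigator.credentials.create). The backend endpoints and relying-party details are hypothetical placeholders, not part of any particular vendor’s product.

```typescript
// A minimal sketch of passkey (WebAuthn) registration in a browser.
// The /webauthn/* endpoints and relying-party values are hypothetical.
async function registerPasskey(username: string): Promise<void> {
  // 1. Fetch a server-generated challenge and user handle (hypothetical endpoint).
  const res = await fetch("/webauthn/register-options", { method: "POST" });
  const { challenge, userId } = await res.json(); // base64-encoded strings

  const toBytes = (b64: string) =>
    Uint8Array.from(atob(b64), c => c.charCodeAt(0));

  // 2. Ask the platform authenticator to create a key pair bound to this site.
  const credential = await navigator.credentials.create({
    publicKey: {
      challenge: toBytes(challenge),
      rp: { name: "Example Corp", id: window.location.hostname },
      user: { id: toBytes(userId), name: username, displayName: username },
      // ES256 and RS256 are the most widely supported algorithms.
      pubKeyCredParams: [
        { type: "public-key", alg: -7 },   // ES256
        { type: "public-key", alg: -257 }, // RS256
      ],
      authenticatorSelection: {
        residentKey: "required",      // discoverable credential, i.e. a passkey
        userVerification: "required", // biometrics or device PIN
      },
    },
  });

  // 3. Send the public key and attestation to the server for storage.
  //    (Real implementations base64url-encode the ArrayBuffer fields first.)
  await fetch("/webauthn/register", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(credential),
  });
}
```

The private key never leaves the user’s device; the server only ever stores the public key, which is what makes the credential phishing-resistant.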
Digital credentials are another technology that helps remove the burden of security decisions from users. While passwordless authentication provides a secure way to access resources, digital credentials (sometimes referred to as verifiable credentials) provide a secure way to share private data. Digital credentials – such as digital employee badges or mobile driving licences – allow organisations to validate users without exposing unnecessary or sensitive personal data. For example, a digital driving licence lets users prove their age for restricted purchases without revealing unnecessary personal information such as their home address or even their actual date of birth. Similarly, digital paystubs allow users to confirm they meet the salary requirements for a loan without disclosing their actual salary. This approach also helps put the power of data sharing back into users’ hands – allowing them to choose what information is provided, to whom and when. Read more about IAM IAM is critical to an organisation’s data security posture, and its role in regulatory compliance is just as crucial. Does your IAM program need OAuth or OpenID Connect? Or maybe both? Let’s look at the various standards and protocols that make identity management function. If it is deployed correctly, identity and access management is among the plethora of techniques that can help to secure enterprise IT. The move towards passwordless and digital credentials is not just about stopping today’s attackers – it’s about preparing for what’s next. AI-powered attacks: Attackers are already using generative AI (GAI) to create phishing campaigns that are nearly as effective as human-generated ones, automate social engineering at scale, and bypass traditional security controls. Passwordless eliminates one of the most common attack vectors – phishable credentials – making AI-driven attacks much harder to execute. Non-human identities: As agentic AI advances and takes on more roles in the enterprise – whether in software design or IT automation – identity security must evolve in tandem. Digital credentials allow organisations to authenticate NHIs with the same level of cryptographic security as human users, ensuring that AI agents interacting with corporate systems are verifiable and authorised. Organisations must start preparing now for what lies ahead. While passwordless and digital credentials are not the only steps that should be taken to combat the surge in identity attacks, by deploying these technologies organisations can modernise a strained model – removing security decisions from users, enhancing the user experience and ultimately helping IAM take back its role as gatekeeper. Patrick Wardrop is executive director of product, engineering and design for the Verify IAM product portfolio at IBM Software. 
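To illustrate the selective disclosure idea described in the piece above – proving an attribute without handing over the underlying data – here is a deliberately simplified, library-free sketch. The types and wallet logic are hypothetical; a production system would rely on cryptographically signed formats such as W3C Verifiable Credentials or ISO mobile driving licences.

```typescript
// A minimal, library-free sketch of selective disclosure with a digital credential.
// All types and helpers here are hypothetical illustrations, not a real SDK.

interface DriverLicenceCredential {
  issuer: string;
  subject: string;
  dateOfBirth: string;   // held by the wallet, never shared directly
  address: string;       // held by the wallet, never shared directly
}

// What the verifier (e.g. an online retailer) actually receives:
interface AgeOverPresentation {
  issuer: string;
  claim: "age_over_18";
  value: boolean;
}

// The wallet derives only the claim the verifier asked for.
function presentAgeOver18(cred: DriverLicenceCredential, today: Date): AgeOverPresentation {
  const dob = new Date(cred.dateOfBirth);
  const cutoff = new Date(dob.getFullYear() + 18, dob.getMonth(), dob.getDate());
  return {
    issuer: cred.issuer,
    claim: "age_over_18",
    value: today >= cutoff,   // no birthday or address ever leaves the wallet
  };
}

// Example: the retailer learns only that the holder is over 18.
const licence: DriverLicenceCredential = {
  issuer: "DVLA (example)",
  subject: "did:example:123",
  dateOfBirth: "1990-06-15",
  address: "1 Example Street",
};
console.log(presentAgeOver18(licence, new Date()));
```

The point is architectural: the wallet holds the full credential, while the verifier receives only the derived claim it asked for.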
  • WWW.COMPUTERWEEKLY.COM
    Hitachi Vantara: VSP One leads revamped storage portfolio
In this storage supplier profile, we look at Hitachi Vantara, which is a small part of a very big organisation. Since we last looked at Hitachi Vantara, its storage portfolio has undergone something of a revamp, based around the VSP One family, which offers block, file and object storage – with performance profiles that range from NVMe flash to HDD capacity – available across on-premise, cloud and even mainframe environments. It can provide all this via as-a-service models of procurement, and also offers full-stack IT including compute, networking via partnerships, and containerisation for cloud-native deployments. Hitachi Vantara is the storage, data and analytics division of the giant Hitachi group, which as a corporation was ranked 196th in the Forbes Global 500 last year. In recent times, Hitachi has declined in this ranking, having been 38th in 2012, 78th in 2014 and 106th in 2020. Formed out of the Hitachi Data Systems storage group, Vantara was created in 2017 upon a merger with the group’s Pentaho business intelligence operation and the Hitachi Insight Group (IT services and consulting). Hitachi Vantara is a small part of the parent Hitachi corporation, whose consolidated revenues for fiscal year 2020 were $70.72bn, with subsidiaries numbered in the hundreds and just under 270,000 employees worldwide in 2024. Vantara’s name was chosen for being “suggestive”, according to CEO Brian Householder in 2017, and is meant to evoke “advantage”, a “vantage point” and “virtualisation”. In Hitachi’s results for 2023, Digital Systems & Services revenues were 2,599bn yen ($18.4bn), a rise of 4% year-on-year. However, Vantara storage appears to be a relatively small part of Digital Systems & Services. IDC put the company’s external storage system revenues for 2023 at $1.55bn, a market share of 4.9%. That was a small increase on 2022, in which it had storage array revenue and share of $1.42bn and 4.4%. That puts Hitachi Vantara seventh in the IDC ranking of storage array players, level with IBM, with only “others” below them. Hitachi Vantara’s flagship storage platform is Virtual Storage Platform (VSP) One. VSP One – launched in late 2024 – offers a single data fabric across on-premise and cloud, with all-QLC and object storage products added at that time. Top of the line in block storage is the VSP 5000 series, which comes as the all-flash 5200 and 5600, with an H-suffix variant of each that indicates the possibility of spinning disk HDD capacity. Per node, each offers 65,280 LUNs at up to 256TB per LUN and a maximum of 1,024 snapshots per LUN. All of them allow for Fibre Channel, iSCSI and FICON mainframe connectivity. The 5200 offers capacity up to just below 300PB. NVMe SCM (storage class memory) can be fitted. The possible ratio of NVMe flash to SAS HDDs is about 1:8, with the latter going up to drive sizes of 18TB. The 5600 model gains a much higher drive and port count, and throughput is also much improved over the 5200 (312GBps vs 52GBps). VSP block storage arrays run a common operating system, Storage Virtualization OS (SVOS). The VSP One Block range is aimed at the midrange, and comes in a 2U form factor with options for TLC and QLC flash capacity. It can scale per appliance from 32TB effective to 1.8PB, and to 65 nodes. It also runs SVOS – which can manage other suppliers’ arrays – and can serve file and S3 object storage, too. 
Read more on storage suppliers Huawei rises in the storage ranks despite sanctions and tariffs: Huawei has leapfrogged HPE in revenue and market share and broadened its storage offer towards AI, the cloud and as-a-service despite sanctions and tariffs. HPE storage battles hard and smart in challenging market: We look at HPE, which has slipped in the storage supplier rankings, but brings a full range of AI-era storage over a mature cloud and consumption model offer. Vantara’s E series comprises the E590 and E790 – it seems to have lost the E1090 since we last surveyed the company. These also scale to petabytes but lack the mainframe access of the 5000s. They also retain spinning disk HDD capacity. VSP One SDS is a software-defined platform aimed at “distributed block applications” that potentially run to petabyte scale. It runs on third-party hardware and in the Amazon Web Services (AWS) cloud. A key idea is that customers have a great deal of flexibility in how they deploy it. SVOS allows for data flow across applications and locations, with Kubernetes APIs to allow containerised applications. Hitachi Vantara’s file storage offer comes in the form of the VSP One File series, in which models are given number suffixes, namely 32, 34 and 38. VSP One File 32 is aimed at entry-level and cost-sensitive deployments, while the 34 aims at customers that want all-flash – presumably the 32 uses HDDs – and the 38 is built for all-out performance. Each model is also distinguished by connectivity options – more Ethernet bandwidth, Fibre Channel choices and raw bandwidth – and node count (from 20 to 80). VSP One File arrays also have some S3 object storage capability, but only as a tier for connection to the cloud. Object storage is served by VSP One Object, providing S3-native storage that can scale from a few terabytes to 150TB per node in a minimum eight-node cluster. It is aimed at data lake and data warehouse, backup repository and AI/ML use cases. Hitachi Content Platform is the company’s unstructured data platform. It comes as a physical or virtual appliance or as storage software, can scale to exabytes, and is available as hybrid (with HDDs and flash, for example) and all-flash. Capacity can be extended from on-premise to the three major clouds – AWS, Azure and GCP – or any S3-compatible cloud. Hyper-converged infrastructure comes in the form of hardware appliances in the Hitachi Unified Compute Platform (UCP), which can be all-flash, all-NVMe flash and GPU-equipped. UCP uses VMware vSphere as its hypervisor, with node capacity that can go from a few terabytes to the low 100-plus TB range with a maximum of 64 nodes. Hitachi Vantara storage can deal with workloads that range from entry-level and SME storage to the most demanding enterprise workloads, including transactional processing and AI. In 2024, Hitachi launched its iQ portfolio, which combines Nvidia technologies with Hitachi Vantara storage – in particular Hitachi Content Platform and file storage – and aims to provide reference architectures for AI use cases. Pentaho is a key plank of the Vantara brand. It provides AI-based data cataloguing across multiple on-premise and cloud data stores, data lakes, and so on. It allows for metadata tagging, as well as pipeline and workflow tools including ingest and cleanse, with access controls and data protection, plus migration to data warehouses. The company claims 85% of Fortune 500 companies are Hitachi Vantara customers. Hitachi Vantara put hybrid and multi-cloud operations at the heart of its plans when it launched VSP One. 
Using its SVOS storage operating system, it aims to make data available to customers across multiple datacentres and public cloud providers. Hitachi Storage Plug-in for Containers provides integration between Hitachi Vantara Virtual Storage Platform One (VSP One) and container orchestrators such as Kubernetes and Red Hat OpenShift. It addresses the complexities of storage management for stateful containerised applications, including provisioning and enabling persistent storage for containers, as well as advanced storage features and data protection. The company’s Hitachi EverFlex service (see below) also provides containers as a service, offering central management of compute, storage, security and Kubernetes. Hitachi’s initial foray into container storage was Hitachi Kubernetes Service, built out of IP acquired from Containership in 2019, which used CSI drivers to manage persistent volumes directly on Kubernetes nodes. Hitachi Vantara’s EverFlex infrastructure as a service allows customers to take advantage of flexible capacity and pay-per-use consumption that can range from self-managed to fully managed by a third party across the full IT stack, which can be on-premise or hybrid cloud. EverFlex Consumption Level allows customers to scale up and down within a pre-agreed capacity range, while Foundation Level adds advanced management, observability and control over the infrastructure. Additional services at Foundation Level include integration capabilities aimed at existing heterogeneous environments, provision of resources to plug skills gaps or management by Hitachi Vantara, all based on SLAs. Options that include third-party providers, such as Cisco for networking, are also offered. The company also offers Hitachi EverFlex Data Protection as a Service, which is based on VSP One infrastructure. Service management works through the cloud-based Hitachi Services Manager portal, with storage capacities that start at 50TB and go up to petabytes. Monthly billing consists of a committed amount and a flexible fee that covers excess usage.
  • WWW.COMPUTERWEEKLY.COM
    Amid uncertainty, Armis becomes newest CVE numbering authority
Mitre’s Common Vulnerabilities and Exposures (CVE) Program – which last week came close to shutting down altogether amid a wide-ranging shakeup of the United States government – has designated cyber exposure management specialist Armis as a CVE Numbering Authority (CNA). This means it will be able to review and assign CVE identifiers to newly discovered vulnerabilities in support of the programme’s mission to identify, define and catalogue as many security issues as possible. “We are focused on going beyond detection to provide real security – before an attack, not just after,” said Armis CTO and co-founder, Nadir Izrael. “It is our duty and goal to help raise the tide of cyber security awareness and action across all industries. This is key to effectively addressing the entire lifecycle of cyber threats and managing cyber risk exposure to keep society safe and secure.” Mitre currently draws on the expertise of 450 CNAs around the world – nearly 250 of them in the US, and 12 in the UK. The full list includes some of the largest tech firms in the world, such as Amazon, Apple, Google, Meta and Microsoft, as well as a litany of other suppliers, government agencies and computer emergency response teams (CERTs). All the organisations listed participate on a voluntary basis, and each has committed to having a public vulnerability disclosure policy and a public source for new disclosures, and to abiding by the programme’s terms and conditions. In return, says Mitre, participants are able to demonstrate a mature attitude to vulnerabilities to their customers and to communicate value-added vulnerability information; to control the CVE release process for vulnerabilities in the scope of their participation; to assign CVE IDs without having to share information with other CNAs; and to streamline the vulnerability disclosure process. The addition of Armis to this roster comes amid uncertainty over the programme’s wider future, given how close it came to cancellation. In the wake of the incident, many in the security community have argued that a shake-up of how CVEs are managed is long overdue. “This funding interruption underscores a crucial truth for your security strategy: CVE-based vulnerability management cannot serve as the cornerstone of effective security controls. At best, it’s a lagging indicator, underpinned by a programme with unreliable resources,” said Joe Silva, CEO of risk management specialist Spektion. “The future of vulnerability management should focus on identifying real exploitable paths in runtime, rather than merely cataloguing potential vulnerabilities. Your organisation’s risk posture should not hinge on the renewal of a government contract. “Even though funding was provided, this further shakes confidence in the CVE system, which is a patchwork crowdsourced effort reliant on shaky government funding. The CVE programme was already not sufficiently comprehensive and timely, and now it’s also less stable.” Meanwhile, Armis is also today expanding its vulnerability management capabilities by making its proprietary Vulnerability Intelligence Database (VID) free to all-comers. The community-driven database, which is backed by the firm’s in-house Armis Labs unit, offers early warning services and asset intelligence, and is fed a constant stream of crowdsourced intelligence to enhance its users’ ability to prioritise emerging vulnerabilities likely to impact their vertical industries, and to take action to shore up their defences before such issues are widely exploited. 
“As threat actors continue to amplify the scale and sophistication of cyber attacks, a proactive approach to reducing risk is essential,” said Izrael. “The Armis Vulnerability Intelligence Database is a critical, accessible resource built by the security community, for the security community. It translates vulnerability data into real-world impact so that businesses can adapt quickly and make more informed decisions to manage cyber threats.” Armis said that 58% of cyber attack victims currently respond to threats only reactively, after the damage has been done, and nearly a quarter of IT decision-makers say a lack of continuous vulnerability assessment is a significant gap in their security operations, making it imperative to do more to address problems more quickly. Read more about the CVE Program’s future 15 April: Mitre, the operator of the world-renowned CVE repository, has warned of significant impacts to global cyber security standards, and increased risk from threat actors, as it emerges its US government contract will lapse in 24 hours. 16 April: With news that Mitre’s contract to run the world-renowned CVE Programme is abruptly terminating, a breakaway group is setting up a non-profit foundation to try to ensure the project’s continuity. The US Cybersecurity and Infrastructure Security Agency has ridden to the rescue of the under-threat Mitre CVE Programme, approving a last-minute, 11-month contract extension to preserve the project’s vital work.
  • WWW.COMPUTERWEEKLY.COM
    Qualys goes to bat for US cricket side San Francisco Unicorns
News Qualys goes to bat for US cricket side San Francisco Unicorns Cloud security specialist Qualys partners with US T20 cricket squad San Francisco Unicorns and its Sparkle Army fan club as the team prepares for its summer 2025 campaign By Alex Scroxton, Security Editor Published: 23 Apr 2025 9:59 California-based Twenty20 (T20) cricket side the San Francisco Unicorns has enlisted cloud security and compliance technology specialist Qualys as its inaugural cyber security partner for the upcoming summer 2025 Major League Cricket season in the United States. In exchange for its suite of IT security solutions, including its security intelligence platform Enterprise TruRisk, which automates vulnerability detection, compliance and protection across the organisation, Qualys will be placed front and centre on the team’s matchday and training strips, as well as on signage and merchandise. It will also be able to take advantage of other branding opportunities, including placement in matchday broadcasts and media. The team’s fan club, the so-called Sparkle Army, will also incorporate Qualys’s shield logo into its branding. The two organisations said their partnership reflected Qualys’s commitment to safeguarding digital organisations and supporting local communities. “This season marks a significant milestone for the Unicorns as we come to play in the Bay Area for the first time, and we’re thrilled to deliver world-class cricket via an elite partnership with local cyber security pioneer Qualys,” said team CEO David White. “Qualys stands out as an organisation for its commitment to excellence; a quality we strive for in all aspects of our own setup. Having their logo prominently displayed on our playing jerseys will be a reminder of our own high standards and values, while also resonating with our fans proudly sporting the matchday kit in the stands.” The Unicorns are one of six teams – the others hailing from Los Angeles, New York, Seattle, Dallas-Fort Worth and Washington DC – that inaugurated America’s Major League Cricket franchise just two years ago with the aim of broadening the sport’s appeal in the US. Although it is true the first ever cricket international was between the US and Canada, the sport never really caught on across the Atlantic as it was largely formalised after the War of Independence. Backed by venture capitalists Anand Rajaraman and Venky Harinarayan, who co-founded Junglee, an early internet shopping comparison site acquired by Amazon in the late 1990s, and coached by Australian all-rounder Shane Watson, the Unicorns may be well-placed to capitalise on the existing cricket fandom among the many thousands of Indian and Pakistani nationals employed in Silicon Valley. The team’s growing pool of players draws heavily on Australian talent, including the likes of former Under-19s captain Cooper Connolly, who received his first Test cap this year against Sri Lanka; Jake Fraser-McGurk, fresh from his maiden international half-century against the England T20 squad last September; and Matt Short, who scored his maiden One Day International half-century – also against England – off a mere 23 balls. “Qualys is proud to sponsor the San Francisco Unicorns, and we’re honoured to have the opportunity to support a team that mirrors our values of innovation and determination,” said Sumedh Thakar, president and CEO of Qualys. “This partnership reflects our dedication to building strong community connections and celebrating excellence across all fields,” he added. 
The team will begin this year’s campaign at the Oakland Coliseum on 12 June 2025, when it meets Washington Freedom. Read more about IT in cricket Cricket Australia was able to keep digital services up and running during a period of unprecedented customer demand. With the cloud-connected ball and machine learning, amateur cricket players will soon be able to analyse their bowls and improve their game. Cricketers’ medical images will be stored securely in an independent clinical archive, enabling clinicians to access medical data and make treatment decisions more quickly.
  • WWW.COMPUTERWEEKLY.COM
    Podcast: Quantum lacks profitability but it will come, says CEO
Computer Weekly talks to Quantum CEO Jamie Lerner about the company’s expertise in handling massive volumes of data, and a roadmap that includes Myriad, a new file system for forever flash in the AI era
  • WWW.COMPUTERWEEKLY.COM
    The tech investment gap: bridging the divide for underrepresented entrepreneurs
Despite the UK’s position as one of the world’s most dynamic technology hubs, the industry still faces a persistent and well-documented challenge – equitable access to investment. Underrepresented entrepreneurs – particularly women, ethnic minorities, LGBTQ+ founders and individuals from non-traditional backgrounds – remain significantly disadvantaged when it comes to securing the capital needed to start and scale high-growth technology businesses. This disparity is particularly striking in a sector built on innovation and disruption. According to the British Business Bank, in 2023 all-female founding teams received just 2% of UK venture capital (VC) funding, while mixed-gender teams secured only 12% – leaving most capital flowing to all-male teams. These imbalances extend into emerging tech sectors such as artificial intelligence (AI) – research from the Alan Turing Institute reveals that female-founded AI startups accounted for only 2.1% of UK VC deals between 2012 and 2022, receiving a mere 0.3% of the total investment during that time. In addition, the Social Mobility Commission found that people from working-class backgrounds represent only 19% of tech sector professionals – despite making up almost 40% of the UK’s working population. For an industry that prides itself on solving complex challenges, the exclusion of so many potential innovators is more than a contradiction – it is a missed opportunity for innovation, inclusion and long-term value creation. Action is being taken by a select group, but an industry-wide shift requires more. While there are a growing number of initiatives aimed at addressing the investment gap, structural inequities remain deeply embedded. The UK’s tech and investment ecosystems – like those in Silicon Valley – are still heavily network-based and often risk-averse when evaluating so-called “non-traditional” founders. Despite the vocabulary of innovation, much of the capital allocation still relies on pattern recognition – investing in those who look like, sound like, or were educated alongside previous success stories. This creates a double bind – underrepresented founders are undercapitalised, and their lack of capital is used as a proxy for a lack of potential. There are three key systemic barriers: Access to networks – Early-stage fundraising is still dominated by warm introductions. If you’re not in the room, you’re unlikely to be invited to pitch. Bias in risk perception – Founders who don’t fit the typical mould – often white, male and Oxbridge-educated – are perceived as riskier investments, despite research showing that diverse teams often outperform on several metrics. Structural gaps in support – Programmes such as accelerators, incubators and grant schemes often don’t reach non-traditional communities early enough to make a meaningful difference. Ironically, while technology contributes to these inequities, it also offers part of the solution. Early examples of disruption are emerging. Data-driven VC platforms are beginning to use machine learning to identify overlooked founders based on fundamentals, not personal networks. Decentralised finance (DeFi) and crowdfunding are creating alternative routes to capital, bypassing traditional gatekeepers. And corporate innovation funds are incorporating environmental, social and governance (ESG) practices and diversity, equity and inclusion (DEI) principles into their portfolios, incentivising broader inclusion. 
However, without intentional design, AI and algorithmic systems risk replicating – or even exacerbating – the biases found in historical data. If we train decision-making tools on skewed investment histories, we risk encoding today’s inequality into tomorrow’s automation. Having held senior technology and operational leadership roles across global banks and fintech scaleups – and now advising boards and startups across the UK, US, and EU – I’ve seen the issue from multiple vantage points. Technology doesn’t just benefit from diverse talent – it requires it. Whether developing scalable architectures, building responsible AI models, or designing secure digital ecosystems, the strength of the solution is often a reflection of the diversity behind it. Underrepresented founders often bring differentiated market insights, a deep understanding of overlooked customer segments, and resilience forged through navigating structural challenges. These are exactly the attributes needed to succeed in a fast-changing tech landscape. Bridging this gap demands more than good intentions – it requires deliberate, coordinated action. Here are five ways the tech and investment communities can lead: Rebuild the discovery process – Leverage AI to surface strong founding teams from outside traditional clusters. Standardise pitch evaluation criteria to prioritise traction and potential, not just pedigree. Diversify investment committees – Diverse decision-making teams lead to broader deal flow and better investment performance. Review who’s sitting at the table – and who’s missing. Support beyond seed stage – Many underrepresented founders secure micro-funding but are left behind at Series A and beyond. Investors must design follow-on strategies that support sustainable scale. Enterprise buyers as catalysts – Large tech companies can reshape ecosystems by diversifying procurement. A single enterprise contract can define the trajectory of a startup. CIOs and CTOs should assess the diversity of their vendor ecosystems. Measure, report, act – Track where your money is going. Publish diversity metrics. Set tangible goals. Change doesn’t happen without accountability. As stewards of the digital future, those of us in the tech sector must use our influence to reshape the system. Bridging the investment gap is not just a moral imperative – it’s a strategic one. Underrepresented founders are already building the future – from ethical AI and climate solutions to inclusive digital platforms and secure fintech infrastructure. But many are doing so without the capital needed to truly scale. For investors, this is the moment to look beyond the familiar and recognise untapped potential. For technologists, it’s about building inclusive tools by design, not by accident. And for the entire ecosystem – we must expand our definition of what a “tech founder” looks like. Let’s change that – together. Read more about diversity in tech startups European fintech must take different path to Trump’s US on diversity – Leading light in women in fintech says Europe must take separate path to Trump’s US on diversity, equity and inclusion. Is diversity even more under threat in tech? The promotion of diversity in tech is going backwards – and it’s a terrible moment for that to happen. Here’s why diversity is even more important than it’s ever been. Tech workers say diversity and inclusion efforts are working – The status of diversity in the technology sector remains up for debate, but some tech industry workers are seeing improvements.
  • WWW.COMPUTERWEEKLY.COM
    Digital ID sector calls for changes to government data legislation
News Digital ID sector calls for changes to government data legislation Suppliers urge technology secretary to work more collaboratively with private sector over concerns government’s digital wallet will gain a monopoly in the market By Bryan Glick, Editor in chief Published: 23 Apr 2025 9:30 The digital identity sector is calling on the government to amend its forthcoming data legislation and to change policy around use of the Gov.uk Wallet – which was technology secretary Peter Kyle’s flagship announcement as part of his new digital strategy. In an open letter to Kyle, secretary of state for science, innovation and technology, a group representing suppliers of online safety and age verification technology said the launch in January of the government’s digital wallet “sent shockwaves through the sector”. The Association of Digital Verification Professionals, the Age Verification Providers Association and the Online Safety Tech Industry Association outlined a number of concerns arising from Kyle’s plans to allow the Gov.uk Wallet to be used for commercial purposes, such as proving a shopper’s age when purchasing restricted goods like alcohol. “The news has triggered widespread uncertainty among suppliers and investors,” said the groups in their letter. “We are concerned about the inadvertent creation of a government monopoly in digital identity – one that could stifle innovation, limit consumer choice and impose billions of new costs on the taxpayer for functions the private sector currently provides, such as customer service and integration support.” A recent independent study suggests the industry’s fears are well founded. A report from Juniper Research into the UK digital ID market predicted 267% growth in the number of people using digital identity apps, reaching 25 million by 2029. Juniper forecast that more than 45% of UK adults will use the government app – whereas private sector providers will see just 9% growth over the same period. The Data (Use and Access) Bill, which is making its way through Parliament, includes legislation that will help to enable the widespread use of digital identity tools supported by government data. The signatories of the open letter want to see amendments to the bill that will avoid “distortions caused by exclusivity or unfair state-subsidised pricing”. Specifically, they are calling for: the Gov.uk Wallet and the government’s One Login single sign-on tool to be statutorily limited to authentication for public services, to avoid competing with private sector alternatives; digital identity software that is certified as compliant with the government’s trust framework to be accepted for authentication with public services, rather than One Login having a monopoly for online government access; and government-issued credentials, such as the digital driving licence that Kyle announced alongside the Gov.uk Wallet, to be allowed to be held in any certified wallet, not just the government’s own product. The industry groups have proposed a joint technical working group to ensure collaboration between government and certified identity and wallet providers. “Investors will be further reassured if these equality, portability and interoperability principles are enshrined in the Data (Use and Access) Bill,” they wrote. Computer Weekly reported in February about significant concerns among digital ID suppliers and their investors about the government’s plans to compete against them for commercial uses. 
Iain Corby, executive director of the Age Verification Providers Association, said that better collaboration would also avoid accusations that the government is attempting to introduce a national digital identity scheme. “This is not an area where government by press release is wise – the sudden and very public announcement, made without any consultation, gave many in the industry the impression that digital identity was being nationalised, and that could easily translate to the wider public as a threat of a national ID card by the back door,” he said. “If that happens, the benefits of digital ID will be lost for another decade. “We have had to act now, rather than wait for a consultative meeting scheduled for next month after Computer Weekly first reported this story, because there is still time to amend the Data (Use and Access) Bill to guarantee equality and consumer choice between private and government-issued digital IDs.” A government spokesperson said: “Citizens have dealt with sluggish processes for too long. Our Gov.uk Wallet will give millions of Britons access to all their existing government credentials – like their driving licence – from their phones and save them hours of wasted time. We will work with the UK digital identity sector to provide the best possible experience to people who choose to use digital identity technology.” Read more about digital identity UK digital identity turns to drama (or farce?) over industry fears and security doubts: Government at loggerheads with industry. Warnings of serious security and data protection problems undermining a vital public service. A burgeoning political campaign seeking greater influence. What’s the government up to with digital verification services? Is the government really looking to compete with the private sector for provision of digital identity? Such a move risks fundamentally undermining public trust in critical digital verification services. Distrust builds between digital ID sector and government amid speculation over ‘ID cards by stealth’: Wednesday 2 April saw the latest meeting of the All-Party Parliamentary Group on digital identity.
  • WWW.COMPUTERWEEKLY.COM
    Secure Future Initiative reveals Microsoft staff focus
News Secure Future Initiative reveals Microsoft staff focus IT security is now a metric in the Microsoft employee appraisal process By Cliff Saran, Managing Editor Published: 22 Apr 2025 16:45 Every Microsoft employee now has a metric dubbed “Security Core Priority” tied directly to performance reviews. This is among the changes the software giant has put in place to enforce security internally. In a blog post outlining the steps the company has taken to harden internal security, Charlie Bell, executive vice-president of Microsoft Security, wrote: “We want every person at Microsoft to understand their role in keeping our customers safe and to have the tools to act on that responsibility.” He said 50,000 employees have participated in the Microsoft Security Academy to improve their security skills, and that 99% of employees have completed the company’s Security Foundations and Trust Code courses. In May 2024, Microsoft introduced a governance structure to improve risk visibility and accountability. Since then, Bell said, Microsoft has appointed a deputy chief information security officer (CISO) for business applications and consolidated responsibility across its Microsoft 365 and Experiences and Devices divisions. “All 14 Deputy CISOs across Microsoft have completed a risk inventory and prioritisation,” he said, adding that this creates a shared view of enterprise-wide security risk. Bell said new policies, behavioural-based detection models and investigation methods have helped to thwart $4bn in fraud attempts. One example of where modelling can be used is in preventing an attacker that has gained access to one system from moving laterally to other systems inside the company network. Microsoft said that modelling IT assets as a graph reveals unknown vulnerabilities and classes of known issues that need to be mitigated to reduce what it describes as “lateral movement vectors”. According to its April 2025 progress report, Microsoft has made “significant” steps in adopting a standard software development kit for identity and in ensuring 100% of user accounts are protected with phishing-resistant multi-factor authentication (MFA). However, among the areas it is still working on are the protection of cryptographic signing keys and quantum-safe public key infrastructure (PKI). Read more about employee cyber security Nationwide Building Society to train people to think like cyber criminals: Nationwide wants to help bring more diversity into UK cyber security skills base through partnership with training specialist. NHS staff lack confidence in health service cyber measures: NHS staff understand their role in protecting the health service from cyber threats and the public backs them in this aim, but legacy tech and a lack of training are hindering efforts, according to BT. To protect high-risk production systems, Microsoft said that in November 2024 it moved 28,000 high-risk users, working on sensitive workflows, to a locked-down Azure Virtual Desktop infrastructure, and is working to improve the user experience for these endpoints. Regarding network protection, the report shows that the company is working on implementing network micro-segmentation by reimplementing access control lists. 
“Currently, 20% of first-party IPs [internet protocol addresses] are tagged and 93% of first-party services have established plans for allocating IPs from tagged ranges and provisioning IP capacity,” Microsoft said. It added that it’s also introducing new capabilities to help customers isolate and secure their network resources. These include Network Security Perimeter, DNS Security Extensions and Azure Bastion Premium private-only mode. In terms of its internal software development practices, Microsoft said it’s been driving four standards to help ensure open source software (OSS) used in its production environments is sourced from governed internal feeds and free of known critical and high-severity public vulnerabilities. In the report, Microsoft said Component Governance, a software composition analysis tool that tracks OSS usage and vulnerabilities, has achieved broad adoption and is enabled by default. It also has an offering called Centralized Feed Service, which provides governed feeds for consuming open source software. According to Microsoft, this has reached broad adoption.
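The graph idea mentioned above is straightforward to illustrate. The sketch below is a minimal, hypothetical example – not Microsoft’s tooling – that models assets and permitted network paths as a directed graph and uses a breadth-first search to show which systems an attacker could reach from one compromised host; tightening access control lists removes edges and shrinks that set.

```typescript
// A minimal sketch of graph-based lateral-movement analysis (hypothetical example).

type AssetGraph = Map<string, Set<string>>; // asset -> assets reachable in one hop

function addEdge(graph: AssetGraph, from: string, to: string): void {
  if (!graph.has(from)) graph.set(from, new Set());
  graph.get(from)!.add(to);
}

// Breadth-first search over permitted network paths.
function reachableFrom(graph: AssetGraph, start: string): Set<string> {
  const seen = new Set<string>([start]);
  const queue = [start];
  while (queue.length > 0) {
    const current = queue.shift()!;
    for (const next of graph.get(current) ?? []) {
      if (!seen.has(next)) {
        seen.add(next);
        queue.push(next);
      }
    }
  }
  seen.delete(start);
  return seen; // potential lateral-movement targets
}

// Hypothetical flat network: one compromised workstation reaches everything.
const flat: AssetGraph = new Map();
addEdge(flat, "workstation-01", "file-server");
addEdge(flat, "file-server", "domain-controller");
addEdge(flat, "workstation-01", "build-server");

console.log(reachableFrom(flat, "workstation-01"));
// => Set { "file-server", "domain-controller", "build-server" }
// Removing the file-server -> domain-controller edge via an ACL would take the
// domain controller out of the blast radius.
```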
  • WWW.COMPUTERWEEKLY.COM
    Cyber ‘agony aunts’ launch guidebook for women in security
Two of the UK’s leading female cyber practitioners – Secureworks threat intelligence knowledge manager Rebecca Taylor and CybAid founder and Hewitt Partnerships managing director Amelia Hewitt – are to launch a book for women starting and navigating careers in the cyber security sector. The pair describe their co-authored book, Securely Yours, as a practical, agony aunt-style guide, drawing on their lived experience in the still male-dominated security industry. They aim to tackle a range of topics, including active allyship, the ever-present spectre of burnout, and building a professional brand. Many of these topics are drawn from questions posed by others whom the two have mentored during their careers. “No topic is too taboo,” they said. Although Securely Yours reflects on the experiences of, and questions raised by, women specifically, the authors hope its practical advice will be helpful to anybody looking to advance their security career, from whatever background, as well as serving as a resource for those in positions of privilege to better support inclusion in security. “It’s been an immense privilege to share not only our experiences and the advice we’ve gained across our careers, but the insights of a range of incredible individuals who have each had their own journeys within cyber, to create a resource we hope has something for every reader,” said Hewitt. “I am fortunate to have had an amazing cyber security career, and I want others to have the same,” added Taylor. “I feel an accountability within me to elevate, support and mentor underrepresented groups to own their platforms, voices and opportunities. This book is a manifestation of that. I truly hope it makes a difference and helps those wanting to thrive in cyber know that they can do it, that we have their back and that they’re not alone on their journey,” she said. Taylor began her career at Secureworks – now part of Sophos – in an administrative role, before taking advantage of a forward-thinking internal culture at the business to develop her career and security expertise. In a 2024 interview with Computer Weekly, she said: “I started doing lots of studying and learning in the background, and through mentorship and exposure around the business, really focused on progressing my career.” Taylor now works on multiple diversity initiatives – not just gender – both within and outside the business. Over the years, she has supported many other women in the industry with mentoring and other guidance. As pushback against official diversity, equity and inclusion (DEI) initiatives becomes stronger, community support networks – which are often underpinned by mentorships – are becoming particularly valuable for underrepresented groups in security and the wider technology industry. However, recent research conducted by US-based non-profit Women in CyberSecurity (WiCyS) found that 57% of women in the sector feel excluded from career and growth opportunities, and 56% do not feel they are respected. This is leading to a situation where almost 40% feel unable to be their authentic selves in the workplace, and given that security suffers from high rates of workforce attrition and burnout, Hewitt and Taylor said it is imperative that more is done to help people feel supported and respected at all stages of their careers. Securely Yours will be available from 1 May 2025 on Amazon, and its launch will be accompanied by a podcast series to expand on the conversations started in the book and offer additional support. 
Read more about women in cyber Women make up a higher percentage of new entrants to the cyber security profession, particularly among younger age groups, and are increasingly taking up leadership positions and hiring roles, but challenges persist. IBM signs on to a partnership deal in support of the popular NCSC CyberFirst Girls scheme, designed to foster gender diversity in the cyber security profession. ISC2’s Clar Rosso talks about diversifying the cyber workforce and supporting cyber professionals as they keep up with growing compliance and security policy demands.
  • WWW.COMPUTERWEEKLY.COM
    Cyber attack downs systems at Marks & Spencer
Veteran UK retailer Marks & Spencer (M&S) has apologised to customers after a cyber incident of a currently undisclosed nature forced multiple public-facing services offline, with shoppers predictably taking to social media in their droves to lament the outages. In a note published on the afternoon of 22 April, the company revealed it had been “managing a cyber incident” affecting contactless payments and online click-and-collect services over the Easter Bank Holiday. According to reports, a second technical problem occurred at the weekend affecting only contactless payments. “As soon as we became aware of the incident, it was necessary to make some minor, temporary changes to our store operations to protect customers and the business and we are sorry for any inconvenience experienced,” a spokesperson said. “Importantly, our stores remain open and our website and app are operating as normal. “Customer trust is incredibly important to us, and if the situation changes an update will be provided as appropriate,” they added. M&S additionally said it has enlisted third-party cyber forensics experts to assist with incident management, and is taking further actions to protect its network and ensure it can continue to maintain its customer services. Computer Weekly also understands the cyber attack has been reported to the Information Commissioner’s Office (ICO) and the National Cyber Security Centre (NCSC). “The incident at Marks & Spencer serves as a reminder of the interdependencies in modern retail operations. The disruption to click-and-collect services and contactless payments underscores how any technical issue can have far-reaching consequences across an entire organisation,” said Javvad Malik, lead security awareness advocate at KnowBe4. “M&S’s prompt communication and engagement with the ICO demonstrate a commendable level of transparency and regulatory compliance. However, the event also reveals potential gaps in cyber resilience and crisis management strategies.” Although unconfirmed at this stage, the nature of the attack’s impact, and the language deployed by M&S, suggests the retailer may be dealing with the impact of a ransomware attack on certain systems. But regardless of the precise nature of the incident, it is by no means an isolated one, with retailers frequently in the crosshairs of threat actors. For example, retailers have high public brand awareness, upon which cyber criminals like to capitalise for their own fame and notoriety. Added to this, cyber criminals can use the seasonal nature of the retail sector to ramp up pressure on the victim by disrupting their business at a critical point, making them more likely to cave to extortion demands – the timing of the M&S incident over the long Easter weekend may bear this out. Meanwhile, the growth of omnichannel approaches to retail increases the exposed attack surface, as does the adoption of new technologies, such as AI-powered recommendation engines. According to NCC Group, the consumer cyclicals (non-essential purchases) and non-cyclicals (essential purchases) sectors, which both encompass retailers in general, were the second and fifth most targeted verticals by cyber criminal ransomware gangs in the first half of 2024. “There is an urgent need for all sectors to respond to this increased targeting from threat actors, but especially those storing huge amounts of data,” said Matt Hull, global head of threat intelligence at NCC Group. 
“Now more than ever businesses should expect to be a target for cyber criminals and take a proactive approach to security rather than waiting for potential threats to strike.” Read more about retail technology An Ericsson report finds retailers identified networking and IT as the biggest frustration, with two-fifths suffering loss of revenue at remote branch locations because of poor connectivity. When Tesco Clubcard was developed 30 years ago, using the technology of the time to analyse data was a long shot, but it grew into a scheme that birthed retail loyalty as we know it today. UK supermarkets continue to deal with the impact of a ransomware attack on the systems of supply chain software supplier Blue Yonder, which is disrupting multiple aspects of their businesses including deliveries.
  • WWW.COMPUTERWEEKLY.COM
    Beyond baselines - getting real about security and resilience
In 2024, the National Cyber Security Centre (NCSC) celebrated a decade of its baseline cyber security certification, Cyber Essentials (CE). While the NCSC has touted the scheme’s benefits, CEO Richard Horne has nonetheless been explicit about the “widening gap” between the UK’s cyber defences and the threats faced. This comes amid a heightened level of physical threat from state actors, including via sabotage and espionage, as well as greater awareness of state threats to research and innovation. This changing threat picture has cast greater attention on the work of the National Protective Security Authority (NPSA), the UK’s national technical authority for physical and personnel protective security. The elevated threat raises the question of whether the NPSA should follow the NCSC’s lead and develop its own baseline protective security certification as an equivalent to CE. However, to address the threat and build genuine resilience, we believe the UK needs an approach that goes beyond baselines and is informed by risk. The CE certification was launched in 2014. It outlines a baseline level of security that is intended to be universally applicable and risk-agnostic. The NCSC asserts that CE is “suitable for all organisations, of any size, in any sector”. CE is assessed without reference to the organisation or its risk profile because the CE controls are aimed at commodity attacks that are ubiquitous for internet-connected organisations. After 10 years, the number of organisations certified under CE continues to increase year-on-year. The NCSC also has plans to expand the scheme further to better address supply chain risks. These achievements notwithstanding, there have been suggestions that the adoption of CE has been lower than expected, with one report stating that uptake remains below 1% of eligible organisations. The argument for a baseline cyber security certification is a good one; strengthening the cyber security of individual organisations leads to a more resilient ecosystem and is a public good. The controls involved in CE are sufficiently universal that there is no need for the application process to refer to an organisation’s specific risk assessment. However, there are reasons to question whether a CE-equivalent baseline security certification for protective security could be effective. First, it is harder to identify a single shared ‘baseline’ level of protective security. CE is focused on five core security controls applicable to any organisation. It is not clear that a similar baseline set of controls could be constructed to simultaneously address areas as diverse as physical security, insider threat, or the security of research collaboration. Second, the CE controls would almost certainly be duplicated in any protective security certification. This might deter organisations that already have CE from seeking the new certification – at a time when relatively few organisations have CE. Third, the creation of separate NCSC and NPSA baseline certifications would reinforce silos between different aspects of security. We should be moving towards an approach in which organisations adopt a proportionate approach to security that addresses threats regardless of their means of realisation. An attempt to mirror CE in the protective security space therefore risks falling between two stools: overly strenuous for most organisations, while insufficient to tackle genuine threats. At the same time, it risks reinforcing an unhelpful physical-cyber divide in many organisations’ approach to security. 
CE remains relevant at a technical level, but the way it is framed increasingly appears as a holdover from an earlier geopolitical age. The cyber security industry often portrays its work as primarily technical and unobjectionable. Cyber threats can be presented as impersonal – an inevitable consequence of being online. The NCSC refers to CE as "basic cyber hygiene", and similar metaphors from public health or ecology are regularly deployed to 'de-securitise' these security controls.

In contrast, the UK has become increasingly explicit about the deteriorating threat environment and the necessity of a concerted response. That messaging is likely to accelerate as the UK government builds the public case necessary for a significant increase in defence spending. This would also align with the UK's widening national conversation on resilience across domains and sectors.

The forthcoming Cyber Security and Resilience Bill (CSRB) is an example of this trend. Although the CSRB is primarily targeted at bolstering cyber defences for critical services, it is part of a set of parallel efforts on physical security, economic stability and community preparedness that aim at a holistic approach to threats. The UK Government's Resilience Framework outlines an all-hazards approach, covering everything from extreme weather and pandemics to supply chain disruptions and critical national infrastructure (CNI) failures, and emphasises preparation and prevention across society. A new National Security Council on resilience has also been created, chaired by the Chancellor of the Duchy of Lancaster and made up of the Secretaries of State for a wide range of sectors. A separate 'physical security' certification scheme would run contrary to this trend towards a holistic approach to resilience.

Read more about the NCSC's work
IBM signs on to a partnership deal in support of the popular NCSC CyberFirst Girls scheme designed to foster gender diversity in the cyber security profession.
The NCSC urges service providers, large organisations and critical sectors to start thinking today about how they will migrate to post-quantum cryptography over the next decade.
NCSC CEO Richard Horne is to echo wider warnings about the growing number and severity of cyber threats facing the UK as he launches the security body's eighth annual report.

Rather than developing separate certifications, a better option would be a unified security resilience certification for at-risk organisations. This model would complement established baselines like CE. Unlike the baseline approach of CE, the starting point for the new certification would be a credible organisational security risk assessment. This assessment would be integrated, bridging security domains such as cyber, physical and personnel security. Beyond this, the framework would be modular, reflecting the absence of a single organisation-agnostic baseline in protective security.

The scheme would certify that the organisation had taken proportionate protective security measures in response to its own risk assessment. Achieving this standard would require substantial effort and would not be appropriate for most organisations. The certification process would necessarily be more in-depth than the process for CE. Nonetheless, by leveraging unified risk profiling and cross-sector collaboration between the NCSC and NPSA, this approach would enable organisations to go beyond compliance checklists to achieve genuine, outcome-focused resilience.
This certification would be accompanied by an awareness campaign that is frank about the geopolitical threat faced by at-risk organisations. It would be important to make clear that this is not ‘business as usual’. This approach would reduce certification fatigue while delivering a robust, adaptive defence posture. It aligns with forthcoming resilience legislation, and with a broader national view of resilience as a desirable achievement in an increasingly turbulent geopolitical landscape. Neil Ashdown is head of research for Tyburn St Raphael, a security consultancy. Tash Buckley is a former research analyst at RUSI and a security educator and lecturer, researching cyber power and the intersection of science, technology innovation, and national security.
  • WWW.COMPUTERWEEKLY.COM
    AI-powered APIs proving highly vulnerable to attack
More than 150 billion application programming interface (API) attacks were observed in the wild during 2023 and 2024, according to data released this week by cloud security specialist Akamai, with the growth of artificial intelligence (AI) powered APIs and AI-enabled attacks compounding to create a steadily expanding attack surface.

In its latest State of apps and API security 2025 report, Akamai also said it observed volumes of web-based cyber attacks rise by a third over the course of 2024 to 311 billion all told, a pronounced surge that appears to correlate closely with an expansion in the scope of threats arising from AI.

"AI is transforming web and API security, enhancing threat detection but also creating new challenges," said Rupesh Chokshi, senior vice-president and general manager of Akamai's Application Security Portfolio. "This report is a must read to understand what's driving the shift and how defenders can stay ahead with the right mitigation strategies."

Akamai said the integration of AI tools with core platforms via APIs is "substantially" expanding the attack surface because the vast majority of AI-powered APIs are not only publicly accessible, but also tend to rely on inadequate protections, lacking authentication mechanisms, for example. This problem is now compounded by a growing number of AI-driven attacks.

For end-users, this means that while security teams are able to enhance web application and API security by augmenting their defensive capabilities with AI-powered automation – for example, by helping to find threats, predict possible breaches and bring down incident response times – AI also helps attackers improve the effectiveness of their attacks by automating web scraping and bringing more dynamic attack methodologies to bear.

Looking ahead, Akamai said that although AI-driven API management would doubtless continue to evolve, AI-driven attacks would likely remain a significant concern, meaning organisations need to adopt more robust, defence-in-depth security strategies.

Turning to web attacks, Akamai said it observed a dramatic rise in application layer (aka Layer 7) distributed denial of service (DDoS) attacks targeting both web apps and APIs, with monthly volumes growing from over 500 billion at the start of 2023 to more than a trillion at the end of 2024 – bad bots and the persistence of HTTP flooding as an attack vector appear to have driven this.

The technology sector was the most frequently targeted vertical for such attacks, sustaining more than seven trillion during the period covered by the survey. Broken out by geography, EMEA was on the receiving end of 2.7 trillion Layer 7 DDoS attacks, with 306 billion hitting targets in the UK and 369 billion in Germany.

Akamai said that safeguarding web apps and APIs would continue to be an ever more essential need for organisations.
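Akamai's point about publicly accessible AI-powered APIs lacking basic protections can be made concrete with a minimal sketch of the two controls most often cited: token authentication and per-client rate limiting. This is an illustrative example only, not anything drawn from the Akamai report – the framework choice (Flask), the endpoint, the token store and the limits are all assumptions.

```python
# Minimal sketch (not production code) of two controls the report says
# AI-powered APIs frequently lack: token authentication and per-client
# rate limiting. Endpoint, token store and limits are illustrative.
import time
from collections import defaultdict, deque
from functools import wraps

from flask import Flask, abort, jsonify, request

app = Flask(__name__)

API_TOKENS = {"example-token-123": "demo-client"}  # hypothetical token store
RATE_LIMIT = 30        # max requests...
WINDOW_SECONDS = 60    # ...per rolling window per client
_request_log = defaultdict(deque)  # client id -> timestamps of recent requests


def require_token_and_rate_limit(handler):
    @wraps(handler)
    def wrapper(*args, **kwargs):
        # 1. Authentication: reject requests without a known bearer token.
        token = request.headers.get("Authorization", "").removeprefix("Bearer ").strip()
        client = API_TOKENS.get(token)
        if client is None:
            abort(401)

        # 2. Rate limiting: drop timestamps outside the window, then check the count.
        now = time.time()
        window = _request_log[client]
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()
        if len(window) >= RATE_LIMIT:
            abort(429)
        window.append(now)

        return handler(*args, **kwargs)
    return wrapper


@app.route("/v1/ai/summarise", methods=["POST"])  # hypothetical AI-backed endpoint
@require_token_and_rate_limit
def summarise():
    text = (request.get_json(silent=True) or {}).get("text", "")
    # Placeholder for the actual model call.
    return jsonify({"summary": text[:100]})


if __name__ == "__main__":
    app.run(port=8000)
```

In practice these controls would usually sit behind an API gateway with centrally managed credentials, but even this much removes the anonymous, unthrottled access the report describes.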
It laid out a number of key actions that security leaders should consider taking (a minimal 'shift-left' spec check is sketched after this list):

Lay down an API security plan incorporating shift-left and DevSecOps techniques to integrate security from initial API design through post-production, paying particular attention to continuous discovery and visibility, authentication, rate limiting and bot mitigation;
Implement more robust core security measures, such as continuous threat monitoring and response, and use API testing tools such as dynamic application security testing (DAST);
Be proactive against threats, using specialised DDoS protection tools, for example, and paying attention to patch management, access control and network segmentation;
Act early to mitigate API vulnerabilities, following established guidelines, such as OWASP's, to help ensure more robust security, and address risks associated with bad coding practice or misconfigurations;
Pay more attention to ransomware threats, taking advantage of zero-trust architectures, microsegmentation and the Mitre ATT&CK framework;
Finally, prepare for AI with defence strategies that include bot defences, AI-powered cyber tools, specialist firewalls and more proactive measures such as continuous assessment and zero trust.

Read more about API security
Lax API protections make it easier for threat actors to steal data, inject malware and perform account takeovers. An API security strategy helps combat this.
APIs are the backbone of most modern applications, and companies must build in API security from the start. Follow these guidelines to design, deploy and protect your APIs.
External API integrations are critical, but so is managing third-party API risks to maintain customer trust, remain compliant and ensure long-term operational resilience.
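As one way of acting on the 'shift-left' advice above, the sketch below checks an OpenAPI document for operations that declare no security requirements before anything is deployed. The file name and pass/fail policy are assumptions for illustration; it complements, rather than replaces, the DAST and discovery tooling Akamai recommends.

```python
# Illustrative "shift-left" check, run in CI against an OpenAPI document, that
# flags operations declaring no security requirements. File name and exit-code
# policy are assumptions for the sketch.
import json
import sys

HTTP_METHODS = {"get", "put", "post", "delete", "options", "head", "patch", "trace"}


def unauthenticated_operations(spec: dict) -> list[str]:
    global_security = spec.get("security", [])
    findings = []
    for path, item in spec.get("paths", {}).items():
        for method, operation in item.items():
            if method.lower() not in HTTP_METHODS or not isinstance(operation, dict):
                continue
            # An operation is unprotected if neither it nor the document as a
            # whole declares a security requirement (or it explicitly sets []).
            effective = operation.get("security", global_security)
            if not effective:
                findings.append(f"{method.upper()} {path}")
    return findings


if __name__ == "__main__":
    with open(sys.argv[1] if len(sys.argv) > 1 else "openapi.json") as fh:
        spec = json.load(fh)
    issues = unauthenticated_operations(spec)
    for op in issues:
        print(f"WARNING: no security requirement on {op}")
    sys.exit(1 if issues else 0)
```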
  • WWW.COMPUTERWEEKLY.COM
Ofcom bans leasing of Global Titles to crack down on spoofing
everythingpossible - stock.adobe News Ofcom bans leasing of Global Titles to crack down on spoofing Telco regulator Ofcom is cracking down on a loophole being exploited by cyber criminals to access sensitive mobile data By Cliff Saran, Managing Editor Published: 22 Apr 2025 12:06

Telco regulator Ofcom has said it is closing a technical loophole that poses a risk to mobile users' privacy and security, announcing a crackdown on the leasing of phone numbers known as Global Titles.

Global Titles (GTs) are used by mobile networks to send and receive signalling messages, which help to ensure a call or text message is received by the intended recipient. According to Ofcom, these signalling messages are used in the background, supporting billions of calls and text messages sent worldwide, and are never seen by mobile phone users.

However, Ofcom said criminals can use Global Titles to intercept and divert calls and messages, which enables them to capture information held by mobile networks. As an example, a hacker could intercept security codes sent by banks to a customer via a text message.

Global Titles are sometimes leased out by mobile networks – largely to legitimate businesses who use them to offer mobile services. However, they can fall into the wrong hands. This can lead to the security and privacy of ordinary mobile users being compromised, as their personal data may be directly or indirectly accessed by criminals.

The National Cyber Security Centre (NCSC) has recognised the risk presented by Global Titles in telecommunications. Citing the failure of industry-led efforts to address these issues, Ofcom said it has taken the step of banning the leasing of Global Titles with immediate effect.

Discussing the leasing of Global Titles, NCSC chief technical officer Ollie Whitehouse said: "This technique, which is actively used by unregulated commercial companies, poses privacy and security risks to everyday users, and we urge our international partners to follow suit in addressing it."

Among the Global Titles-based services that Ofcom regards as higher risk is Home Location Register (HLR) lookup, alongside authentication services, least-cost routing and number authentication services. Ofcom said HLR lookup is an example of a higher-risk service because it facilitates access to operational data held by mobile networks, some of which may be personal data and/or location data, which is subject to legal requirements under relevant data protection legislation.

"We expect range holders [the original telco operator holding the GT] providing, or indirectly facilitating provision of, HLR lookup services to be alert to the risk that such services may be facilitating access to operational data held by mobile networks which may be contrary to relevant data protection legislation," Ofcom stated in its Guidance for number range holders to prevent misuse of Global Titles.

Ofcom's group director for networks and communications Natalie Black said: "We are taking world-leading action to tackle the threat posed by criminals gaining access to mobile networks. Leased Global Titles are one of the most significant and persistent sources of malicious signalling. Our ban will help prevent them from falling into the wrong hands – protecting mobile users and our critical telecoms infrastructure in the process."

"Alongside this, we have also published new guidance for mobile operators on their responsibilities to prevent the misuse of their Global Titles," added Black.
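Ofcom's guidance is regulatory rather than technical, but for range holders the practical task is largely bookkeeping: knowing which Global Titles are leased, to whom and until when, and spotting signalling that originates from GTs outside that register. The sketch below is purely illustrative of that idea – the record fields, register format and numbers are assumptions, not anything specified by Ofcom or the NCSC.

```python
# Purely illustrative sketch of the bookkeeping Ofcom's guidance implies for
# range holders: keep a register of leased Global Titles and flag signalling
# records originating from GTs with no valid lease. All fields are assumptions.
from dataclasses import dataclass
from datetime import date


@dataclass
class GlobalTitleLease:
    global_title: str   # numbering string used as the GT
    lessee: str         # business the GT is leased to
    expires: date       # existing leases must end by 22 April 2026


def build_register(leases: list[GlobalTitleLease], today: date) -> dict[str, GlobalTitleLease]:
    """Index currently valid leases by Global Title."""
    return {lease.global_title: lease for lease in leases if lease.expires >= today}


def flag_unauthorised(signalling_records: list[dict], register: dict) -> list[dict]:
    """Return records whose calling Global Title is neither one of the range
    holder's own GTs nor covered by a valid lease."""
    own_gts = {"447700900000"}  # hypothetical GT the operator uses itself
    return [
        record for record in signalling_records
        if record.get("calling_global_title", "") not in own_gts
        and record.get("calling_global_title", "") not in register
    ]


if __name__ == "__main__":
    register = build_register(
        [GlobalTitleLease("447700900123", "ExampleMessagingCo", date(2026, 4, 21))],
        today=date.today(),
    )
    records = [{"calling_global_title": "447700900999", "type": "HLR lookup"}]
    for rec in flag_unauthorised(records, register):
        print("Review signalling from unregistered Global Title:", rec)
```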
The ban on entering new leasing arrangements is effective immediately. For leasing that is already in place, the ban will come into force on 22 April 2026. This will give legitimate businesses that currently lease Global Titles from mobile networks time to make alternative arrangements.

Read more about mobile network security
Building mobile security awareness training for end users: Do concerns of malware, social engineering and unpatched software on employee mobile devices have you up at night? One good place to start is mobile security awareness training.
Why mobile security audits are important in the enterprise: Mobile devices bring their own set of challenges and risks to enterprise security. To handle mobile-specific threats, IT should conduct regular mobile security audits.
  • WWW.COMPUTERWEEKLY.COM
    Investigatory Powers Tribunal has no power to award costs against PSNI over evidence failures
The Investigatory Powers Tribunal, the court that rules on the lawfulness of surveillance by police and intelligence agencies, has no powers to award costs against government bodies when they deliberately withhold or delay the disclosure of relevant evidence or fail to follow court orders.

A panel of five judges has found that the tribunal has no statutory powers to impose sanctions against police forces or intelligence agencies if they delay or fail to follow orders from the tribunal to disclose relevant evidence.

The ruling comes after the Investigatory Powers Tribunal found that two UK police forces had unlawfully spied on investigative journalists Barry McCaffrey and Trevor Birney, including harvesting phone data, following their investigations into police corruption. The Police Service of Northern Ireland (PSNI) targeted Birney and McCaffrey after they produced a documentary exposing police collusion in the murders of six innocent Catholics watching a football match in Loughinisland in 1994. Although the people alleged to be behind the killings are known to police, none have been prosecuted.

The tribunal acknowledged in a judgment on 18 April that the PSNI repeatedly withheld and delayed the disclosure of important evidence, in some cases until the night before a court hearing. However, the five tribunal judges concluded they had no statutory powers to award costs against the police force. The judges have called for the Secretary of State to intervene to address the matter by introducing rules for the tribunal or passing primary legislation.

"We do not regard the outcome as entirely satisfactory … the facts of the present case illustrate why it would be helpful at least in principle for this tribunal to have the power to award costs," the judges said. They added that they "see force" in the journalists' submissions "that there is a need for the tribunal to have the power to award costs, in particular against respondents, where there has been expenditure wasted as a result of their conduct and where, in particular, orders of the tribunal are persistently breached".

However, the five judges found they had no powers to award costs under existing legislation or the tribunal rules, and that it "would be a matter for the Secretary of State or Parliament" to intervene, they said in a 19-page ruling.

Birney and McCaffrey had claimed for reimbursement of part of their legal costs, after the PSNI allegedly misled the tribunal by obfuscating critical evidence of PSNI and Metropolitan Police surveillance operations against them, leading to two court hearings having to be abandoned.

Ben Jaffey KC, representing the journalists, told a tribunal hearing in March 2024 that the PSNI had failed to disclose surveillance operations against the two journalists until the night before scheduled court hearings, in breach of the tribunal's orders. In one case, the PSNI served key evidence at 11:19pm the night before a court hearing, forcing the journalists' lawyers to work through the night and leaving no time to properly consider the evidence the next day. On another occasion, the PSNI failed to disclose a Directed Surveillance Order against the two journalists until the morning of a court hearing, when the journalists' legal representative was allowed to take notes from it but not allowed a copy.

Commenting on the ruling, Trevor Birney said the tribunal's conclusion that it lacked power to order costs was deeply disturbing.
"The tribunal has effectively said that public bodies can behave badly – delay, obstruct, conceal – and face no consequence," he said. "That's not justice; it's a reward for wrongdoing."

Barry McCaffrey added: "The tribunal recognised the delays and failures in disclosure but effectively said its hands were tied. That leaves us with a system where transparency and accountability can be deliberately undermined without fear of reprisal."

How the PSNI delayed disclosure of key evidence
16 February 2024: Durham Police incorrectly told the lawyers of the journalists there had been no Directed Surveillance Authorisation against them.
16 February 2024: PSNI served a "skeleton argument" making additional disclosures but neglected to disclose the Directed Surveillance Authorisation.
23 February 2024: PSNI disclosed the existence of a Directed Surveillance Authorisation issued to authorise spying against the journalists "only two clear days" before a court hearing: "No clear explanation has ever been offered for that exceptionally late disclosure."
25 February 2024: PSNI disclosed further evidence at 11:19pm on the evening before a court hearing.
26 February 2024: On the day of the scheduled court hearing, a lawyer for the two journalists was allowed to view and take limited notes of the Directed Surveillance Authorisation only on the morning before the hearing. Lawyers for the journalists attempted to make sense of large volumes of additional material disclosed just before the court hearing, including a disclosure from the PSNI that it had received extensive phone communications data from the Metropolitan Police Service, which was later added as a respondent to the case. The hearing was adjourned late on the first day as it could "not fairly go ahead".
8 May 2024: PSNI disclosed large volumes of evidence raising new issues, including a "defensive operation" by the PSNI to monitor police phone calls to journalists, and details of an attempt by Durham Constabulary to preserve journalist Trevor Birney's emails stored on Apple's iCloud. The tribunal ordered further searches of evidence and written explanations for the late disclosure.

The journalists warned that the ruling risks eroding public confidence in legal safeguards and sets a dangerous precedent that could embolden further misconduct by public authorities. The chief constable of the PSNI, Jon Boutcher, has appointed Angus McCullogh KC to conduct a review into PSNI surveillance of lawyers and journalists. Birney and McCaffrey have called for a full public inquiry into the unlawful surveillance and institutional failures surrounding their case.

Read more about police surveillance of journalists in Northern Ireland
Investigative reporter Dónal MacIntyre has asked the Investigatory Powers Tribunal to look into allegations that he was placed under directed surveillance and had his social media posts monitored by Northern Ireland police.
Journalists seek legal costs after PSNI's 'ridiculous' withholding of evidence in spying operation delayed court hearings.
The Metropolitan Police monitored the phones of 16 BBC journalists on behalf of police in Northern Ireland, a cross-party group of MPs heard.
Over 40 journalists and lawyers submit evidence to PSNI surveillance inquiry.
Conservative MP adds to calls for public inquiry over PSNI police spying.
Tribunal criticises PSNI and Met Police for spying operation to identify journalists' sources.
Detective wrongly claimed journalist's solicitor attempted to buy gun, surveillance tribunal hears.
Ex-PSNI officer 'deeply angered' by comments made by a former detective at a tribunal investigating allegations of unlawful surveillance against journalists.
Detective reported journalist's lawyers to regulator in 'unlawful' PSNI surveillance case.
Lawyers and journalists seeking 'payback' over police phone surveillance, claims former detective.
We need a judge-led inquiry into police spying on journalists and lawyers.
Former assistant chief constable, Alan McQuillan, claims the PSNI used a dedicated laptop to access the phone communications data of hundreds of lawyers and journalists.
Northern Irish police used covert powers to monitor over 300 journalists.
Police chief commissions 'independent review' of surveillance against journalists and lawyers.
Police accessed phone records of 'trouble-making journalists'.
BBC instructs lawyers over allegations of police surveillance of journalist.
The Policing Board of Northern Ireland has asked the Police Service of Northern Ireland to produce a public report on its use of covert surveillance powers against journalists and lawyers after it gave 'utterly vague' answers.
PSNI chief constable Jon Boutcher has agreed to provide a report on police surveillance of journalists and lawyers to Northern Ireland's policing watchdog but denies industrial use of surveillance powers.
Report reveals Northern Ireland police put up to 18 journalists and lawyers under surveillance.
Three police forces took part in surveillance operations between 2011 and 2018 to identify sources that leaked information to journalists Trevor Birney and Barry McCaffrey, the Investigatory Powers Tribunal hears.
Amnesty International and the Committee on the Administration of Justice have asked Northern Ireland's policing watchdog to open an inquiry into the Police Service of Northern Ireland's use of surveillance powers against journalists.
Britain's most secret court is to hear claims that UK authorities unlawfully targeted two journalists in a 'covert surveillance' operation after they exposed the failure of police in Northern Ireland to investigate paramilitary killings.
The Police Service of Northern Ireland is unable to delete terabytes of unlawfully seized data taken from journalists who exposed police failings in the investigation of the Loughinisland sectarian murders.
The Investigatory Powers Tribunal has agreed to investigate complaints by Northern Ireland investigative journalists Trevor Birney and Barry McCaffrey that they were unlawfully placed under surveillance.
  • WWW.COMPUTERWEEKLY.COM
    Collaboration is the best defence against nation-state threats
Maksim Kabakou - Fotolia Opinion Collaboration is the best defence against nation-state threats The rise of DeepSeek has prompted the usual well-documented concerns around AI, but also raised worries about its potential links to the Chinese state. The Security Think Tank considers the steps security leaders can take to counter the threat posed by nation-state industrial espionage. By Stephen McDermid, Okta Published: 17 Apr 2025

Businesses are under attack from all corners of the globe, and while many organisations may think that nation-state threat actors would never target or be interested in them, the reality is that no one is exempt from security threats.

Security leaders need to ensure they are staying up to speed on the latest threat intelligence, whether through an in-house capability or via third-party threat intelligence providers. Once they understand the tactics, techniques and procedures (TTPs) deployed by these threat actors, organisations can then ensure they have robust mechanisms in place to digest and act on this information to implement appropriate controls.

Organisational culture plays a key role in ensuring everyone is aware of the threats and risks posed to the business. It is vital that leaders educate users on what the most prevalent threats may look like and how to respond; this is a primary defence in protecting the business.

Social engineering remains one of the most widely used methods of attack, so implementing processes that are resistant to individual compromise is key. Using phishing-resistant authentication methods, ensuring strict identity governance and control, and having a well-tested incident response capability are all crucial steps to preventing and mitigating these types of attacks.

Unfortunately, securing your own organisation is not enough: nation-state threat actors have historically taken advantage of weak third-party suppliers and supply chain governance. Strong supply chain governance and assurance is now one of the top trends across industries, and it is critical that businesses understand the dependencies and access that suppliers have.

The Security Think Tank on nation state espionage
Mike Gillespie and Ellie Hurst, Advent IM: Will DeepSeek force us to take application security seriously?
Elisabeth Mackay, PA Consulting: How CISOs can counter the threat of nation state espionage.
Andrew Hodges, Quorum Cyber: Countering nation-state cyber espionage: A CISO field guide.
Nick New, Optalysys: DeepSeek will help evolve the conversation around privacy.

If prevention fails, lateral movement post-compromise is one of the first actions threat actors will attempt, so endpoint detection and response, and zero-trust solutions that can prevent and detect unauthorised access, are also vital.

In 2023, 1.9 billion session cookies were stolen from Fortune 1000 employees. With a stolen session token, attackers can bypass MFA, making such attacks much harder to detect and respond to. Having solutions in place as part of a zero-trust architecture to detect session token replay attempts can stop these attacks and alert to possible credential or endpoint compromise.

Ultimately, collaboration and partnership across organisations and industry will help organisations understand these threats and the risks posed by nation-state actors and, more importantly, allow them to work together to prevent them.

Stephen McDermid is EMEA CSO at Okta
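McDermid's point about detecting session token replay can be illustrated with a simple server-side idea: bind each session token to the client context observed when it was issued, and treat reuse from a clearly different context as suspect. The sketch below is a minimal illustration using assumed attribute names and an in-memory store; commercial zero-trust and identity products evaluate far richer, continuously updated signals.

```python
# Minimal sketch of session-token binding to illustrate replay detection: each
# token is tied to the client context seen at login, and later requests from a
# markedly different context are flagged. Store, attributes and policy are
# assumptions for illustration only.
import hashlib
import secrets

SESSIONS: dict[str, dict] = {}  # token -> context captured at login


def _context_fingerprint(user_agent: str, ip_prefix: str) -> str:
    """Coarse fingerprint of the client context (illustrative attributes only)."""
    return hashlib.sha256(f"{user_agent}|{ip_prefix}".encode()).hexdigest()


def issue_session(user: str, user_agent: str, client_ip: str) -> str:
    token = secrets.token_urlsafe(32)
    SESSIONS[token] = {
        "user": user,
        "fingerprint": _context_fingerprint(user_agent, client_ip.rsplit(".", 1)[0]),
    }
    return token


def validate_session(token: str, user_agent: str, client_ip: str) -> str:
    """Return 'ok', 'unknown-token' or 'possible-replay'."""
    session = SESSIONS.get(token)
    if session is None:
        return "unknown-token"
    fingerprint = _context_fingerprint(user_agent, client_ip.rsplit(".", 1)[0])
    if fingerprint != session["fingerprint"]:
        # Token presented from a different device/network context: treat as a
        # possible replay, step up authentication and alert rather than trust it.
        return "possible-replay"
    return "ok"


if __name__ == "__main__":
    token = issue_session("alice", "Mozilla/5.0 (corporate laptop)", "203.0.113.10")
    print(validate_session(token, "Mozilla/5.0 (corporate laptop)", "203.0.113.25"))  # ok
    print(validate_session(token, "python-requests/2.31", "198.51.100.7"))  # possible-replay
```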
  • WWW.COMPUTERWEEKLY.COM
    AI's power play: the high-stakes race for energy capacity
The use of artificial intelligence (AI), particularly generative AI, relies on a lot of energy. As its adoption grows and people become more adept at harnessing its power, increasingly strong ties are being created between the tech and energy industries. Whilst this may be a good thing, it also brings about new challenges and legal considerations.

After all, the long-term success of digital infrastructure depends on two core issues. Technical and operational constraints naturally need to be considered but, at the same time, a significant emphasis should be placed on stakeholders establishing clear legal contracts and investment safeguards from the start of a project. It is understandable why individuals might get caught up in the hype and excitement of a new idea, but a proactive approach to identifying and clearly allocating project risks and rewards upfront is crucial for successfully navigating the complex legal environments over the years and decades in which these projects come online.

Training a single large language model can consume as much electricity as a small town. Data centres currently make up around 1.5% of global electricity demand. The International Energy Agency (IEA) forecasts that electricity demand from data centres will more than double by 2030, a hunger primarily driven by AI. This surge could require new global energy capacity equivalent to roughly four times the United Kingdom's current total electricity consumption.

This increasing energy demand is concentrated primarily in the areas where data centres are or will be located, straining local power grids and requiring either substantial and rapid grid infrastructure upgrades or, more commonly, a race between data centre owners and operators to secure reliable and sustainable energy sources dedicated to their operations.

While AI demands significant power, it also holds promise for improving energy management. AI can potentially optimise power grids, integrate renewable energy sources more effectively, predict equipment failures and enhance energy efficiency across various industries and buildings. This could help offset some of the overall impact on global energy demand. However, the energy sector has been slower to adopt AI than the tech and financial services industries. Further integration is expected here too.

The legal and contractual framework for AI-energy projects is intricate and, in many areas, novel. It involves navigating diverse regulatory systems, supply chain complexities and geopolitical uncertainties. This leads to complex negotiations concerning risk allocation, pricing mechanisms and responsibilities for avoiding downtime. Furthermore, the regulatory landscape for both AI and energy is constantly evolving, making compliance and contractual certainty a moving target.

In this dynamic and complex environment, it is crucial to anticipate, during the contract drafting phase, how disputes could arise and what mechanisms are needed to avoid them, or to resolve them early and quickly if they cannot be avoided. Contracts should be meticulously written to foresee potential issues while maintaining enough flexibility to allow for an inevitable degree of unpredictability during a project that will last for decades. That means parties need to clearly define their responsibilities, establish performance metrics (and how those will be tracked) and allocate risks effectively. Importantly, once a contract is signed, parties need to immediately and consistently apply and enforce it.
It should go without saying (you could argue that the fact it needs saying tells its own story), but incorporating robust governance and dispute resolution methods is essential, with international arbitration recommended for these multi-party, multi-contract projects given advantages such as neutrality, privacy and enforceability in cross-border contexts.

It is also prudent to proactively consider investment protections (including through investment agreements with host country governments and under public international law treaties) as well as potential restructuring scenarios, including upon events like force majeure, changes in law or financial distress. This foresight can help protect investments and ensure the continuity and long-term success of these critical projects in the face of unwelcome challenges.

This is important not only for the participants in the particular projects, but also for the wider energy and tech sectors, which will be impacted significantly by the availability of this important technology and the speed at which its adoption can grow.

Charlie Morgan is a partner in Herbert Smith Freehills' disputes practice with a focus on tech, energy and venture capital.
  • WWW.COMPUTERWEEKLY.COM
    Tariff turmoil is making supply chain security riskier
    Cyber security remained the most pressing challenge facing those in supply chain management roles during the first three months of 2025, but since the inauguration of Donald Trump in January, uncertainty over the president’s approach to tariffs has caused chaos for supply chains not just in the US, but around the world, and these two areas of risk are closely entwined. This is according to a report from cyber and risk management consultancy West Monroe, which found that while security remains top of mind for 23% of respondents to a recent polling exercise, the impact of tariffs has surged to become the top issue for 20%, in a matter of weeks edging out factors such as geopolitical tension, material costs, the climate crisis and labour costs. Although its fieldwork was conducted in March, prior to Trump’s so-called Liberation Day tariff announcement, West Monroe’s data shows that during Q1, a significant number of organisations in the US started making changes to their supply chains in advance. A total of 58% said they altered their product, materials or sourcing mix, 56% altered their transportation mix, 45% altered their production schedule, 31% updated their pricing to pass increased costs to customers, and 28% altered their geographic presence. “I don’t think these are necessarily quick changes to make, but there is cyber risk if and when those changes are made,” said Christina Powers, cyber security partner at West Monroe. Broadly, she said the need to move quickly to replace lost revenues, shifts in the supplier ecosystem and other impacts arising from the tariffs may create gaps in best practice when it comes to supply chain management. “For example, if you’re starting to work with a different supplier – maybe they were already on your list but they weren’t a tier one supplier, you’re tapping into tier two suppliers – so maybe they went through less due diligence and less scrutiny when you were initially onboarding them,” said Powers. “Or if you’re looking to change suppliers now, there could be a little more of a rushed diligence process being done to try to make that change more quickly,” she said. “There could be less visibility into what potential access these companies may have. From another angle, if you’re not working with a familiar contact, or not working with familiar processes, there’s a higher risk of things like impersonation attacks, whether or not that’s for financial gain or to get access to sensitive data.” Read more about the impact of US tariffs IT buyers appear to have spent the past few months refreshing PCs in preparation for the new US tariffs. Moore’s Law predicts that every 18 months, IT buyers can get more for the same outlay. But US tariffs may mean they end up paying a higher price. Delivering excellent customer experience is a tough job on regular days. Now add rising prices because of tariffs. Finally, with goods potentially priced higher thanks to the tariffs, some organisations may also look to offset costs in rather more creative ways than simply passing them onto their customers. In some instances, however ill-advised this may be, this could see IT and cyber security budgets taking a hit. “There is a risk around cyber security which is often viewed as a cost centre,” said Powers. “It is focused on value preservation and risk reduction, but it’s not necessarily value creation per se. 
So, there could be pushes to offset some of what organisations are having to deal with.” But the story doesn’t end here, she said, for there are other ways in which cyber security and tariffs are coupled together. “With a lot of the uncertainty that’s happening right now, there’s a very volatile market,” she said. “From a cyber security perspective, that could lead to incentives for individuals or groups or nation-states to look to exploit vulnerabilities or go after certain companies. “You may see that nations that were historically friendly [to the US] have different feelings now, so there could be an increase in exploitation. “On the data side, there could be an increase in potential espionage looking for trade secrets, intellectual property and things of that nature,” said Powers. “There are some Chinese manufacturers exploiting luxury brands and where their goods are being made, and what it takes to produce them.” If there’s a core message for security leaders to hold onto during this time of intense economic uncertainty and volatility, it would be not to allow the organisation to lose focus on the integrity of its supply chain arrangements. “Now is the time to be more vigilant, not only to hold the line, but actually to increase supply chain scrutiny from a cyber perspective, because there is so much uncertainty, change, volatility and, I think, anger associated with this,” said Powers.
  • WWW.COMPUTERWEEKLY.COM
    UK class action sets stage for Google showdown
BCFC - stock.adobe.com News UK class action sets stage for Google showdown Another class action has been filed against search engine giant Google for anti-competitive practices that negatively affect small businesses By Cliff Saran, Managing Editor Published: 17 Apr 2025 11:10

UK-based legal professor Or Brook has filed a class action against Google worth approximately £5bn in the UK Competition Appeal Tribunal (CAT). The class action, brought on behalf of hundreds of thousands of UK-based organisations that used Google's search advertising services, accuses Google of abusing its near-total dominance in the general search market to drive up prices.

This latest class action follows one filed by Nikki Stopford, co-founder of Consumer Voice, and legal firm Hausfeld & Co LLP, and appears to focus on Google's anti-competitive conduct. Stopford's case looks at the cost to consumers of the increased advertising costs that businesses using Google Search pay as a result of anti-competitive practices. In November last year, Google's attempt to throw out Stopford's case was dismissed, paving the way for the case to be heard at the CAT.

Along with Stopford's case, in January the Competition and Markets Authority (CMA) began an investigation seeking to determine whether Google has strategic market status in search and search advertising activities, and whether these services are delivering good outcomes for people and businesses in the UK.

The Brook case appears to be looking specifically at the cost to business arising from Google business practices that stipulate its Chrome browser and search engine are configured as the default options on Android devices, and from Google's payments to Apple to ensure Google Search is the default on the Safari browser. The class action also covers Google's search management platform, Search Ads 360 (SA360). Brook alleges that this offers better functionality and more features for Google's own advertising offering than for those of its competitors.

Damien Geradin, founding partner of Geradin Partners, the legal firm representing Brook, said: "This is the first claim of its kind in the UK that seeks redress for the harm caused specifically to businesses who have been forced to pay inflated prices for advertising space on Google pages."

In the claim, Brook argues that Google has been shutting out competition in the general search and search advertising markets. The claim argues that Google's conduct has prevented competitors in the general search market from distributing their own search engines, which has enabled Google to maintain its dominance, leading to restricted competition in general search. Brook contends that Google has ensured that its own search platform is the only viable means of advertising to the vast majority of consumers, and has thereby ensured its dominance in search advertising.

She said: "Today, UK businesses and organisations, big or small, have almost no choice but to use Google ads to advertise their products and services. Regulators around the world have described Google as a monopoly and securing a spot on Google's top pages is essential for visibility.

"Google has been leveraging its dominance in the general search and search advertising market to overcharge advertisers.
This class action is about holding Google accountable for its unlawful practices and seeking compensation on behalf of UK advertisers who have been overcharged."

On top of the class actions, Google is also being investigated by the CMA, which is looking at whether its Play Store requires app developers to sign up to unfair terms and conditions as a condition of distributing their apps.

Read more stories about Google's legal wrangles
Apple and Google app stores come under CMA scrutiny: The Competition and Markets Authority in the UK is looking at whether the Play Store and App Store support innovation and are pro-competition.
Google slams US government's proposal to split company up on anti-competitive grounds: The US Department of Justice has incurred the wrath of Google for suggesting a series of remedies, after the US court ruled back in August that Google had an illegal monopoly over the internet search market.
  • WWW.COMPUTERWEEKLY.COM
    Interview: Markus Schümmelfeder, CIO, Boehringer Ingelheim
Markus Schümmelfeder has spent more than a decade looking for ways to help biopharmaceutical giant Boehringer Ingelheim exploit digital and data. He joined the company in February 2014 as corporate vice-president in IT and became CIO in April 2018.

"It was a natural evolution," he says. "Over time, you see what can be done as a CIO and have an ambition to make things happen. This job opportunity came around and it was when digitisation began. I saw many possibilities arising that were not there before."

Schümmelfeder says the opportunity to become CIO was terrific timing: "It was a chance to bring technology into the company, to make more use of data, and evolve the IT organisation from being a service deliverer into a real enabler. My aim for all the years I've been with Boehringer is to integrate IT into the business community."

Now, as the company's 54,000 employees use more data than ever before across the value chain, including research, manufacturing, marketing and sales, Schümmelfeder's aim is being realised. He says professionals across the business understand technology is crucial to effective operational processes: "It's about bringing us close together to make magic happen."

Schümmelfeder says one of his key achievements since becoming CIO is leading the company on a data journey. His vision supported the company's progress along this pathway. "I went to the board and said, 'This is what we should do, what we want to do, what makes sense, and what we perceive will be necessary for the future'," he says. "We started that process roughly five years ago and everyone knows how important data is today."

Making the transition to a data-enabled organisation is far from straightforward. Rather than being focused on creating reports, Schümmelfeder says his vision aimed to show people across the organisation how they could exploit information assets effectively. One of the key tenets for success has been standardisation.

"This is a fundamental force, and the team has done good work here," he says. "Ten years ago, we had between 4,500 and 5,000 systems across the organisation. Today, we have below 1,000. So, we reduced our footprint by 80%, which is a great accomplishment."

Standardisation has allowed the IT team to deliver another part of Schümmelfeder's vision – a platform-based approach to digitisation. Rather than investing in point solutions to solve specific business challenges, the platform approach uses cloud-based services to help people "jump start topics" as the business need arises.

The crucial technological foundation for this shift to standardisation has been the cloud, particularly Amazon Web Services (AWS), Microsoft Azure and a range of consolidated enterprise services, such as Red Hat OpenShift, Kubernetes, Atlassian Jira and Confluence, Databricks, and Snowflake. Schümmelfeder says the result is a flexible, scalable IT resource across all business activities.

"You can create a cloud environment in minutes," he says. "You can have an automated test environment that is directly attached and ready to use. You can create APIs immediately on the platform. We want people to deliver solutions at a faster pace, rather than creating individual solutions again and again."

Boehringer recently announced the launch of its One Medicine Platform, powered by the Veeva Development Cloud. The unified platform combines data and processes, enabling Boehringer to streamline its product development. Schümmelfeder says the technology plays a crucial enabling role.
The One Medicine Platform is integrated with Boehringer's data ecosystem, Dataland, which helps employees make data-driven decisions that boost organisational performance. Dataland has been running since 2022. The ecosystem collates data from across the company and makes it available securely for professionals to run simulations and data analyses.

"In the research and development space for medicine, there was nothing like a solid enterprise platform," says Schümmelfeder, referring to his company's relationship with Veeva. "We had about 50, maybe even more, tools that were often not interconnected. If you wanted to replicate data from one service to another, you'd have to download the data, copy and paste, and so on. That approach is tedious."

The One Medicine Platform allows Boehringer to connect data across functions, optimise trial efficiency around its research sites, and accelerate the delivery of new medicines to treat currently incurable diseases. Schümmelfeder says the Veeva technology gives the business the edge it requires.

"We saw we were slower than our competitors in executing clinical trials. We thought we could be much better. We wanted to look for a new way of executing clinical trials, and we needed to discuss our processes and potentially redefine and change them based on the platform approach," he says. "We chose Veeva because it was the most capable technology to help us deliver the spirit of a platform. It's also an evolving technology with good future potential."

Schümmelfeder says the data platform he's pioneered is helping Boehringer explore emerging technologies. One key element is Apollo, a specialist approach to artificial intelligence (AI) that allows employees to select from 40 large language models (LLMs) to explore their use cases and exploit data safely. He says this large number of LLMs allows Boehringer employees to select the best model for a specific use case. Alongside mainstream models like Google Gemini and OpenAI's ChatGPT, the company uses niche models dedicated to research that can deliver more appropriate answers than general models.

Schümmelfeder says Boehringer does not develop models internally; the rapid pace of AI development makes it more sensible to dedicate IT resources to other areas. The company's staff can use approved models and tools to undertake data-led research in several key areas: "We have a toolbox staff can dip into when they realise an idea or use case."

He outlines three specific AI-enabled use cases: Genomic Lens, which generates new insights that enable scientists to discover new disease mechanisms in human DNA; the use of algorithms and historical data to identify the right populations for clinical trials quickly and effectively; and Smart Process Development, which applies machine learning and genetic algorithms to create productivity boosts in biopharmaceutical processes.

"My aim for all the years I've been with Boehringer is to integrate IT into the business community"
Markus Schümmelfeder, Boehringer Ingelheim

Another key area of research and development is assessing the potential power of quantum computing. Schümmelfeder suggests Boehringer has one of the strongest quantum teams in Europe. He recognises that other digital and business leaders might feel the company's commitment is ahead of the adoption curve.

"And I would say, 'Yes, you're right', but then you need to understand how this technology works. We are helping to make breakthroughs, to bring code to the industry and to discover how we will use quantum.
So, we have a strong team that brings a lot to the table to help this area evolve,” he says. “I’m convinced quantum computing will be a huge gamechanger for the pharma industry once the technology can be used and set into operations. That situation is why I believe you have to be involved in quantum early to understand how it works. You need to bring knowledge into the organisation and be part of making quantum work.” While Schümmelfeder acknowledges Boehringer isn’t pursuing true quantum research yet, the company has built relationships with other technology specialists, such as Google Research. He says these developments are the foundations for future success in key areas, such as understanding product toxicity: “It’s relatively early, but you can see the investment. I hope we can see the first real use cases by the end of this decade.” Schümmelfeder considers the type of data-enabled organisation he’d like to create during the next few years and suggests the good news is that the technological foundations for further transformation are now in place. “We don’t need a technology revolution, I think we’ve done that,” he says. “We’ve done our homework, and we’ve standardised and harmonised. The next stage is not about more standardisation, it’s more about looking specifically at where we need to be successful. That focus is on research and development, medicine, our end-customers and how to improve the lives of patients and animals. That work is at the core of what we want to do.” With the technology systems and services in place, Schümmelfeder says he’ll concentrate on ensuring the right culture exists to exploit digitisation. That focus will require a concerted effort to evolve the skills across the organisation. The aim here will be to ensure many people in all parts of the business have the right capabilities. “When you talk about data, you don’t need 10 people able to do things, you need thousands of people who can execute,” he says. “You need to bring this knowledge to the business. That means business and IT must integrate deeply to make things happen. The IT team has to go to the business community and ask big questions like, ‘What do you need? Tell me the one thing that can make you truly successful?’” Schümmelfeder says that finding the answers to these questions shouldn’t be straightforward. Sometimes, he expects the search to be uncomfortable. IT can’t sit back – the company’s 2,000 technology professionals must drive the identification of digital solutions to business problems. Line-of-business professionals must also feel comfortable and confident using emerging technologies and data. He says the company’s Data X Academy plays a crucial role. Boehringer worked with Capgemini to develop this in-house data science training academy. Data X Academy has already trained 4,000 people across IT and the business. Schümmelfeder hopes this number will reach 15,000 people during the next 24 months and allow data-savvy people across the organisation to work together to develop solutions to intractable challenges. “We want to ask the right questions on the business side and create lighthouse use cases in IT that show people what we can do,” he says. 
"We can drive change together with the business and create an impact for the organisation, our customers and patients."

Read more data and digital interviews with IT leaders
The importance of building a data foundation: We speak to Terren Peterson, Capital One's vice-president of engineering, about how data pipelines and platforms are essential for AI success.
Interview: James Fleming, CIO, Francis Crick Institute: Helping to cure cancer with computers puts digital leadership on another level – and the world-leading research institute is turning to data science and artificial intelligence to achieve its groundbreaking goals.
Interview: Wendy Redshaw, chief digital information officer, NatWest Retail Bank: The retail bank is moving at pace to introduce generative AI into key customer-facing services as part of a wider digital transformation across the organisation.
  • WWW.COMPUTERWEEKLY.COM
    CVE Foundation pledges continuity after Mitre funding cut
In the wake of the abrupt termination of the Mitre contract to run the CVE Programme, a group of vulnerability experts and members of Mitre's existing CVE Board have launched a new non-profit with the intention of safeguarding the programme's future.

The CVE Foundation's founders want to ensure the continuity, viability and stability of the 25-year-old CVE Programme, which up to today (16 April) has been operated as a US government-funded initiative, with oversight and management provided by Mitre under contract.

Even setting aside the impact of Mitre's loss of the CVE Programme contract – one of a number of Mitre-held government contracts axed in recent weeks, and one that has already led to layoffs at the DC-area contractor – the CVE Board members say they already had longstanding concerns about the sustainability and neutrality of such a globally relied-upon resource being tied to a single government. Their concerns became suddenly heightened after a letter from Mitre's Yosry Barsoum warning that the CVE Programme was under threat circulated this week.

"CVE, as a cornerstone of the global cyber security ecosystem, is too important to be vulnerable itself," said Kent Landfield, an officer of the foundation. "Cyber security professionals around the globe rely on CVE identifiers and data as part of their daily work – from security tools and advisories to threat intelligence and response. Without CVE, defenders are at a massive disadvantage against global cyber threats."

The founders said that while they hoped this day would never come, they have spent the past year working diligently in the background to create a strategy to transition the CVE system into a dedicated, independent non-profit. Unlike Mitre – originally a computer research spin-out of MIT in Boston that now operates multiple R&D efforts – the CVE Foundation will be solely dedicated to delivering high-quality vulnerability identification and maintaining the integrity and availability of the existing CVE Programme database on behalf of security professionals worldwide.

The foundation says its official launch marks a "major step toward eliminating a single point of failure in the vulnerability management ecosystems" and safeguarding the programme's reputation as a trusted, community-driven resource. "For the international cyber security community, this move represents an opportunity to establish governance that reflects the global nature of today's threat landscape," the founders said.

Although at the time of writing the CVE Programme remains up and running, with new commits made to its GitHub repository in the past hours, reaction to the contract's cancellation has been swift and scathing.

"With 25 years of consistent public funding, the CVE framework is embedded into security programmes, vendor feeds, and risk assessment workflows," said Tim Grieveson, CSO and executive vice-president at ThingsRecon, an attack surface discovery specialist. "Without it, we risk breaking the common language that keeps security teams aligned to identify and address vulnerabilities effectively.

"Delays in sharing vulnerability data would increase response times and give threat actors the upper hand," he added.
"With regulations like SEC, NIS2, and Dora demanding real-time risk visibility, a lack of understanding of risk exposure and any delayed response could seriously hinder the ability to react effectively."

To maintain existing levels of resilience in the face of the shutdown, it is important for security leaders to ensure organisations have a clear understanding of their attack surface and their suppliers, said Grieveson. Added to this, collaboration and information sharing in the security community will become even more essential than it already is.

Read more on this story
Mitre, the operator of the world-renowned CVE repository, has warned of significant impacts to global cyber security standards, and increased risk from threat actors, as it emerges its US government contract will lapse imminently.

Chris Burton, head of professional services at Yorkshire-based penetration testing and security services provider Pentest People, said he hoped cooler heads would prevail. "It's completely understandable there are concerns about the government pulling funding for the Mitre CVE Programme; it's a troubling development for the security industry," he said.

"If the issue is purely financial, crowdfunding could offer a viable path forward, rallying public support for a project many believe in," added Burton. "If it's operational, there may be an opportunity for a dedicated community board to step in and lead.

"Either way, this isn't the end, it's a chance to rethink and reimagine. Let's not panic just yet; there are still options on the table, as a global community. I think we should see how this unfolds."

At a more practical level, Grieveson shared some additional steps for security teams to take right now (a minimal sketch of the first two appears after this list):
Map internal tooling dependencies on CVE feeds and APIs to know what breaks should the database go dark;
Identify alternative sources to maintain vulnerability intelligence, focusing on context, business impact and proximity to ensure comprehensive coverage of threats, whether they be current, emerging or historic;
Accelerate cross-industry intelligence sharing to proactively leverage tactics, tools and threat actor data.
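As one way of putting Grieveson's first two suggestions into practice, the sketch below records which internal tools depend on which CVE data sources, probes those sources, and flags any tool left with no live feed so it can fall back to a local cache or an alternative source. The endpoint URLs, tool names and cache location are illustrative assumptions and should be checked against current documentation rather than taken as given.

```python
# Illustrative sketch: map internal tooling to the CVE-derived feeds it consumes,
# probe each feed, and alert when a tool has no live source left. URLs, tool
# names and the cache path are assumptions for the sketch.
import pathlib
import urllib.request

# Hypothetical map of internal tooling to the CVE data sources it depends on.
TOOL_DEPENDENCIES = {
    "vuln-scanner": ["primary"],
    "patch-dashboard": ["primary", "fallback"],
}

FEEDS = {
    # Assumed CVE Services record and NVD API query; verify before use.
    "primary": "https://cveawg.mitre.org/api/cve/CVE-2021-44228",
    "fallback": "https://services.nvd.nist.gov/rest/json/cves/2.0?cveId=CVE-2021-44228",
}

CACHE = pathlib.Path("cve_cache.json")  # local copy maintained by another job


def feed_available(url: str, timeout: int = 10) -> bool:
    """Return True if the feed responds with HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.status == 200
    except OSError:
        return False


def check_dependencies() -> None:
    status = {name: feed_available(url) for name, url in FEEDS.items()}
    for tool, feeds in TOOL_DEPENDENCIES.items():
        live = [feed for feed in feeds if status.get(feed)]
        if not live:
            print(f"ALERT: {tool} has no live CVE source; switch to cache {CACHE}")
        else:
            print(f"{tool}: OK via {live}")


if __name__ == "__main__":
    check_dependencies()
```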