Computer Weekly
Computer Weekly is the leading technology magazine and website for IT professionals in the UK, Europe and Asia-Pacific.
Recent Updates
  • Why the UK must lead on data to unlock AI's full potential
    www.computerweekly.com
    Opinion | Why the UK must lead on data to unlock AI's full potential
    Unless the data silos in government are addressed, the UK risks falling short of the Action Plan's ambitious goals to lead in AI adoption.
    By Elena Simperl and Neil Majithia. Published: 10 Feb 2025.

    The UK government holds some of the world's most valuable datasets, including official statistics, cultural heritage records and NHS health data. These datasets have powered scientific breakthroughs, business innovation and improvements in public services.

    With the publication of the much-anticipated AI Opportunities Action Plan, the transformative potential of government data for AI has never been more apparent. However, recent research by the Open Data Institute (ODI) reveals critical shortcomings in how government datasets are prepared and published for AI.

    Government data and AI's reliability challenge

    Foundation models (FMs), such as ChatGPT and Gemini, are increasingly used to provide information on public policies and services. Yet the ODI's research highlights that while these models scrape government data repositories, they often fail to deliver accurate outputs based on them. Instead, models draw on secondary or unreliable sources, such as social media posts or opinion articles, or simply fabricate answers.

    The consequences are significant. Citizens using AI tools to understand benefit entitlements, for example, may receive misleading or incomplete advice, undermining public trust in both AI and government services. This is particularly concerning given the UK government's emphasis on improving public service delivery through AI innovation.

    Data deficits in the AI ecosystem

    The AI Opportunities Action Plan, authored by Matt Clifford, rightly emphasises the role of the National Data Library (NDL) as a means to unlock government data for AI innovators. Yet the current state of government datasets presents significant barriers to achieving this vision.

    ODI analysis of CommonCrawl, a key dataset repository for AI models, found that it had scraped 13,556 pages from data.gov.uk as of April 2024. However, these pages rarely contributed to accurate model outputs: across 195 test queries, models correctly referenced data.gov.uk statistics in only five cases.

    This issue arises because government data is often not published in AI-ready formats. While technologies such as DCAT are used to make datasets discoverable, scraping infrastructure like CommonCrawl does not fully support them. As a result, AI models rely on less authoritative sources, perpetuating misinformation. The ODI's findings suggest that the UK's ambition to lead in AI innovation could falter unless this disconnect is addressed.

    Evidence from ODI experiments

    The ODI conducted two experiments to examine how government data supports AI models and, in turn, how well AI models are equipped to support residents of the UK.

    The first experiment analysed how important UK government websites are for AI. Researchers conducted an ablation study, using a machine unlearning technique to remove gov.uk websites from a selection of FMs' training data. The results revealed a 42.6% increase in model inaccuracy when deprived of gov.uk content, leading to fundamental errors. For example, one test found that models without access to government websites misinformed users about their eligibility for Child Benefit.

    In contrast, the second experiment found that government datasets are currently unknown to AI models.
    This experiment, a study of the models' ability to recall specific statistics from data.gov.uk, found that out of 195 queries, models accurately referenced official government statistics releases just five times.

    The conclusion from these experiments was that while government websites are vital for AI accuracy, government statistics datasets are underutilised despite their enormous value and potential in delivering public services. If we want to realise the potential of AI to deliver benefits such as improving care quality, safety and cost-effectiveness in the NHS, the government must prioritise improving the quality, accessibility and usability of its data.

    The path forward

    The adoption of FAIR principles - ensuring data is findable, accessible, interoperable and reusable - has long been championed by data.gov.uk and remains a strong foundation. Emerging tools like Croissant, a machine-readable metadata format designed for machine learning, can further enhance discoverability and integration into developers' workflows. If dataset descriptions are improved, they will be more usable for human and machine users alike; a brief illustration of such metadata follows.
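    To make the idea concrete, here is a minimal sketch of a Croissant-style dataset description. It follows the publicly documented Croissant vocabulary (JSON-LD over schema.org), but the dataset name, URLs and file details are hypothetical, invented purely for illustration:

```python
import json

# A minimal, illustrative Croissant-style metadata record (JSON-LD).
# The dataset name, URLs and file details are hypothetical; the field
# names follow the publicly documented Croissant vocabulary.
croissant_metadata = {
    "@context": {
        "@vocab": "https://schema.org/",
        "cr": "http://mlcommons.org/croissant/",
    },
    "@type": "Dataset",
    "name": "example-official-statistics",          # hypothetical
    "description": "Quarterly statistics release, published in an AI-ready form.",
    "url": "https://data.gov.uk/dataset/example-official-statistics",  # hypothetical
    "license": "https://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/",
    "distribution": [
        {
            "@type": "cr:FileObject",
            "name": "statistics.csv",
            "contentUrl": "https://data.gov.uk/example/statistics.csv",  # hypothetical
            "encodingFormat": "text/csv",
        }
    ],
}

# Serialising the record makes the dataset far easier for crawlers and
# machine learning pipelines to discover and interpret than a bare CSV link.
print(json.dumps(croissant_metadata, indent=2))
```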
    The government must also incentivise responsible data sharing to ensure equitable access to high-quality data. This could include tax incentives for private-sector data sharing, mandates for publicly funded projects to make their data open where appropriate, or even a levy on AI-generated content to fund trusted information sources.

    We must use privacy-enhancing technologies such as Solid, which offer individuals direct access to and control of their data - for example, their wellbeing and health data - to enable access to sensitive data without compromising personal privacy, commercial sensitivity or national security. This could provide important benefits, such as using machine learning to identify personal risk factors for health conditions, enabling preventative action. Data trusts can be built on top of Solid to aggregate data, and the aggregated data can be collated into datasets with Croissant metadata to prepare it for research use.

    Aligning with the Action Plan

    The AI Opportunities Action Plan's emphasis on high-quality data and strong governance aligns with the ODI's longstanding commitment to socio-technical solutions integrating advanced data infrastructure with public trust. To support the development of interoperable systems, AI-ready datasets and privacy-enhancing technologies, the ODI is advocating for a ten-year National Data Infrastructure Roadmap. This roadmap would support the Action Plan's focus on driving AI innovation by investing in long-term data infrastructure.

    Read more about the AI Opportunities Action Plan:
    - Major obstacles facing Labour's AI opportunities action plan: Skills, data held in legacy tech and a lack of leadership are among the areas discussed during a recent Public Accounts Committee session.
    - Disjointed industrial strategy a barrier to UK scaleup success: The House of Lords Communications and Digital Committee calls on Labour to join up piecemeal initiatives and cut bureaucracy.

    However, the Action Plan leaves several gaps unaddressed. It does not fully detail how the National Data Library will incorporate user input or engage diverse stakeholders to ensure it delivers public benefit. There is limited detail about formal standards for data quality and provenance, which are critical for ensuring AI-ready datasets. Furthermore, while the Action Plan highlights the need to support AI innovators, it could more explicitly foster data-centric startups specialising in data preparation and governance tools. We hope these gaps are addressed as the government rolls out the recommendations.

    International leadership through collaboration

    The ODI's research highlights the global importance of data-centric approaches to AI governance. However, few nations prioritise this focus, risking undermining the broader adoption of open and shared data practices. Without robust data-centric governance, the foundations of transparent and accountable AI systems could weaken.

    The ODI has launched the Global AI Policy Data Observatory to address this. The initiative provides practical resources to support policymakers in developing data-centric AI governance. By offering insights into machine-readable metadata, toolkits for responsible data use and best practices for transparency, the Observatory aims to strengthen the global evidence base for data-centric AI.

    Realising the UK's AI potential

    Access to high-quality government data is essential for realising AI's potential in public service delivery. By improving data publication practices and investing in long-term infrastructure, the UK can position itself as a global leader in data provision for AI. This leadership will unlock transformative economic and social benefits, aligning with the ambitions of the AI Opportunities Action Plan.

    The full report is available to download at ODI Report: The UK Government as a Data Provider for AI.

    Elena Simperl is director of research at the ODI. Neil Majithia is a researcher at the ODI.
  • Government opens up bidding for AI growth zones
    www.computerweekly.com
    News | Government opens up bidding for AI growth zones
    As part of its AI opportunities action plan, the government is encouraging local authorities to put in bids for AI growth zones.
    By Cliff Saran, Managing Editor. Published: 10 Feb 2025.

    The government is inviting local and regional authorities across the country to bid to become one of the UK's artificial intelligence (AI) growth zones.

    The plan to develop AI growth zones is part of Labour's 50-point AI opportunities action plan policy, and is focused on speeding up planning permission for AI-led initiatives.

    At the end of January, science minister Patrick Vallance announced the Culham AI growth zone and his ambition for the so-called Oxford-Cambridge arc - which covers Oxford, Milton Keynes and Cambridge - to become a region for tech innovation. "We are already putting billions behind the Oxford-Cambridge region through East-West Rail, the Culham AI growth zone, and our record-breaking backing for UK R&D," he said.

    Following Culham, the government now plans to focus on developing AI growth zones in deindustrialised areas of the country. It has encouraged local and regional authorities to submit their proposals, including sites with existing access to power or that would be suitable for establishing major energy infrastructure.

    Secretary of state for science, innovation and technology Peter Kyle said: "These new AI growth zones will deliver untold opportunities - sparking new jobs, fresh investment and ensuring every corner of the country has a real stake in our AI-powered future.

    "We're leaving no stone unturned in how we can harness expertise from all over the UK to deliver new opportunities, fresh growth, better public services and cement our position as an AI pioneer, and that's the message I will be sending to international partners and AI companies at the AI Action Summit."

    The government's goal is to encourage energy companies and datacentre developers to provide the infrastructure needed to drive forward the roll-out of AI growth zones. It has committed to speeding up planning permission to build out AI infrastructure quickly, which means building out datacentre capacity and providing the energy that the power-hungry servers in these datacentres need to run AI workloads. It said it will be working with network operators to scale each zone to 500MW or more - enough to power roughly two million homes.

    The Department for Science, Innovation and Technology (DSIT) plans to assess proposals from energy providers and datacentre firms to help inform the final selection of sites and broader policy decisions, which are expected later this year.

    The government has already received interest in developing AI growth zones in Scotland, Wales, and north-east and north-west England. Those involved in bidding, and in encouraging bids, for AI growth zone status see opportunities to inject new investment and attract skills.

    Scotland Office minister Kirsty McNeill said: "Scotland is already at the centre of these plans, with our world-leading universities at the forefront of AI development and our industrial heritage providing a range of possible sites. I would encourage our local authorities to explore becoming an AI growth zone, which will help attract further investment."

    Tees Valley mayor Ben Houchen hopes to attract people with high-tech skills to work in the region. "My job, above everything, is to bring good, well-paid, long-term jobs to local people. We have everything we need to host an AI growth zone in our region.
    "We have the land, we have the power, and we have shown in our efforts at Teesworks how we can get huge projects moving forward at pace," he said.

    One of the ongoing challenges facing the UK as it pivots to a high-tech-led industrial strategy is that while there are plenty of great ideas and startups with innovative products, scaling these commercially is a major barrier. Earlier this month, witness statements published in the latest edition of the AI and creative technology scaleup report for the House of Lords Communications and Digital Committee described the UK as a "terrible place" to scale up startup businesses.

    Read more about Labour's AI strategy:
    - Major obstacles facing Labour's AI opportunities action plan: Skills, data held in legacy tech and a lack of leadership are among the areas discussed during a recent Public Accounts Committee session.
    - Disjointed industrial strategy a barrier to UK scaleup success: The House of Lords Communications and Digital Committee calls on Labour to join up piecemeal initiatives and cut bureaucracy.
  • Tech companies brace after UK demands back door access to Apple cloud
    www.computerweekly.com
    Technology companies are bracing themselves for more attacks on encryption after the UK government issued an order requiring Apple to create a back door allowing security officials to access content uploaded to the cloud by any Apple phone or computer user worldwide.

    The government has used powers under UK surveillance laws to issue a secret order requiring Apple to provide the UK with the ability to access all encrypted material stored by any Apple user on its cloud servers anywhere in the world, the Washington Post revealed.

    The move will put pressure on Apple to withdraw encrypted cloud storage from users in the UK, leaving British consumers without the ability to store files, documents or financial information in a way that gives them strong protection from hacking attacks or accidental breaches by cloud providers.

    People in the technology industry told Computer Weekly that the UK has shown antipathy towards encryption, and that it would not be surprising if more technology companies were hit with similar demands from UK officials seeking the ability to access users' encrypted data. WhatsApp and Facebook Messenger are potential targets.

    The Home Secretary served Apple with a Technical Capability Notice in January, ordering it to provide the government with back-door access to material stored by Apple users on its encrypted cloud service, the Washington Post revealed. The notice, issued under the Investigatory Powers Act 2016, makes it a criminal offence for a technology company to reveal the existence of any technical capability notice served against it. The Investigatory Powers Act gives the government powers, under Section 253, part 5(c), to issue Technical Capability Notices requiring tech companies to remove or modify the electronic protection they apply to communications data.

    A Home Office spokesperson said: "We do not comment on operational matters, including for example confirming or denying the existence of any such notices."

    Matthew Hodgson, CEO of Element, a secure communications platform used by governments, said that the disclosure that a Technical Capability Notice had been served was unprecedented. "This is the first time the existence of a Technical Capability Notice under the Investigatory Powers Act appears to have leaked, and represents a terrifying escalation in the fight to protect users from blanket surveillance," he said.

    In evidence to Parliament in March, addressing the government's plans to extend the Investigatory Powers Act 2016, Apple warned that powers in the Act were extremely broad and "pose a significant risk to the global vitality of important security technologies".

    End-to-end encryption is one of the most important security features available to protect information stored in the cloud, ensuring that only users, rather than cloud storage companies, can access their personal data and communications, the company said. It provides an essential layer of additional security because it ensures that malicious actors cannot obtain access to users' data even if they are able to breach a cloud service provider's datacentre. The technology shields citizens from unlawful surveillance, identity theft, fraud and data breaches, and serves as an invaluable protection for journalists, human rights activists and diplomats who may be targeted by malicious actors, the company said.

    Apple raised concerns that the IPA purports to apply outside the borders of the UK, permitting the UK to claim the right to impose secret requirements on providers located in other countries and that apply
    to their users globally. "These provisions could be used to force a company like Apple, that would never build a back door into its products, to publicly withdraw critical security features from the UK market, depriving UK users of these protections," it wrote.

    Technology companies are concerned that providing back-door access to encrypted storage would make it impossible to comply with data protection and compliance regulations, including GDPR, placing further pressure on them to withdraw services from the UK.

    The UK's Five Eyes allies have taken a markedly different view of encryption. In an advisory last year, the US, Canada, Australia and New Zealand recommended widespread use of encryption, including end-to-end encryption, to mitigate threats from China, which infiltrated US telecoms networks in the Salt Typhoon attack.

    The UK, which notably did not add its name to the Salt Typhoon advisory, has fought a long-running battle with technology companies over encryption. Last year, the National Crime Agency singled out Meta for criticism over its plans to introduce end-to-end encryption on its Facebook Messenger and Instagram services. And in 2024, the government failed to ease industry concerns that the "spy clause" in the Online Safety Bill, which aims to crack down on child abuse and other harmful online content, would fundamentally weaken end-to-end encrypted services. Claims by a junior minister to the House of Lords that there is no intention by the government to weaken the encryption technology used by platforms did little to reassure tech companies.

    Jurgita Miseviciute, head of public policy at Proton, an encrypted communications provider, said that the move against Apple would create a dangerous precedent. "Backdoors to encryption that only let the good guys in are impossible. Regardless of intent, compromising encryption creates vulnerabilities that are sure to be exploited not just by authorities beyond the UK, but by malicious actors as well," she said. "Removing access to end-to-end encryption in the UK for people's files would be a huge step backwards that would create a two-tier system, erode trust, and expose British users to surveillance and cyber threats," she added.

    Matthew Hodgson, CEO of Element, said that the compromise of the US telecoms network by Salt Typhoon showed that surveillance back doors were a catastrophically flawed idea. "Apple should withdraw from the UK rather than comply with this order, and make it clear that becoming complicit in a surveillance state is a line they will not cross," he said.

    Robin Wilton, senior director for the Internet Society, a global non-profit, said that it was beyond disappointing that the UK government was using the Investigatory Powers Act to break end-to-end encryption for Apple's cloud service. "It is stunning that, just days after the UK's National Audit Office released a report that the cyber threat to the UK government is severe, the UK government would launch an attempt to weaken the security and privacy of a service that its citizens, including government employees, rely on," he added.
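    To illustrate the end-to-end principle the article describes - the key lives only on the user's device, so the provider stores nothing it can read - here is a minimal sketch using the open source Python cryptography library's Fernet recipe. The upload call is a hypothetical stand-in, not a real storage API:

```python
from cryptography.fernet import Fernet

# The key is generated and kept on the user's device; the cloud provider
# never sees it, so it can store only opaque ciphertext.
key = Fernet.generate_key()
cipher = Fernet(key)

document = b"financial records the user wants to back up"
ciphertext = cipher.encrypt(document)  # encrypted client-side, before upload

# upload_to_cloud() is a hypothetical stand-in for any storage API call.
# A breach of the provider's datacentre would yield only this ciphertext.
# upload_to_cloud(ciphertext)

# Only a holder of the key can recover the plaintext.
assert cipher.decrypt(ciphertext) == document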
  • Self-healing networks: The next evolution in network management
    www.computerweekly.com
    While artificial intelligence (AI) poses many risks for networks of all sizes, it also highlights the pitfalls of traditional network management in addressing real-time demands and unanticipated challenges. Static configurations, manual interventions and reactive troubleshooting have become liabilities in an era of rapid technological advancement.

    Self-healing networks represent a seismic shift in the paradigm, offering a solution that promises to boost security and make teams more efficient. But how exactly do they make this possible? In this article, we dissect the promise of uptime and reliability, and discuss how self-healing networks redefine the role of IT operations in digital transformation.

    Usually, network issues entail downtime until human intervention determines the cause and internal processes are altered, which results in lost time and money. Self-healing networks, on the other hand, are engineered to detect, diagnose and resolve issues autonomously, often before they are even apparent to users or administrators. This proactive approach is underpinned by large language models (LLMs) and proactive systems that gather analytics, analyse them and act on them, empowering the network to anticipate disruptions, execute corrective actions and continuously optimise its performance.

    The best way to describe self-healing networks is to view them as combat medics: if they get injured, they are capable of patching themselves up. A self-healing network is composed of distributed communication protocols. But what makes it able to self-diagnose without affecting uptime? It hinges on the following (a minimal sketch of the loop follows the list):

    - Real-time monitoring: Constant surveillance of traffic patterns, resource utilisation and device health. This lets monitoring systems determine the desired state of things, enabling them to spring into action the moment there is a deviation from usual metrics.
    - Predictive analytics: Using historical data and machine learning (ML) to forecast potential failures and pre-emptively address them. The self-healing network can adjust itself during times of higher traffic, or if an anomaly has been detected in similar networks elsewhere.
    - Automated recovery: These networks can take steps autonomously, such as dynamic rerouting, load balancing and isolating compromised nodes, to maintain service integrity.
    - Continuous learning: Once an incident occurs, self-healing networks analyse it and store it in a database. Any lessons are added to protocols, reinforcing feedback loops that refine the system's response and enable it to adapt to new threats and conditions.
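    As a minimal sketch of the detect-and-recover loop described by the first and third items above: a simple agent learns a rolling baseline for one metric and triggers a corrective action on large deviations. The telemetry source, the three-sigma threshold and the rerouting action are all illustrative stand-ins; real systems poll SNMP, streaming telemetry or flow records and act through a controller API:

```python
import random
import statistics
import time

def read_latency_ms() -> float:
    # Hypothetical telemetry source; stands in for SNMP polling,
    # streaming telemetry or flow records in a real deployment.
    return random.gauss(20.0, 2.0)

def reroute_traffic() -> None:
    # Hypothetical corrective action; in practice this would push a new
    # route or policy through the network controller's API.
    print("anomaly detected - rerouting traffic")

baseline: list[float] = []

for _ in range(300):  # bounded loop for the sketch; a real agent runs forever
    sample = read_latency_ms()
    if len(baseline) > 30:
        mean = statistics.fmean(baseline)
        stdev = statistics.pstdev(baseline)
        # A large deviation from the learned baseline is treated as a fault
        # and triggers recovery without waiting for human intervention.
        if stdev and abs(sample - mean) > 3 * stdev:
            reroute_traffic()
    baseline = (baseline + [sample])[-1000:]  # rolling window of recent samples
    time.sleep(0.01)
```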
    It is clear that self-healing networks suffer from the same issues as other new technology - price and complexity. Nevertheless, organisations are opting for them for several reasons.

    Always-on infrastructure

    In industries where downtime translates directly to lost revenue or compromised safety, self-healing networks are a game-changer. With automated recovery processes, they eliminate the delays associated with human intervention. Think of HIPAA server hosting environments, for example, which can benefit from these capabilities by mitigating risks of overloading and ensuring seamless application delivery. Otherwise, patient data leaks and other issues might be left unmitigated.

    Strengthened security posture

    Whether it is due to the proliferation of AI or the decentralisation of hacking collectives, network security is no longer a static discipline. Threats evolve dynamically, often exploiting fleeting vulnerabilities. Self-healing networks enhance security by detecting anomalies, isolating potential breaches and patching vulnerabilities on the fly. This is particularly useful for Wi-Fi security, which is often the target of attacks. Instead of being left to its own devices, any Wi-Fi network becomes significantly more robust with autonomous monitoring and response systems.

    Operational efficiency and cost savings

    Traditional network management is resource-intensive, requiring constant attention from skilled personnel, and incidents result in downtime and in teams rectifying the situation rather than improving security. Because they offload routine tasks to automated systems, self-healing networks free up IT teams to focus on strategic initiatives. The cost savings in terms of reduced downtime, minimised hardware failures and streamlined operations are substantial: according to some estimates, this particular application of AI can reduce costs by up to 40%, an amount that is bound to increase with scale.

    While every organisation or service provider can benefit from self-healing networks, there are three main applications for this innovation.

    Large-scale enterprises

    For multinational corporations, managing a sprawling network across continents is a Herculean task. Self-healing networks simplify this complexity by ensuring uniform policies, consistent performance and real-time adaptability. When integrating resource-heavy solutions like high-definition camera systems, these networks can dynamically allocate bandwidth and storage, maintaining operational efficiency without compromising performance. Likewise, if a particular part of the network is under attack, the AI model in charge of decision-making can pull the plug on non-essential parts of the system until the issue is resolved.

    Smart cities and IoT ecosystems

    The rise of smart cities has introduced unprecedented levels of connectivity, for better or worse. Traffic management systems, environmental sensors and public safety networks all depend on uninterrupted communication for the city to function normally. Self-healing networks ensure that disruptions are localised and resolved without cascading failures, allowing cities to operate smoothly even under peak load conditions. Nevertheless, certain applications still require additional security. What if someone hacks into a smart city network and gains access to a residential camera system? If it can be done to water treatment facilities, less essential systems will also be prone to breaches.

    Healthcare systems

    In 2024, there were more than 600 reported attacks on healthcare companies in the United States alone. This is no surprise, given the healthcare sector's reliance on digital networks for patient records, diagnostics and insurance claims. At the same time, the proliferation of telemedicine makes reliability paramount. In this context, self-healing networks guarantee uninterrupted access to critical systems, safeguarding patient outcomes and reducing the administrative burden on IT departments.

    Artificial intelligence

    AI and machine learning are foundational to the adaptability of self-healing networks. Algorithms analyse terabytes of data to predict failures, identify inefficiencies and recommend or execute optimal solutions in real time. Take a small e-commerce site as an example.
    If its self-healing network has more than 10 years of data indicating that attacks spike on Christmas Eve, the network can automatically adjust to anticipate breach attempts.

    Software-defined networking (SDN)

    SDN separates the network's control plane from its data plane, enabling centralised management. It is particularly valuable for dynamic resource allocation, as it can automatically adjust bandwidth, reroute traffic and scale resources based on demand. This centralised control improves network visibility, enhances security by enforcing policies in real time, and streamlines operations through automation.

    Edge computing and decentralisation

    Self-healing networks enhance the efficacy of local processing by extending intelligence to the edge, enabling real-time monitoring, detection and automatic resolution of network issues without relying on centralised systems. This localised decision-making reduces latency, minimises downtime and ensures continuous operation. In industrial automation, where machinery and sensors must operate with precise timing, any network disruption can halt production and lead to costly downtime; self-healing networks can quickly identify faults, reroute traffic or isolate malfunctioning components to maintain smooth operations. Similarly, in remote surveillance systems, uninterrupted connectivity is critical for security, and edge-based self-healing capabilities ensure continuous video streaming and rapid fault correction, preventing gaps in monitoring.

    Automation frameworks

    Automation underpins the efficiency of self-healing networks. From orchestrating recovery processes to deploying software updates, automation reduces the margin for error and accelerates response times. What is truly interesting, however, is that the network itself can become part of a wider automation workflow. In the beginning, this can be something basic, such as increasing allocated bandwidth at certain times. Later on, a more advanced application of a self-healing network can have the underlying AI mitigate breaches, generate reports and notify team members via email or Slack, as the sketch below illustrates.
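    A minimal sketch of that kind of notification hook, using Slack's documented incoming-webhook convention (an HTTP POST of a JSON body with a "text" field). The webhook URL is a placeholder obtained by creating an incoming webhook in a workspace, and the event string is illustrative:

```python
import json
import urllib.request

# Placeholder URL: a real value comes from creating an incoming webhook
# in the Slack workspace. Incoming webhooks accept a JSON body with "text".
WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

def notify(event: str) -> None:
    # Post a short human-readable message describing the automated action.
    payload = json.dumps({"text": f"self-healing network: {event}"}).encode()
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # fire-and-forget notification

# e.g. called from the recovery loop after a corrective action:
# notify("isolated node 10.0.3.7 after repeated checksum failures")
```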
    Deploying self-healing networks requires significant investment in both infrastructure and talent, and organisations must also navigate the complexities of integrating new technologies with legacy systems. Despite their potential, self-healing networks also demand a level of trust that many organisations are hesitant to extend. This means small businesses will be the last to feel the benefits, and concerns about over-reliance on automation and potential failure scenarios remain barriers. Where do we go from here? It is on organisations themselves to weigh the benefits against the apparent downsides, and ultimately to find the right balance.

    From ensuring reliable Wi-Fi security in enterprise environments to optimising server hosting performance under heavy loads, self-healing networks are subtly transforming every aspect of connectivity. Their integration into critical systems, such as urban camera surveillance networks, underscores their growing indispensability. The promise of self-healing networks lies not just in their technical sophistication, but in their ability to redefine network management paradigms. If we properly apply and maintain these networks, we can achieve a level of resilience and agility that was once thought impossible.

    Read more about network security management:
    - A comprehensive and scalable network security management plan is more important than ever in the face of ever-rising threats and attacks orchestrated by bad actors.
    - BT expands its managed software-defined wide area network solution with new security service edge capabilities to help businesses transition to a secure access service edge model.
    - Infoblox CEO Scott Harrell discusses the company's new Universal DDI service, designed to address the growing challenges of managing network security in hybrid IT environments.
  • Ransomware payment value fell over 30% in 2024
    www.computerweekly.com
    The total value of payments made to cyber criminal ransomware gangs fell dramatically in the second half of 2024, and according to statistics released this week by Chainalysis, a supplier of blockchain and crypto services, less than half of the victims of recorded incidents even made a payment.

    Chainalysis found that over 2024 as a whole, ransomware gangs collectively made about $813.6m (£652.7m), down from 2023's $1.25bn. Although payments were up by 2.4% in the first half of the year, they dropped by 37.5% in the second.

    Its analysts suggested that both a growing number of law enforcement actions and the effects of international cooperation on ransomware were likely important factors in the fall. Additionally, they said, more victims seem to be refusing to pay. However, wrote the report's authors, this does not mean that cyber criminal operations are shutting up shop.

    "In response, many attackers shifted tactics, with new ransomware strains emerging from rebranded, leaked or purchased code, reflecting a more adaptive and agile threat environment," they said. "Ransomware operations have also become faster, with negotiations often beginning within hours of data exfiltration."

    Read more about ransomware:
    - A ban on ransomware payments by UK government departments will be extended to cover organisations such as local councils, schools and the NHS, should new government proposals move forward.
    - NCA-led Operation Destabilise disrupts Russian crime networks that funded the drugs and firearms trade in the UK, helped Russian oligarchs duck sanctions, and laundered money stolen from the NHS and others by ransomware gangs.
    - An individual associated with the LockBit ransomware gang has broken cover to tease details of a new phase of the cyber criminal operation's activity, which they claim is set to begin in February 2025.

    Coveware senior director of incident response Lizzie Cookson, who shared insight with the Chainalysis team for the report, said the market had never really recovered following the downfall of the LockBit and ALPHV/BlackCat gangs. "We saw a rise in lone actors, but we did not see any group or groups swiftly absorb their market share, as we had seen happen after prior high-profile takedowns and closures," said Cookson. "The current ransomware ecosystem is infused with a lot of newcomers who tend to focus efforts on the small- to mid-size markets, which in turn are associated with more modest ransom demands."

    Improved cyber security hygiene and resiliency may also be playing a role. The increased profile of ransomware attacks in daily discourse means organisations are investing more, and better, in defensive countermeasures, and hence find themselves better able to resist cyber criminal demands, negotiate to reduce the final payments, or explore other options, such as ignoring the gangs and restoring from backups when they get hit.

    Christian Geyer, founder and CEO at Actfore, a Washington DC-area cyber forensics specialist, said: "Organisations have increasingly implemented comprehensive data backup solutions, so the business can rapidly recover their systems through a wipe-and-restore process.

    "Many are becoming more tech-driven when it comes to incident response services, enabling them to identify the breached data much faster," he told Computer Weekly. "Digital forensics is not only becoming more advanced and precise, but data mining services and incident response are evolving to be more efficient and proactive.
    "Technology is allowing organisations to better understand the contents of the stolen data before proceeding down the road of ransom payment."

    Geyer also said victims may be resisting demands out of concern over the ethical and legal ramifications of sending large ransomware payments to unknown, unidentified actors. "For instance, if the threat actor is a foreign nation-state-sponsored terrorist group, then it could be seen as illegal to be paying money to those adversaries," he said. "The playing field becomes more level when you have more data to make decisions about whether to pay or not."

    Chainalysis's insight into how cyber criminals exploit the world of crypto in their attacks may also explain some of the changes. The team said they observed significant changes in how ransomware gangs off-ramp their funds, with a significant decline in the use of so-called mixers in 2024 - likely testament to the impact of sanctions and police action. A far higher proportion of ransomware funds is now flowing through centralised exchanges and personal wallets, while cross-chain bridges are replacing mixers as a means of obscuring where the money is heading.

    The use of personal crypto wallets is particularly interesting, said Chainalysis, and likely a big factor in the decline. "Curiously, ransomware operators, a primarily financially motivated group, are abstaining from cashing out more than ever," they said. "We attribute this largely to increased caution and uncertainty amid what is probably perceived as law enforcement's unpredictable and decisive actions targeting individuals and services participating in or facilitating ransomware laundering, resulting in insecurity among threat actors about where they can safely put their funds."

    Finally, Jon Miller, CEO and co-founder of ransomware prevention specialist Halcyon, suggested there may be another factor partially explaining the decline. "2024 was a major election year in the US, with a lot at stake for nation-states like Russia, who give safe harbour to ransomware operators," he said.

    "The 2022 lull has in part been attributed to Russia redirecting some cyber criminal resources to conduct more state-supported operations against Ukraine and their western supporters, so this decline in payments could also be in part the result of the most talented ransomware operators being yet again pulled off their cyber criminal activities to support Russian state priorities around the US election - so the drop was most precipitous in the second half of the year."
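    As a quick check of the headline figure, the year-on-year decline implied by the two totals quoted above can be recomputed directly (a minimal sketch using only the figures given in the article):

```python
# Year-on-year change in total ransomware payments, using the Chainalysis
# figures quoted above: $1.25bn in 2023 versus $813.6m in 2024.
total_2023 = 1_250_000_000
total_2024 = 813_600_000

decline = (total_2023 - total_2024) / total_2023
print(f"{decline:.1%}")  # ~34.9%, consistent with the headline "fell over 30%"
```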
  • US lawmakers move to ban DeepSeek AI tool
    www.computerweekly.com
    US lawmakers in Washington DC have this week moved to enact a national ban on the use of DeepSeek, the breakout Chinese generative artificial intelligence (GenAI) tool that sprang to prominence and wiped billions off the value of US tech companies at the end of January.

    The No DeepSeek on Government Devices Act is a bipartisan piece of legislation introduced by Democratic congressman Josh Gottheimer and his Republican counterpart, Darin LaHood, who represent districts in the states of New Jersey and Illinois respectively. The legislation will seek to ban the use and download of DeepSeek's AI software on government devices.

    Several other countries have already taken such steps, including the Australian government, which blocked access to DeepSeek on all government devices on national security grounds, and Taiwan. The Italian data protection authority has announced limitations on the processing of Italian users' data by DeepSeek, and other countries are also considering action. In the US itself, several bodies have already moved to ban the application, including the state of Texas, which is now restricting its use on state-owned devices, and the US Navy.

    Similar to the controversial TikTok ban - currently on hold for 75 days following an executive order signed by President Trump - the US's attempts to restrict the use of DeepSeek reflect the Western bloc's long-held concerns over the ability of the Chinese government to co-opt any user data at will from technology organisations.

    "The national security threat that DeepSeek - a CCP [Chinese Communist Party]-affiliated company - poses to the United States is alarming," said LaHood. "DeepSeek's generative AI program acquires the data of US users and stores the information for unidentified use by the CCP. Under no circumstances can we allow a CCP company to obtain sensitive government or personal data. This common-sense, bipartisan piece of legislation will ban the app from federal workers' phones while closing backdoor operations the company seeks to exploit for access. It is critical that Congress safeguard Americans' data and continue to ensure American leadership in AI."

    Read more about DeepSeek:
    - DeepSeek, a Chinese AI firm, is disrupting the industry with its low-cost, open source large language models, challenging US tech giants.
    - The introduction of DeepSeek's GenAI models has been met with fervour, but security issues have created apparent challenges for the Chinese startup.
    - DeepSeek, which gained popularity recently for its AI platform, did not specify the cause of large-scale malicious attacks, which continue to disrupt new account registrations.

    Gottheimer added: "The Chinese Communist Party has made it abundantly clear that it will exploit any tool at its disposal to undermine our national security, spew harmful disinformation and collect data on Americans. Now, we have deeply disturbing evidence that they are using DeepSeek to steal the sensitive data of US citizens. This is a five-alarm national security fire. We must get to the bottom of DeepSeek's malign activities. We simply can't risk the CCP infiltrating the devices of our government officials and jeopardising our national security.
    "We've seen China's playbook before with TikTok, and we cannot allow it to happen again."

    Meanwhile, a separate bill - the Decoupling America's Artificial Intelligence Capabilities from China Act - introduced by Republican senator Josh Hawley, who represents Missouri and is often outspoken on tech and privacy issues in the US, seeks to penalise the importation of technology or intellectual property developed in China, with penalties including up to 20 years in prison and fines of up to $100m for organisations that violate it. Hawley's bill does not explicitly mention DeepSeek.

    "There has been a significant level of nervousness around the use of non-allied technology in government and military settings going back many years. Huawei is just one example," said Mel Morris, CEO of AI research engine Corpora.ai. "One could argue that this is just a prudent measure to ensure that devices cannot be compromised by a potential adversary. In the event of a conflict, there are no rules, so whatever assurance or confidence levels might exist would likely go out of the window - I suppose the old adage 'all's fair' applies. On the flip side, running an air-gapped service using DeepSeek, many would argue, could be made to be safe and considered so."

    Ilia Kolochenko, ImmuniWeb CEO and BCS fellow, said that even though the risks stemming from the use of DeepSeek may be reasonable and justified, politicians risked missing the forest for the trees and should extend their thinking beyond China. "Numerous other GenAI vendors from different countries - as well as global SaaS platforms, which are now rapidly integrating GenAI capabilities, oftentimes without properly assessing the related risks - have similar or even bigger problems," he said.

    "Whilst DeepSeek's risks should certainly not be discounted or underestimated, we should remember the fundamental risks and problems of all other GenAI vendors. Many of them unwarrantedly scraped proprietary and copyrighted content from the internet to train their powerful LLMs without ever asking for permission from content creators or copyright owners, now vigorously denying any wrongdoing under varying untenable pretexts. The unfolding DeepSeek incident shall not be exploited as a convenient reason to suddenly forget about serious violations and AI-related risks posed by other GenAI vendors," said Kolochenko.
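    On Morris's air-gapped point: a hedged sketch of what local, offline inference can look like with the open source Hugging Face transformers library. The model identifier is assumed to be one of DeepSeek's publicly released distilled checkpoints, copied onto the isolated machine in advance; local_files_only=True stops the library from reaching the network at load time. This illustrates the deployment pattern only, and is not a claim that such a setup satisfies any particular security policy:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed model id: one of DeepSeek's publicly released distilled checkpoints.
# The weights must already be present on the isolated machine;
# local_files_only=True prevents any network access when loading.
MODEL = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"

tokenizer = AutoTokenizer.from_pretrained(MODEL, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(MODEL, local_files_only=True)

# Run a prompt entirely on the local machine; nothing leaves the host.
inputs = tokenizer("Summarise the key risks of shadow IT.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```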
  • Secure software procurement in 2025: A call for accountability
    www.computerweekly.com
    The software security landscape is at an interesting juncture. As Jen Easterly, the former director of the Cybersecurity and Infrastructure Security Agency (CISA), pointed out, there is a lesson to be drawn from the automotive industry of the 1960s. Its approach of improving car safety through better designs - including seatbelts, crumple zones and reinforced frames - proved far more effective at saving lives than responding to accidents after they occurred.

    Software providers need to take the same approach and deliver secure solutions by design, moving from reactive risk management to proactive accountability for tackling escalating cyber threats. That will require a clear understanding of the fragmented and overlapping nature of many of the industry standards, the adoption of innovative tools like Cyber Protection Level Agreements (CPLAs), and through-life service management.

    There has been a rapid expansion in the number of security frameworks across the world in recent years, including ISO 27001 (and ISO/IEC 27034), NIST, OWASP and the EU Cyber Resilience Act. Many organisations, often driven by regulatory, client or customer pressures, have tried to follow these standards. However, for those operating across multiple geographic jurisdictions, these overlapping and, at times, conflicting standards make compliance difficult. When it comes to software procurement, many CISOs are struggling with supplier evaluation and worried about the likelihood of gaps in security.

    Without a unified approach to standards, organisations risk exposure to vulnerabilities that exploit those gaps. Supply chain attacks, like the SolarWinds Sunburst breach or the CrowdStrike software update incident, are reminders of what is at stake: operational disruption, legal action and damage to stakeholder trust. More consistent standards would not only mitigate these risks but simplify compliance.

    CPLAs offer a practical solution by formalising suppliers' security commitments within procurement contracts. They provide a way to ensure that secure software suppliers are thinking about compliance with the many cyber security standards, both current and future. Modelled on service level agreements (SLAs), CPLAs define measurable standards, such as vulnerability assessments, patching timelines and incident reporting protocols, to create clear and enforceable obligations. Ambiguity in supplier commitments often leads to preventable risks, but by specifying requirements drawn from the applicable standards and regulation, CPLAs create accountability. This prevents suppliers from cutting corners and ensures a consistent level of protection.

    CPLAs should specify:

    - Time-to-patch guarantees: Critical vulnerabilities patched within 72 hours.
    - Software Bill of Materials (SBOM) transparency: Full disclosure of software components, including third-party libraries.
    - Incident response KPIs: Defined recovery time objectives and reporting obligations for breaches.
    - Lifecycle commitments: Ongoing updates and end-of-life transition plans.

    By setting out clear, enforceable targets for their suppliers, organisations should see reduced downtime, minimised attack vectors and fewer incidents. The sketch below illustrates how such terms can be made machine-checkable.
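    A minimal sketch, assuming CPLA terms are captured as structured data rather than prose so compliance can be checked automatically instead of argued over after an incident. The field names, the 24-hour reporting figure and the check itself are illustrative, not drawn from any published CPLA template; only the 72-hour patch window comes from the list above:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical, minimal encoding of CPLA terms as data.
@dataclass
class CPLA:
    critical_patch_window: timedelta   # e.g. 72 hours for critical vulnerabilities
    sbom_required: bool                # full component disclosure
    breach_report_window: timedelta    # incident reporting obligation

STANDARD_CPLA = CPLA(
    critical_patch_window=timedelta(hours=72),
    sbom_required=True,
    breach_report_window=timedelta(hours=24),  # illustrative figure
)

def patch_within_sla(disclosed: datetime, patched: datetime, cpla: CPLA) -> bool:
    """True if a critical vulnerability was patched inside the agreed window."""
    return patched - disclosed <= cpla.critical_patch_window

# Example: disclosed Monday 09:00, patched Thursday 08:00 -> 71 hours, in window.
ok = patch_within_sla(datetime(2025, 2, 3, 9), datetime(2025, 2, 6, 8), STANDARD_CPLA)
print(ok)  # True
```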
    Secure software procurement also requires ongoing management. This can be achieved with through-life service management, which includes regular audits, vulnerability monitoring and clearly defined end-of-life plans to manage security from acquisition to decommissioning. Without through-life management, organisations risk inheriting unsupported or insecure software, leading to operational vulnerabilities and escalating costs.

    These risks underline the need to embed security into procurement and make sure it is seen as a continuous process, not a one-off task. This starts by aligning procurement with long-term security goals and requiring vendors to demonstrate secure-by-design principles. CPLAs should be integrated into contracts, and vendors' SBOMs and secure development practices evaluated as part of the process.

    At the point of service transition, software must be validated through rigorous testing, such as penetration tests. Once the service is in operation, performance needs to be monitored against CPLA metrics. All this should be accompanied by a focus on continual service improvement, leveraging incident reviews to learn lessons for future contracts. Embedding these principles puts organisations in a stronger position when negotiating with suppliers.

    The Security Think Tank on secure software:
    - Tyler Shields, ESG: 'Unsafe At Any Speed'. Comparing automobiles to code risk.
    - Aditya K Sood, Aryaka: Vigilant buyers are the best recipe for accountable suppliers.

    Artificial intelligence (AI) can also play a pivotal role in navigating this fragmented security landscape. Current processes for assessing and harmonising standards are manual, inconsistent and error-prone. AI tools equipped with natural language processing can map overlaps between standards, creating unified requirements that are traceable to the original frameworks and saving time for procurement teams. Emerging real-time compliance monitoring tools have the potential to enforce security obligations automatically, reducing human error and increasing efficiency.

    While CPLAs and AI tools offer internal solutions, systemic change requires collaboration. Buyer consortia and regulatory alignment, as seen with the EU Cyber Resilience Act, can establish universal security baselines. This kind of collaboration reduces duplication, streamlines compliance and lowers costs for suppliers and buyers alike. Universal standards create a level playing field, making it easier for organisations to identify secure and reliable vendors.

    Secure software procurement in 2025 is vital. By unifying fragmented standards, enforcing supplier accountability through CPLAs, and adopting through-life service management, organisations can mitigate risks and improve resilience. The stakes are high, but so are the opportunities. Acting decisively now can protect organisations from cyber threats and reshape the software industry into one that prioritises security as much as innovation.

    Robert Campbell is a cyber security expert at PA Consulting.
  • DACH's venture capital prominence: an outlier, or the start of a golden age?
    www.computerweekly.com
    Switzerland can officially boast the world's fastest-growing venture capital (VC) ecosystem, with only Dubai and Singapore preventing Austria from making it a DACH double at the top. An exciting tech sector has undoubtedly contributed to these impressive regional rankings, but those at the heart of both ecosystems have slightly differing views on how sustainable the growth is.

    Pitchbook's annual global venture capital ecosystem rankings for 2024 revealed the perhaps surprising ascent of Austria and especially Switzerland. Despite both countries sitting much further down the broader development scores, the country growth index placed both among the top tier, with Switzerland at the summit.

    At first glance, the news reflects a concerted period of success for startups and scaleups in the DACH region, which would seem to bode well for the future too. Yet growth is relative. And regardless of the positive grounding and traits that both share, there are differing opinions on whether this growth will now drive Switzerland and Austria to the top of the VC development rankings as more entrenched and embedded powerhouses.

    "Switzerland and Austria have fostered thriving startup ecosystems, largely thanks to their strong academic hubs, such as ETH Zurich," says Markus Gleim, principal at renowned global VC fund Northzone. "These institutions have a consistent track record of producing top-tier talent and cutting-edge spin-outs, such as AI innovators LogicStar. And, as a result, both nations have been able to carve out strong niches in deep technological sectors like medtech and cleantech.

    "However, while signs of growth have been promising, to sustain this momentum both regions will need to create an environment that allows their emerging cohort of startups to become stable, high-growth companies. Addressing late-stage funding gaps and leveraging cross-border partnerships within the DACH region will help startups scale internationally. Even closer collaboration between universities and industry will also help prolong the growth of these startup hubs," he adds.

    Gleim's vision for the transition from scaleup to global, high-growth company can, in part, be realised through these practical, collaborative steps. However, Felix Ohswald, CEO and co-founder of education gamechanger GoStudent, believes there is also a mindset element to address. To this end, he suggests that the success of GoStudent itself has partially contributed to an outlier period which has launched the company's native Austria towards the top of the VC growth rankings.

    "I was a bit surprised when I read that Austria is one of the fastest-growing countries in this context," he says. "It needs to be understood in terms of your starting size, so when you then have some very big financing rounds for companies in a smaller ecosystem, the relative growth is proportionally higher. You can also then see this doesn't equate to the wider development scores. The 2018-2024 period has been a bit of an outlier."

    Ohswald is quick to emphasise that this is not to undermine the success of companies that have risen to prominence during this period, or to downplay the success of the regional VC ecosystem in capitalising on those opportunities. His only concern is that the default DACH mindset may now prevent the region from leveraging these successes and demanding more sustainable competitiveness.

    "These past few years in both Austria and Switzerland, it has been proved how many great people, ideas and companies there are in these environments," he adds.
    "But that just means we can grow and ignite more of these companies more frequently. It all starts with the drive and mindset. This needs to come from the very top, with country leaders who are proud to not only support and spark new companies, but to push them to be the best globally.

    "This is independent of socio-economic circumstances and, even to an extent, of the leading academic frameworks we have here," Ohswald adds. "Mindset means having something in you to build better, faster. Less red tape and bureaucracy can certainly help to speed things up, and why not elevate our education institutions even further?

    "What this ranking shows, if nothing else, is that we have all the ingredients here to be the best. So, let's not be shy in aiming for that. Not everybody has to be good at everything. The mindset shift I'd be looking for is to promote an elite ecosystem that encourages the best minds here, that pushes the best to the top, and that then has all those favourable conditions to push these cool ideas out to the world to benefit as many people as possible."

    While everyone is keen to capitalise on recent growth, there are some within the DACH ecosystem who foresaw this golden age long before. In particular, Andre Retterath, partner at Earlybird Venture Capital, predicted a European-wide purple patch driven by talent, collaboration, funding, success stories and, interestingly, mindset.

    "The rankings, and Switzerland being at the top in particular, came as no real surprise to me," he says. "I completely agree with the relativity element, as they both have small denominators in terms of funding, against a situation where lots of big funding then came into the country. What doesn't surprise me, however, is that the money did come. The region houses so much great research, access to industry and especially big tech, and is in close proximity to ecosystems in Germany, France and the UK. The only surprise is that so little funding came into the ecosystem before."

    Retterath has comprehensively explored and dissected the notion of ecosystems, and deduces that a successful golden age relies on different components coming together in harmony: smart minds, seamless business formation and a collaborative environment. Beyond these factors, he is also keen not to compare countries such as Switzerland and Austria to the likes of Dubai and Singapore, which split the two DACH entries in Pitchbook's growth scores.

    "The other element a successful VC ecosystem needs is its own personality," he affirms. "What makes us great? Look at ETH Zurich or EPFL, the Swiss Federal Institute of Technology in Lausanne. These are top academic institutions. Then look at the region's history with big tech, or with finance, or medtech. Look at their socio-economic groundings. It's a great place to live or to relocate to - very clean, neutral, highly ambitious.

    "To me, the strength of the DACH VC ecosystem is the DACH region itself - its own personality and capabilities. Singapore and Dubai are great for their own reasons, but I'd say our fundamentals are as naturally suited to startup success and VC activity as any."

    The potential of DACH's VC ecosystem is doubtless. From those within and those looking in, the only disagreement is around whether the recent ascent in the global context was inevitable or not.
For some, like Gleim and Ohswald, recent success stories are a sign of what could be, but are in danger of remaining outliers if Austria and Switzerland don't actively strive to convert growth into longer-term development.
For others, DACH has been a sleeping giant, with its tech prowess and VC support finally combining to put Austria and Switzerland in the global consciousness.
Alan Poensgen, partner at Antler, a global VC fund targeting day-zero investments, concludes: "Both the Swiss and Austrian ecosystems have always been promising. Yet, historically, many successful founders left once their startups reached a certain stage; other ecosystems with proximity to larger pools of talent or capital were too attractive.
"Lately, we've seen the flywheel in both ecosystems begin to spin faster: more innovation, more exits, more liquidity, human capital moving through, Zurich outperforming even Cambridge when it comes to unicorn founders."
Regardless of what the future has in store, everyone agrees that there is reason to be excited right now. All the ingredients are there. The question is, has the golden age only just begun?
Read more about tech innovation in Austria and Switzerland
  • Tech job postings dropped in 2024, according to research
    www.computerweekly.com
News
Tech job postings dropped in 2024, according to research
Job postings for several IT roles returned to pre-pandemic levels last year, dropping when compared with 2023
By Clare McDonald, Business Editor
Published: 07 Feb 2025 10:30

The number of advertised IT jobs dropped year-on-year in 2024, according to research by the Recruitment and Employment Confederation (REC).
REC figures showed a drop in postings across most tech roles, marking a return to pre-pandemic hiring levels.
The REC isn't the only firm noticing these changes, with Matt Monette, country lead at Deel, seeing similar patterns.
"After years of aggressive tech hiring, widespread layoffs over the past two years have made companies more cautious about rebuilding headcount, leading to a decline in job postings for certain roles," he said.
"That said, [Deel's] data shows a growing demand for specialist professional services roles. For example, we saw a 74% increase in hiring of accountants globally, suggesting businesses are prioritising financial stability in an uncertain economic climate."
The tech hiring landscape has seen peaks and troughs over the past five years. During the pandemic, demand for IT professionals rose as the world turned to technology to complete everyday tasks from home due to global lockdowns; this was followed by the great resignation, when swathes of tech workers left their jobs looking for new opportunities both in and outside the sector.
Though the REC figures suggest a drop-off in this demand for tech professionals, interest in tech talent is still high, as the rapid development and adoption of technology forces businesses and individuals to arm themselves with the digital capabilities required in the modern world.
Read more about tech hiring
Most businesses now have a CISO, but perceptions of what CISOs are supposed to do, and confusion over the value they offer, may be holding back harmonious relations.
Women make up a small proportion of the tech sector despite accounting for almost half of the UK workforce, but are efforts to attract more women into the sector a distraction from trying to keep the women the sector already has?
The REC's research, conducted in partnership with labour market data and analytics firm Lightcast to collate data from thousands of job board sites, found most IT professions saw at least a 40% year-on-year drop in advertised roles in 2024, with postings for IT user support technician roles suffering the largest drop, at 47.1%.
Demand for IT networking professionals declined by 40.7% in 2024, from 60,967 advertised jobs in 2023 to 36,172 last year, and advertisements for management consultants/business analysts and web designers both dropped by almost 42% in the same period.
While programmers and software developers remain some of the most sought-after talent in the tech sector, with 192,261 advertised roles in 2024, this was still a 44.8% year-on-year drop compared with the 348,446 postings the previous year.
There are many reasons why these declines in advertised jobs may be taking place. A lack of skilled workers to fill tech roles has made hiring the available tech talent more competitive.
This has created a trend towards training talent internally to ensure access to workers with the right skills.
Budget concerns resulting from the unpredictable economic climate have also had many employers dialling back on tech projects, in turn leaving workers worried about redundancies and businesses putting hiring on pause.
But REC chief executive Neil Carberry said the hiring landscape in the tech sector will bounce back once the economy begins to recover and investment in technologies such as artificial intelligence increases.
"It was a particularly difficult year for IT professionals looking for new jobs, a complete reverse of demand during and just after the pandemic, which was very high," he said. "These roles will bounce back quickly as companies invest in IT transformation during an economic recovery."
  • UK's Cyber Monitoring Centre begins incident classification work
    www.computerweekly.com
News
UK's Cyber Monitoring Centre begins incident classification work
The Cyber Monitoring Centre will categorise major incidents against a newly developed scale to help organisations better understand the nature of systemic cyber attacks and learn from their impact
By Alex Scroxton, Security Editor
Published: 06 Feb 2025 17:18

The Cyber Monitoring Centre (CMC), a new UK-based project that will independently declare and classify systemic cyber attacks against a dedicated classification scale, has formally begun its work. Its objective is to help organisations understand the nature of security incidents with widespread impacts.
Initially a joint project between law firm Weightmans and insurer CFC, the CMC will declare and classify systemic incidents on a scale of one to five, where one is the least severe type of incident and five the most dangerous and disruptive. It was initially designed as an aid to the insurance industry, but the results of its work will be freely available to all security risk owners.
It hopes to bring greater clarity and transparency to complex incidents, and to help organisations react to them better and prepare for future ones.
"The risk of major cyber events is greater now than at any time in the past, as UK organisations have become increasingly reliant on technology. The CMC has the potential to help businesses and individuals better understand the implications of cyber events, mitigate their impact on people's lives, and improve cyber resilience and response plans," said CMC CEO Will Mayes.
When a systemic incident occurs, defined by the CMC as one with a financial impact greater than £100m, affecting multiple organisations, and where there is data or information available to enable assessments, the CMC's Technical Committee, led by former National Cyber Security Centre (NCSC) chief executive Ciaran Martin, will measure key factors against the CMC's core framework to make an effective judgement as to the incident's classification.
These factors are:
External polling on an incident, for which it is partnering with the Office for National Statistics (ONS) and the British Chambers of Commerce;
Observable technical indicators and incident data drawn from, for example, news reports, NHS or ONS data, and partnerships with third parties such as risk analytics house Parametrix, among others;
And modelling against previous incidents, such as 2024's CrowdStrike outage, and conversations with individuals involved in the incident, such as victims, incident response and cyber forensics teams, lawyers, insurance claims handlers and industry bodies.
The CMC said the target timeframe to categorise an event against these criteria will be 30 days, although this is not set in stone. Each published categorisation will be supported by an event report summarising the committee's analysis and providing additional insights from its work.
Committee chair Martin said that up to now, measuring the severity of cyber security incidents had been a big challenge.
"This could be a huge leap forward [and] I have no doubt the CMC will improve the way we tackle, learn from, and recover from cyber incidents.
"If we crack this, and I'm confident that we will, ultimately it could be a huge boost to cyber security efforts, not just here but internationally too," he said.
Mayes added: "I would also like to acknowledge the support from a wide range of world-leading experts who have contributed so much time and expertise to help establish the CMC, and continue to provide data and insights during events. Their ongoing support will be vital, and we look forward to adding further expertise to our growing cohort of partners in the months and years ahead."
Read more about cyber incident response
What goes into a good incident response plan, and what steps should security professionals take to ensure they are appropriately prepared for the almost inevitable attack, and secure buy-in from organisational leadership?
Organisations need to take a focused approach to gain visibility into targeted threats for cyber-risk mitigation and incident response.
The high-rolling city of Las Vegas experiences unique cyber security challenges rarely seen elsewhere. CIO Mike Sherwood reveals how he turned to Darktrace to help address incidents more quickly and with confidence.
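The CMC's published definition gives enough structure for a simple triage check. Below is a minimal sketch in Java, assuming a hypothetical Incident record; it encodes only the systemic-event criteria reported above (financial impact above £100m, multiple organisations affected, assessment data available). The one-to-five banding itself is the Technical Committee's judgement, so it appears only as an enum rather than being computed.

```java
import java.util.List;

// Hypothetical shapes for illustration only; the CMC has not published a data model.
record Incident(String name, double financialImpactGbp,
                int organisationsAffected, boolean assessmentDataAvailable) {}

// Categories one to five; assignment is the committee's judgement, not computed here.
enum CmcCategory { ONE, TWO, THREE, FOUR, FIVE }

public class SystemicEventTriage {
    // Encodes only the systemic-event definition reported in the article.
    static boolean isSystemic(Incident i) {
        return i.financialImpactGbp() > 100_000_000d
                && i.organisationsAffected() > 1
                && i.assessmentDataAvailable();
    }

    public static void main(String[] args) {
        List<Incident> queue = List.of(
                new Incident("Managed service provider outage", 250_000_000d, 40, true),
                new Incident("Single-company ransomware case", 80_000_000d, 1, true));
        for (Incident i : queue) {
            System.out.printf("%s -> systemic: %b%n", i.name(), isSystemic(i));
        }
    }
}
```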
  • Data maturity survey finds a quarter of organisations with no strategy
    www.computerweekly.com
News
Data maturity survey finds a quarter of organisations with no strategy
One quarter of organisations surveyed for the third Carruthers and Jackson Data Maturity Index were found to have no data strategy, even as they increase artificial intelligence engagement
By Brian McKenna, Enterprise Applications Editor
Published: 06 Feb 2025 17:40

According to research published by data management consultancy Carruthers and Jackson, 26% of organisations, mostly in the UK and US, lack a formal data strategy, and 39% have little or nothing in the way of data governance frameworks, but are increasing their use of artificial intelligence (AI) regardless.
Nevertheless, data leaders surveyed in the consultancy's third Data Maturity Index study testify to an evolution from one-size-fits-all data governance frameworks to ones that are more tailored to departments in their organisations.
The survey found 37% of data leaders reporting the adoption of multiple governance frameworks, a rise from 31% in 2023.
Carruthers and Jackson co-founder and chief executive Caroline Carruthers said in a briefing with Computer Weekly: "That is important because that way we can have more confidence that the right data is being looked after in the right way. With one-size-fits-all, you tend to come down to the lowest common denominator, which means you're not treating your crown jewels of data in a way that gives it the respect it deserves."
The study was carried out in November 2024 among almost 200 data leaders, drawn from the community the consultants have developed through the chief data officer (CDO) summer school Carruthers and her colleague Peter Jackson have been running since 2018. Carruthers confirmed she interviewed 10 of the CDOs separately from the survey.
In a statement accompanying the report, Carruthers and Jackson said the study points to persistent gaps in foundational data management practices. The rapid adoption of AI is further complicating the data landscape: in just 12 months, the share of organisations not using AI has plummeted, with just 7% now reporting no AI usage, a significant drop from 26% last year.
Carruthers said a positive benefit of the increasing prominence of AI in organisations is that it has brought to the fore ethical considerations connected with data use. But the consultancy said that although 44% of organisations have seen a moderate rise in ethical discussions around AI, only 13% have formalised these conversations into structured policies. Simultaneously, while 53% of organisations report an increase in AI usage, more than half (57%) admit that most employees still lack data literacy.
Carruthers commented: "An AI paradox has been created, as the use of AI tools in organisations has surged in the last year, yet employees lack the data literacy to use them effectively, as their fundamental understanding of data remains largely unchanged from last year. Overcoming this requires tailored, scalable training and AI-focused upskilling.
"Simultaneously, the adoption of AI means data needs to be cleaner, and teams need to raise their data standards, otherwise the transformative benefits of new technology simply won't be realised. Encouragingly, more leaders are recognising the need for flexible, department-specific governance to drive data maturity.
"We are seeing the data landscape begin to evolve, as data leaders are grappling with greater complexity and a deeper appreciation of the requirement for taking a tailored approach to data management. We're also seeing the role of data leader develop too: rather than having a CDO who knows everything, we're seeing the emergence of data lineage experts, data governance experts, data observability specialists and so on."
In the report, Andrew Lunt, data management director at Carruthers and Jackson, is quoted as saying: "It's still worrying to see that almost 40% of organisations have little to no data governance in place. I think my message to the ones that haven't embarked on this journey yet is think big but start small. In most cases, problems with your data can't be solved overnight. We've helped organisations to understand how to approach data governance at a high level in a few hours, and then helped them start small and build up to something very capable."
In terms of AI, another consultant at the firm, Ashley Cairns, reported: "We've already worked with a number of clients where employees have inadvertently put customer or company data into public AI tools without knowing this was even possible. There has to be more training around AI tools to avoid this situation."
Among the CDOs quoted in the report is David Prime, head of strategy, data and insights at the Football Association. He said: "Data strategy and governance frameworks are critical elements of a high-performing data capability. However, the results of this report are no surprise to me; the high-profile and matrixed nature of data makes aligning on these key elements difficult. It is critical not to adopt a one-size-fits-all approach and to find solutions that fit the specific opportunities and challenges of your organisation."
Read more about data literacy and maturity
How to measure data literacy in the AI era: Improving data literacy as an individual or organisation starts with measuring where you are. An AI and data framework provides six criteria to track progress.
A data governance maturity model identifies where current operations are lacking and how to make improvements that better protect and use data.
Author and data management consultant Caroline Carruthers argues that data leaders need to tap into their creativity to make their analytical work pay off.
  • As Java turns 30, developers switch to OpenJDK
    www.computerweekly.com
The latest State of Java report from Azul Systems shows that the 30-year-old programming language has moved with the times and is being used for advanced applications, such as adding artificial intelligence (AI) capabilities.
More than 2,000 Java users were surveyed for this year's report. Half (50%) of them were found to be building AI-enabled functionality that uses Java, surpassing the use of other popular languages, such as Python, that are more culturally associated with AI. This, according to Azul Systems, highlights Java's fit-for-purpose nature, offering scalability, extensive libraries and seamless integration with existing enterprise systems.
Java uses a runtime platform and software development environment known as the Java Development Kit (JDK) for developing and running applications. It is one of the main languages used to build enterprise systems.
While Oracle sells the JDK commercially as Oracle JDK, there is also an open source version called OpenJDK.
"All of us who are involved in OpenJDK are advancing Java so that it can much more rapidly integrate with AI capabilities," said Scott Sellers, CEO of Azul Systems.
According to Sellers, a lot of traditional application developers are using Java to build their AI-enabled applications, using application programming interfaces (APIs) to send queries to the large language model (LLM).
This is a very different approach to that taken by data scientists, who need to run ad hoc queries on the data using a language such as Python.
"Production-level applications need to handle hundreds of thousands and millions of users simultaneously, and good old Java is the best out there because of its scale, resiliency and security," said Sellers.
He also pointed out that Java has been tried and tested over the decades, which makes it an extremely stable and well-understood platform for running enterprise applications.
However, cost has become a barrier for some organisations, given the licensing changes Oracle has made to Oracle JDK. According to research from Gartner, these make it two to five times more costly than the licensing model the subscription replaced.
Gartner's "3 steps to manage exposure for Oracle Java SE licensing" report, published at the end of January, notes: "If anyone in your organisation has downloaded any Oracle Java SE updates since April 2019, you probably need a subscription and you may have a compliance risk. You may determine that you want an Oracle Java SE subscription if you need a commercial support agreement, particularly if you are using a very old or new release of Java, such as Java 7 or 21."
Beyond the Java subscription cost, Azul's survey shows that some organisations are choosing not to buy maintenance and support for Oracle JDK.
Of the survey participants who do not pay for Java support, 21% cited expense as a deterrent, 31% said it was not a priority, and a significant 52% believed they simply did not need it.
According to Azul Systems, this divide highlights the trade-offs organisations face between upfront costs and the long-term value of secure, reliable application performance, particularly in environments where stability and security are non-negotiable.
Azul Systems believes the growing dissatisfaction reflects pressing concerns about affordability, fuelled in part by organisations re-examining their long-term strategies for managing Java licensing and support costs, and is driving a search for more predictable and sustainable options.
In Azul's previous survey, 72% of Oracle Java users were already considering a switch to another JDK provider. That has surged to 88% in the latest Azul survey.
Although 88% are considering switching from Oracle and 82% are concerned about Oracle Java pricing, Sellers said some of the respondents were not directly affected by the price increases, since the cost of Java sits in someone else's budget. A developer whose whole life is about Java may not see a direct budget impact because someone else is paying the licence fee, he said.
Read more about OpenJDK
Oracle Java licensing explained, addressing complexity, cost and audits: Now that Oracle Java SE is sold as a subscription, organisations using the Oracle JDK have a minefield of licences to navigate.
Tips for migrating to OpenJDK: Interest in OpenJDK, and in commercial support for it, has intensified as Oracle Java SE becomes increasingly expensive.
Application owners are also not the people who ultimately pay for Java. The cost tends to be hidden, since it is considered infrastructure, in the same way that facilities and internet access are budgeted as infrastructure. Software and infrastructure costs that are shared across applications cannot be managed by application owners, which means, according to Sellers, that they focus on other ways to reduce costs, rather than looking at the Oracle JDK licence bill.
The survey, according to Sellers, showed that users realise they do not need to use Oracle JDK because the same functionality is available from OpenJDK. "Why would you choose something that's commercially licensed with restrictions, as opposed to open source?"
It tends to be the head of IT or CIO who ultimately makes the decision and is able to force a change.
Sellers said Oracle is extremely aggressive in terms of audits, where it often demands usage reports from users. "If you don't want to deal with software audits, then you may as well just get off Oracle and move on to something that's inherently open source and doesn't require commercial software licensing," he added.
One of Azul Systems' focus areas is helping enterprises understand their inventory of Oracle Java and working with them to deliver what Sellers calls a like-for-like replacement. This can be particularly difficult, since many different versions of Java may be in use, each of which needs replacing with the correct version of OpenJDK to ensure Java applications that rely on a specific version of the JDK do not break.
"One of the challenges that exists when an organisation is looking to move off Oracle Java is that Oracle provides about 1,000 updates a quarter," said Sellers.
This is further complicated by the fact that there may be patches for specific major releases of Java and minor releases.
"Unless you have a like-for-like equivalent for all of those different versions and subversions you're using from Oracle and you're trying to move, you can run into incompatibility problems, and that can be challenging."
Given its extensive footprint, Java is set to play a major role in enterprise IT for many years to come. However, IT leaders are highly likely to switch from Oracle JDK to cheaper options.
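Sellers' description of traditional Java applications sending queries to an LLM over an API is straightforward to picture. The sketch below uses only the standard java.net.http client; the endpoint, model name and request shape are placeholders rather than any particular provider's API.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LlmQuery {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint and key: substitute your provider's URL and credentials.
        String endpoint = "https://llm.example.com/v1/chat";
        String apiKey = System.getenv("LLM_API_KEY"); // assumed to be set in the environment

        // Many hosted LLM APIs accept a JSON body along these general lines.
        String body = """
                {"model": "example-model", "messages": [{"role": "user", "content": "Summarise this order history."}]}""";

        HttpRequest request = HttpRequest.newBuilder(URI.create(endpoint))
                .header("Authorization", "Bearer " + apiKey)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        // Synchronous call for clarity; production code would use sendAsync and handle retries.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```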
  • Samsung's on-device AI: A Computer Weekly Downtime Upload podcast
    www.computerweekly.com
Podcast
Samsung's on-device AI: A Computer Weekly Downtime Upload podcast
We speak to the co-founder of Oxford Semantic Technologies, which has developed the AI for the new Samsung Galaxy S25
By Cliff Saran, Managing Editor

A few weeks ago, Ian Horrocks took to the stage at an event which featured DeepMind's co-founder, Demis Hassabis, to help Samsung launch its latest phone, the Galaxy S25.
He says: "It was pretty amazing for me as the sort of humble academic, finding myself in that setting. But it was of course great as well, and a culmination of many, many years of research."
Horrocks is co-founder of Oxford Semantic Technologies, a startup spun out of Oxford University, which has developed a knowledge graph AI system called RDFox that powers the artificial intelligence (AI) in the latest Samsung Galaxy S25.
Samsung acquired Oxford Semantic Technologies in July 2024, and the Galaxy S25 is the first new smartphone from the manufacturer to include AI post-acquisition.
Explaining the work with Samsung, Horrocks says: "One of the reasons why Samsung was so excited about our knowledge graph system is the fact that it can actually run on the phone. You can build it with a relatively small footprint and relatively small compute requirement."
Among the benefits of using on-device AI, as Horrocks points out, is that you do not need to move potentially sensitive personal data off into the cloud. "You can do everything on your own device, so you're in control. The AI on the phone can't use what it can't see and isn't sharing your sensitive personal data."
Computer Weekly spoke to Horrocks the week the world of AI was disrupted by China's DeepSeek. While there are many questions over how it was developed, he says: "I still think it's very interesting how it challenges the orthodox view that generative AI is just all about compute power for training and inference."
Horrocks' field of expertise is knowledge representation and reasoning. During the time he has worked in this area, Horrocks says: "I soon learned that the impact of algorithms and optimisations can vastly outstrip the impact of better hardware."
Among the challenges of knowledge-based AI is what Horrocks calls the scalability of logical reasoning. "I was working with a brilliant colleague at Oxford called Boris Motik, who had the idea of addressing this problem using a combination of modern computer architecture with some very clever, novel data structures and algorithms."
Horrocks says Motik's approach did not focus just on the computational power available in new hardware. He also had some brilliant algorithmic ideas, and demonstrated that this could produce a really scalable knowledge graph and reasoning system.
After joining forces with Motik, the pair faced the challenge of moving from academic research to the next level. He says: "We really needed to start a company that could provide a more robust implementation with all of the kind of bells and whistles that real industry users require, so we joined forces with another colleague, Bernardo Cuenca Grau, and with some more entrepreneurial people who knew how to set up and run a company."
For Horrocks, DeepSeek effectively shows the world that there are other ways to achieve results with AI, which do not need to follow the same path as the current batch of large language models. "I was always a bit sceptical about the idea that we basically know how to do AI.
"The problem has been sort of solved, and all we need to do is crank up the hardware and invest billions in equipment," he adds.
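RDFox itself is a commercial product and its API is not shown in the article, but the general idea it implements, a knowledge graph materialised by rule-based reasoning, can be illustrated with an in-memory triple store and a single transitive rule. Everything below is a generic sketch, not Oxford Semantic Technologies' code.

```java
import java.util.HashSet;
import java.util.Set;

// A triple is the basic unit of a knowledge graph: subject-predicate-object.
record Triple(String s, String p, String o) {}

public class TinyGraph {
    public static void main(String[] args) {
        Set<Triple> graph = new HashSet<>(Set.of(
                new Triple("contact:Ana", "worksAt", "acme"),
                new Triple("acme", "locatedIn", "zurich"),
                new Triple("zurich", "locatedIn", "switzerland")));

        // Forward-chaining to a fixed point: treat locatedIn as transitive,
        // deriving new triples until nothing more can be added.
        boolean changed = true;
        while (changed) {
            Set<Triple> derived = new HashSet<>();
            for (Triple a : graph)
                for (Triple b : graph)
                    if (a.p().equals("locatedIn") && b.p().equals("locatedIn")
                            && a.o().equals(b.s()))
                        derived.add(new Triple(a.s(), "locatedIn", b.o()));
            changed = graph.addAll(derived);
        }

        // "Where is Ana's employer?" is now answerable from materialised facts.
        graph.stream()
                .filter(t -> t.s().equals("acme") && t.p().equals("locatedIn"))
                .forEach(t -> System.out.println("acme locatedIn " + t.o()));
    }
}
```

Materialising inferences up front in this way is one reason rule-based knowledge graphs can answer queries instantly, which matters on a device with a small compute budget.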
  • Swedish commission delivers roadmap to drive artificial intelligence reforms
    www.computerweekly.com
Sweden is poised to regain its status as the frontier Nordic nation for next-generation technology development, following the government's endorsement of a strategic roadmap and a 1.5bn increase in spending on artificial intelligence (AI).
Increased state support for AI follows the release of a landmark report, produced by a government-appointed commission, that sounded the alarm over the risks of the country falling further behind China, India and the US in the so-called AI race.
The commission's AI roadmap for Sweden (AI-RFS) report underlines the critical need for the country to scale up state partnerships with the private sector.
A standout message in the report warned that Sweden risked being left behind in the AI race if the government failed to act urgently to bridge the AI gap.
The AI-RFS report comprises 75 separate proposals, including a submission that the state invest an additional 1.5bn into AI development, innovation and technology usability programmes over the next five years to 2029.
Measures advanced in the AI-RFS report include a proposal urging the government to adopt a crisis mode approach to AI. The commission wants it to establish a special task force under the supervision of the prime minister's office to fast-track the implementation of critical AI measures raised in the report.
Additionally, the commission advocates the government's adoption of a radical AI-for-all reform that seeks to ensure that every household, business, research organisation and citizen in Sweden has free access to AI tools.
The democratisation of AI proposal aims both to reshape how the general public values AI and to elevate the technology to a central position in Swedish society.
The AI-for-all reform, once implemented, will allow free access to AI-driven tools such as ChatGPT, Gemini and Claude.
Public access will be routed through a state-managed AI hub, where users will be able to log in and gain access to paid versions of advanced AI tools for limited periods.
The AI-RFS report describes AI as having immense potential to unlock human creativity and drive innovation.
"The combination of human intelligence and AI can produce higher-quality work, faster," the report stated. "This allows for new forms of creativity and innovation, which are crucial for Sweden's economic future. AI can strengthen Sweden's ability to manage large-scale societal transitions. By adopting the right strategies and approach, Sweden has the potential to lead the world in AI adoption, just as it once led in personal computing and internet speed."
Read more about artificial intelligence in the Nordics
Denmark is gearing up to become a trailblazing nation in the development and use of artificial intelligence technologies.
Universities and technology businesses in Nordic countries are working cross-border as part of two pan-Nordic organisations.
Artificial intelligence partnership is designed to reduce the risk of major IT outages and keep the bank's customers happy.
The commission consulted over 150 organisations across the private and public sectors in drafting the report, drawing from the core areas of business, industry, legislators and the public sector.
Senior members of the high-powered commission are drawn from leading multinationals including Volvo, Vattenfall, McKinsey and Ericsson.
"There's an urgent need for Sweden to close the AI gap and reinstate its reputation as a major global technology player," said commission chairman Carl-Henric Svanberg, a former CEO of Ericsson.
He lamented that Sweden's global AI capability benchmark ranking had dropped significantly in recent years, due to insufficient AI development activity and investment by both the public and private sectors.
The necessity for political-led action is reflected in the commission completing the report over six months ahead of the deadline set by the government, said Svanberg. "Sweden is lagging behind in AI. As a result, the need for political action is urgent. The corrective measures proposed in the report are designed for fast uptake. Many, which are concrete and costed proposals, are ready for the government to roll out very quickly."
Over 20 of the proposed measures in the AI-RFS report are intended for further investigation, said Svanberg. "It is important that investigations into these specific constructive measures are not delayed unnecessarily."
The report concludes that fast-tracking the roll-out of proposed high-priority measures can be achieved with a high degree of confidence, on the basis of Swedish society's openness to change and positive attitude towards embracing new technology.
The stark messaging in the AI report amounts to a consequential wake-up call for Sweden's legislators and technology sector players alike, said Magnus Tyreman, chair of the Stockholm School of Economics (Handelshögskolan i Stockholm) and a commission member. "A significant issue for Sweden is that we are currently trailing in fundamental technological areas like cloud, next-generation software and connectivity," he said. "It is in AI technology where we find ourselves furthest behind. This is not a good place to be. We must reverse this situation."
The AI-RFS report paints a grim picture of the development and status of AI in both Sweden and Europe. The report describes a mini crisis of under-investment and slow technological advances in AI across EU member states generally, and bemoans the glaring gap in economic growth and value creation between the EU and its global rivals the US, India and China.
"Sweden may be ahead on AI compared to many parts of Europe, but it's far behind the US and China especially," said Tyreman. "On the positive side, Sweden has the conditions needed to improve its position. What we most require is clear, cohesive political leadership with much faster and bolder decision-making than what we have today."
The AI-RFS report advocates boosting general knowledge around AI while driving transformation through significant national investments in world-class research, combined with broader access to computational power and data. It proposes the development of a shared AI infrastructure to transform public services, and a regulatory framework to advance innovation and entrepreneurship.
Responding to the commission, the government said it intends to integrate the critical measures outlined to re-energise the national AI strategy.
The original National AI Strategy (NAIS), which was launched in May 2018, failed to deliver on the scope of its initial promise to create a basis for future policy actions and priorities in the AI domain.
In particular, the NAIS fell short of reaching agreed AI development targets in the key areas of education and training, research, innovation, AI use, AI infrastructure and the legislative framework. The tight budgetary constraints under which the NAIS operated meant it was unable to take full advantage of AI's many opportunities.
The Swedish government's appointment of a so-called council of inquiry, to investigate the best methods to accelerate deployment of 5G and fibre across Sweden, is also expected to add momentum and value to innovation-led national AI expansion and reforms.
Delivered to the ministry of public administration and digital policy (PA-DP), the commission's AI report will now undergo a comprehensive review ahead of prompt government action and initiatives during the course of 2025, said PA-DP minister Erik Slottner.
"We share the report's sense of urgency," he said. "AI is a competitive global race, and we do not want to be left behind. We must act now. Unfortunately, as other countries advanced rapidly, Sweden missed important opportunities in the early stages of the AI revolution. We need to undo this negative trend and become a leading world technology actor again."
  • Met Police spied on BBC journalists' phone data for PSNI, MPs told
    www.computerweekly.com
The PSNI had sought the support of the Met Police as far back as 2011 to monitor journalists working for the BBC in Belfast, MPs on the Northern Ireland Affairs Committee were told.
Belfast journalists Barry McCaffrey and Trevor Birney told the committee that there were suspicions that other police forces in the UK were also monitoring journalists' phones.
They were giving evidence after a tribunal ruled in December that the PSNI and the Metropolitan Police had unlawfully placed them under surveillance in an attempt to identify confidential sources.
Evidence disclosed at the Investigatory Powers Tribunal last year showed that during a four-month period in 2011, over 4,000 phone calls and text messages were monitored by the Met for the PSNI, Birney told the committee.
"Basically, a UK police force was spying on the state broadcaster, the BBC, and its journalists, and sharing that unlawful surveillance data with at least two other UK police forces," he added.
Birney told the MPs that he believed the PSNI's practice of trying to uncover police whistleblowers began when a former Met Police chief took over as chief constable of the then Royal Ulster Constabulary in 2002.
Hugh Orde introduced a policy to stop leaks by making it an offence for police officers to talk to journalists without the agreement of senior officers.
But what started as a defensive operation to crack down on police officers leaking to the press turned into an offensive operation that also monitored journalists to find out if police officers were among their confidential sources, he said.
McCaffrey said that phone data showed the Met Police had monitored phone calls made by journalists to other journalists.
"That's not a defensive operation, that's an offensive operation. That's spying on journalists to identify their sources," he said. By 2011, the PSNI were breaking rules "on an industrial scale", he claimed.
The journalists claimed the PSNI had repeatedly sought to bypass regulations designed to protect the confidentiality of journalists and lawyers.
In 2013, for example, McCaffrey had called the PSNI's press office to ask if it was investigating an allegation of corruption.
"That was a simple question. Are you investigating an allegation of corruption? Within 40 hours, Barry McCaffrey was turned into a criminal suspect," Birney told the MPs.
In December 2024, the Investigatory Powers Tribunal (IPT) found that Birney and McCaffrey had themselves been placed under unlawful surveillance by two UK police forces, which spied on their phone communications and suspected confidential sources.
The PSNI commissioned Angus McCullough KC in June last year to investigate allegations of unlawful surveillance of journalists, lawyers and other groups.
Birney told the MPs: "We don't believe that the review goes far enough. We think the remit is far too narrow.
"And we think that Angus McCullough, despite being a very experienced and knowledgeable KC, doesn't have the tools to get to the bottom of what's going on here."
Birney told the cross-party group of MPs that one of the problems with the review was that it had an arbitrary cut-off date of 2011.
"That isn't going to get to the bottom of where the spying operations emanated from, who ordered it, why, and what would be the culture that led to the incidents that we've seen at the IPT [Investigatory Powers Tribunal]."
Another problem with the review, the committee heard, was that it didn't have the power to look at the role played by other state institutions in monitoring journalists.
The IPT disclosed in October that former BBC journalist Vincent Kearney had been subject to surveillance at the same time as McCaffrey in 2011.
A barrister for MI5 and GCHQ told the IPT after a secret court hearing that MI5 would need a number of months to unearth documentation related to BBC journalism in Belfast, and would need to hire a security-cleared lawyer to do so.
That indicated there was an enormous amount of information that MI5 held on the BBC and its journalists, said Birney.
The committee also heard that the Tory MP David Davis had written to all police forces in the UK to ask if they had been doing the same thing as the PSNI, but had been met with silence, suggesting that other forces may also be monitoring journalists.
Live interception outside scope
The MPs heard that the McCullough review is unable to investigate whether journalists were subject to live interception of their phone calls or text messages, leaving a "black hole" in the review.
"If journalists are being spied on on a daily basis, or phone calls are being listened to on a daily basis, the McCullough review can't tell us that," said McCaffrey.
He called on Jon Boutcher, the current chief constable of the PSNI, to cooperate with the review, to ensure that McCullough gets access to every file and every record and that there is no obfuscation or delay.
The MPs heard that it was Durham police, not the PSNI, that made the most important disclosures to the IPT about surveillance of journalists, including extracts from the PSNI's own files.
Séamus Dooley, assistant general secretary of the NUJ, said that the PSNI had engaged in a form of "judicial striptease".
"Every day you walked in [to the IPT], there was a new little piece [of information] presented. I am an experienced journalist, editor and court reporter, and I have never seen evidence presented in that sort of manner before," he said.
McCaffrey said that it was extremely difficult to trust the PSNI to be fully open with the McCullough review.
There had been "an incredible amount of delay, obfuscation and denial" by the PSNI, he said.
McCaffrey said that trust in the PSNI was being further undermined by a whispering campaign which, eight years later, still continues.
"When we were first arrested, someone within the PSNI leadership was briefing that anybody who supported us, whether it was the Irish government or political parties or trade unions, would be left with egg on their face," he said.
"This was the phrase that we kept on hearing again and again from different parties, different organisations."
The Belfast-based journalists told the committee that a public inquiry would be the only way to get to the bottom of what they say is a culture of contempt for journalists, lawyers, activists and institutions of state within the PSNI.
Any public inquiry must be far broader in scope, and look not only at the PSNI but also at the Met, because of its recent history of unlawful spying on BBC journalists, the MPs were told.
Séamus Dooley told the committee that the surveillance of journalists was having a chilling effect on press freedom, as journalists weren't able to assure their sources that they could protect them.
Dooley told the committee that the PSNI appeared to "think of journalists as the enemy, think that journalists are criminal, and that any activity which seeks to shine a light is automatically a crime".
He said it was the mindset which was the problem: "The word that kept coming back to me as I sat in the IPT was contempt, contempt for journalists, contempt for lawyers, contempt for due process."
PSNI says it will cost millions to delete unlawfully collected phone data
The Police Service of Northern Ireland has told journalists that it would cost over £5m to delete data it obtained from unlawfully monitoring their phones, a cross-party group of MPs heard today.
Journalists Trevor Birney and Barry McCaffrey told MPs on the Northern Ireland Affairs Committee that the PSNI had unlawfully captured data from McCaffrey's phone on multiple occasions.
PSNI officers also unlawfully seized computer equipment and phones from the journalists' homes and office.
Birney and McCaffrey were unlawfully arrested in 2018 after they produced a documentary film, No Stone Unturned, exposing police collusion in the paramilitary murder of six innocent Catholics.
The MPs were told that the arrests were part of a failed attempt to discredit the Police Ombudsman, by attempting to show that an employee had leaked a confidential report to the two journalists.
Birney told the MPs that material about No Stone Unturned occupied only 0.6% of the film production company's server, but the PSNI downloaded its entire contents, obtaining information on every investigation undertaken by the company.
"They were walking past desks taking notebooks relating to clerical abuse, saying that might be of interest, thank you very much," he said.
Police also searched through Birney's daughter's personal belongings, confiscating her pink iPhone.
The journalists reached an agreement with the PSNI to put checks and balances in place to prevent the data being accessed.
"The PSNI has historic systems," said Birney. "It can't be deleted because we understand it is stored on microfiche," he added.
"It's not acceptable that we have been told to wait ten years [for our data to be destroyed]," he said.
"This is a major issue for GDPR and data protection."
The incident raised concerns that other police forces in the UK might be in the same position as the PSNI in not being able to destroy unlawfully gathered data.
Read more about Barry McCaffrey and Trevor Birney's case against the PSNI
Over 40 journalists and lawyers submit evidence to PSNI surveillance inquiry.
Conservative MP adds to calls for public inquiry over PSNI police spying.
Tribunal criticises PSNI and Met Police for spying operation to identify journalists' sources.
Detective wrongly claimed journalists' solicitor attempted to buy gun, surveillance tribunal hears.
Ex-PSNI officer "deeply angered" by comments made by a former detective at a tribunal investigating allegations of unlawful surveillance against journalists.
Detective reported journalists' lawyers to regulator in unlawful PSNI surveillance case.
Lawyers and journalists seeking "payback" over police phone surveillance, claims former detective.
We need a judge-led inquiry into police spying on journalists and lawyers.
Former assistant chief constable Alan McQuillan claims the PSNI used a dedicated laptop to access the phone communications data of hundreds of lawyers and journalists.
Northern Irish police used covert powers to monitor over 300 journalists.
Police chief commissions independent review of surveillance against journalists and lawyers.
Police accessed phone records of "trouble-making" journalists.
BBC instructs lawyers over allegations of police surveillance of journalist.
The Policing Board of Northern Ireland has asked the Police Service of Northern Ireland to produce a public report on its use of covert surveillance powers against journalists and lawyers after it gave "utterly vague" answers.
PSNI chief constable Jon Boutcher has agreed to provide a report on police surveillance of journalists and lawyers to Northern Ireland's policing watchdog, but denies "industrial" use of surveillance powers.
Report reveals Northern Ireland police put up to 18 journalists and lawyers under surveillance.
Three police forces took part in surveillance operations between 2011 and 2018 to identify sources that leaked information to journalists Trevor Birney and Barry McCaffrey, the Investigatory Powers Tribunal hears.
Amnesty International and the Committee on the Administration of Justice have asked Northern Ireland's policing watchdog to open an inquiry into the Police Service of Northern Ireland's use of surveillance powers against journalists.
Britain's most secret court is to hear claims that UK authorities unlawfully targeted two journalists in a covert surveillance operation after they exposed the failure of police in Northern Ireland to investigate paramilitary killings.
The Police Service of Northern Ireland is unable to delete terabytes of unlawfully seized data taken from journalists who exposed police failings in the investigation of the Loughinisland sectarian murders.
The Investigatory Powers Tribunal has agreed to investigate complaints by Northern Ireland investigative journalists Trevor Birney and Barry McCaffrey that they were unlawfully placed under surveillance.
  • Google Cloud revenue soars as Alphabet continues to ride AI wave
    www.computerweekly.com
News
Google Cloud revenue soars as Alphabet continues to ride AI wave
Alphabet's fourth-quarter results revealed a 30% increase in revenue for the company's cloud arm, fuelled by demand for its artificial intelligence offerings
By Caroline Donnelly, Senior Editor, UK
Published: 05 Feb 2025 16:59

Alphabet CEO Sundar Pichai has committed to continue investing in the firm's cloud arm, as its fourth-quarter results confirmed Google Cloud's revenue increased by 30% to $12bn in the three months to 31 December 2024.
The year-on-year uptick in Google Cloud's quarterly revenue was led by growth in the company's core cloud infrastructure offerings, as well as growing demand for its artificial intelligence (AI) infrastructure and generative AI (GenAI) propositions, said Alphabet in its financial statement.
The company previously forecast that it would end 2024 with its combined Cloud and YouTube divisions on an annual revenue run rate of more than $100bn, but the performance of both has exceeded expectations, said Pichai.
"Our AI-powered Google Cloud portfolio is seeing stronger customer demand, and YouTube continues to be the leader in streaming watch-time and podcasts," said Pichai in a statement.
"Together, [Google] Cloud and YouTube exited 2024 at an annual revenue run rate of $110bn. We are confident of the opportunities ahead, and to accelerate our progress, we expect to invest approximately $75bn in capital expenditure in 2025."
On a conference call to discuss the company's financial results in more detail, Pichai said the company was rolling out products to the market faster than ever before, which is reflected in the growing product usage and revenue it is seeing.
"Our sophisticated global network of cloud regions and datacentres provides a powerful foundation for us and our customers, directly driving revenue," he said, in comments transcribed by Seeking Alpha.
"We have a unique advantage, because we develop every component of our technology stack, including hardware, compilers, models and products. This approach allows us to drive efficiencies at every level, from training and serving, to developer productivity."
The knock-on effect of this is that the company was able to build 11 new cloud regions and datacentre campuses in the US and the rest of the world in 2024, while making incremental improvements to the performance of the hardware inside these facilities over the same period.
"Google datacentres deliver nearly [four times] more computing power per unit of electricity compared to just five years ago. These efficiencies, coupled with the scalability, cost and performance we offer, are why organisations increasingly choose Google Cloud's platform," Pichai continued.
"In fact, today, cloud customers consume more than [eight times] the compute capacity for training and inferencing compared to 18 months ago. We'll continue to invest in our cloud business to ensure we can address the increase in customer demand."
On that point, Pichai said the number of first-time commitments it received from new Google Cloud customers in 2024 was more than double the number it got in 2023.
"We also deepened [our existing] customer relationships," he added.
"Last year, we closed several strategic deals over $1bn, and the number of deals over $250m doubled from the prior year."
Lee Sustar, principal analyst at IT market watcher Forrester, said the way Pichai flagged the performance of Google Cloud and YouTube in the results suggests he is trying to signal to the market and investors that Alphabet's fortunes are not solely dependent on the company's search advertising business.
"If Google Cloud were a separate company, its AI-driven earnings growth would simply please investors and confirm to customers that the company is a long-term and innovative player for the enterprise IT market," said Sustar.
"However, Google Cloud is part of a complex feedback loop to the rest of the Alphabet portfolio that also requires big investments in AI, such as the ad-driven Search business and YouTube.
"That's apparently why Alphabet CEO Sundar Pichai packaged Cloud and YouTube as a combined success story, citing an annual run rate of $110bn: a message to investors that Alphabet is less dependent on search advertising than in the past.
"Enterprise business customers looking for long-term bets on AI cloud services will have to take a closer look and assess the investment plans for Google Cloud and its competitors," added Sustar.
Read more cloud and AI news
Google Cloud's secure AI framework, which is integrated into its Vertex AI platform, offers practical tools and guidance to manage the lifecycle, data governance and operational risks of AI.
MPs launch inquiry into the use of artificial intelligence technologies in the financial services sector.
  • State of Open Con 25: Why public sector needs an open approach
    www.computerweekly.com
News
State of Open Con 25: Why public sector needs an open approach
Openness, open source and open data were among the topics discussed at OpenUK's State of Open Con 25 in London
By Cliff Saran, Managing Editor
Published: 05 Feb 2025 15:15

Earlier this week, people attending OpenUK's State of Open Con were presented with a vision of open government that has the potential to improve efficiency and transparency.
Given Channel 4's recent study that found young people are democratically disengaged and increasingly shifting towards authoritarianism, speakers in government and the civil service discussed the importance of open data to improve transparency in government.
Lord Nat Wei said he was not surprised that young people in the UK want to vote for dictators, who he said show them they can cut through what seems to be many of the annoyances people face in day-to-day life when dealing with the public sector.
"We need to go upstream, both in the way we execute current government functions and how we participate [with the people public policy affects]," he said.
An example of an open approach is the Caddy AI large language model (LLM) assistant, which came about through a collaboration between i.AI and Citizens Advice Stockport, Oldham, Rochdale and Trafford (Casort).
The implementation of Caddy at Citizens Advice uses a range of public resources from Gov.UK and Citizens Advice, as well as the latter's proprietary advice content.
It has been designed as a scalable framework, suitable for integration across government and into existing chat environments. From a technology implementation perspective, Caddy uses open source technologies for backend operations.
Opening up the UK government
Emily Middleton, director general for digital centre design at the Department for Science, Innovation and Technology (DSIT), used her opening remarks to discuss the benefits of joined-up government, which works when there is more openness. "Ministers have been clear that a modern digital government should have joined-up services that are easy to use," she said.
People shouldn't have to work out what benefits they should apply for, and they shouldn't have to remember what steps the government needs them to take, said Middleton. "It's still too difficult to collaborate and responsibly exchange data across public sector organisations." She added that the benefits of a joined-up government go far beyond improving the experience of using it.
One of the areas the government is set to make more open is procurement. Lindsay Maguire, deputy director of procurement reform at Government Commercial, described the procurement processes teams around the country follow as quite rigid, which she said creates problems for marketplaces. According to Maguire, open procurement is more flexible and offers greater levels of transparency, which benefits public sector purchasing.
Drawing on the success Ukraine has achieved in open procurement, she said: "Ukraine is a great case study on opening up transparency."
This requires open data and legislation for the procurement process. Without legislation, Middleton said: "We're not going to be able to encourage all of the procurement teams across the UK to provide us with their procurement data."
From 24 February 2025, open data standards will be embedded in public sector procurement legislation.
Maguire's team collaborated with the Open Contracting Partnership to develop the Open Contracting Data Standard.
Discussing the collaboration, Gavin Hayman, executive director at the Open Contracting Partnership, said: "We're really helping the government to understand whom they're buying from, how much and what kind of outcomes they're getting for their money."
Read more open government stories
Conservative peer urges government not to limit open source AI: The chair of the Lords Communications and Digital Select Committee calls for greater support for SMEs, competition and economic dynamism as artificial intelligence policies are developed.
The challenges of open source in government: Public sector bodies may find their policy decisions are stymied due to the inflexibility of the software they deploy. Is open source the answer?
Hayman believes this level of understanding needs to be part of the workflow of planning and delivering public contracts. "The most important stage is actually not the procuring stage, where you have an idea of what you want to buy, but in planning."
The process, according to Hayman, needs to start at the point a public sector organisation decides what it needs to do or the problem it needs to solve.
Maguire pointed out that there are over 1,000 procurement frameworks in the UK, which all cover what she described as slightly different sectors, with slightly different suppliers.
Publishing this data enables people working in public sector procurement to see where similar items are being purchased and to query differences in prices for the same product.
According to Maguire, the new open approach to procurement, which involves publishing data on what the public sector wants to buy, enables small businesses to engage in the process. "That's good for economic growth," she added.
Another benefit is that the data can be used to achieve commercial outcomes. For instance, when several departments appear to be buying very similar products, a procurement team can use the data to negotiate the best price. Maguire also believes an open procurement process can enable procurement teams to identify supply chain risks far more easily and root out potential corruption.
Government as an open system
During the opening keynote at the conference, Wei proposed that if the government could be treated like an open system, it would be possible to simulate government departments, enabling policymakers to trial alternative approaches. "As a technologist, investor and legislator, I've often asked myself, what if we treat our own government like a system?" he said.
Wei added that such an approach would allow citizens and civil servants to contribute ideas and code to improve these digital departments. This goes beyond code and includes broadening the diversity of ideas policymakers can draw on.
Treating the government like a system may seem a radical idea, but mapping workflows and processes so they can be managed through a system is fundamental to how most businesses operate. For the speakers at State of Open Con 25, it needs to be an open system with open data.
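Maguire's price-comparison use case is easy to see once contracting data is published to a common standard. A minimal sketch follows, assuming awards have already been parsed out of OCDS-style releases; the Award record and its fields are simplified stand-ins, not the full Open Contracting Data Standard schema.

```java
import java.util.DoubleSummaryStatistics;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Simplified stand-in for an OCDS release: one awarded purchase per record.
record Award(String buyer, String item, double amountGbp) {}

public class PriceSpread {
    public static void main(String[] args) {
        List<Award> awards = List.of(
                new Award("Dept A", "laptop", 620),
                new Award("Dept B", "laptop", 975),
                new Award("Dept C", "laptop", 640),
                new Award("Dept A", "office chair", 180));

        // Group comparable purchases and surface the price spread per item,
        // which is the signal a procurement team would use to query differences.
        Map<String, DoubleSummaryStatistics> byItem = awards.stream()
                .collect(Collectors.groupingBy(Award::item,
                        Collectors.summarizingDouble(Award::amountGbp)));

        byItem.forEach((item, stats) -> System.out.printf(
                "%s: min £%.0f, max £%.0f across %d purchases%n",
                item, stats.getMin(), stats.getMax(), stats.getCount()));
    }
}
```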
  • Crowd-sourced testing helps drive Webex accessibility
    www.computerweekly.com
Crowd-sourced testing helps drive Webex accessibility

Conferencing software Webex has a number of accessibility features built in, and Cisco has worked with Applause to test how well they work.

By Cliff Saran, Managing Editor
Published: 05 Feb 2025 16:16

Applause, which provides digital quality and crowd-sourced testing, has worked with Cisco to ensure ongoing accessibility assessments of the Webex Suite and to achieve consistent conformance with Web Content Accessibility Guidelines (WCAG) and Federal Communications Commission (FCC) standards for eight Webex products to date.

Cisco said it has tripled its team of accessibility champions, who ensure inclusivity is baked into the overall product development process, which has helped the company to reduce software development time and reduce bugs. "At Webex, we focus on people and improving their collaboration experiences. This focus fuels our innovation to remove the barriers of geography, language, personality and familiarity with technology," said Travis Isaacs, chief design officer at Webex by Cisco.

Applause runs an independent community of software testers. This, it said, offers a global perspective, with access to numerous devices, operating systems and platform configurations to reflect Cisco's broad user base. It also recruits testers with permanent disabilities, such as blindness, deafness, or mobility or cognitive differences, as well as temporary disabilities from injury or illness, and degenerative conditions.

"At Applause, we want to make sure apps, devices and experiences are not just functional and intuitive, but that they are also enjoyable and perform optimally for everyone, everywhere," said Bob Farrell, vice-president of solution delivery and accessibility at Applause. "Our ability to engage experts and end users in our community who can provide highly relevant, actionable feedback drives comprehensive quality and speed, with fewer and fewer bugs to address before launch. With a knowledge base and training to complement accessibility assessments, user experience research and testing, we've been able to create a programme that is thriving."

In July 2024, Computer Weekly spoke to Suleyman Gokyigit, CIO at Fire, a US organisation that defends the rights of free speech, who is completely blind, about the need to improve accessibility. Gokyigit said that artificial intelligence (AI) offers an opportunity to boost the user experience in software, which helps to improve accessibility. He believes AI's potential reaches beyond making software usable for people with disabilities. "The ability to have an actual conversation, or being able to control your computer by speaking to it, makes a lot of sense," he said.

In 2023, Cisco began working with Voiceitt, an AI-powered program for people with speech impediments. The Voiceitt technology learns the unique speech patterns of each user, which is then used in Webex to help ensure what people say is better understood by others. It also offers real-time captioning and transcription.

According to Webex's "Accessibility in the Webex app" document, the product offers a high-contrast mode, custom layouts and keyboard shortcuts for users who are visually impaired. Webex also supports the use of screen readers, with Cisco saying it is committed to the continuous expansion of Webex compatibility with screen readers. For users with hearing impairments, it provides closed captions, and Cisco enables Webex conference organisers to assign interpreters, including sign language interpreters, to a meeting.
Webex said the app enables users to customise a view for deaf and hard-of-hearing users to ensure the interpreter video is always visible.

Read more about digital employee experience

Redefining DEX: Is user sentiment a requirement?: DEX technologies focus on ensuring users have the tools and high-performing technology they need. But is it a good idea to rely on user sentiment as a universal metric?

How Toyota is transforming its digital employee experience: The traditional way of handling IT issues through helpdesk tickets generally delivers an unsatisfactory user experience - something Toyota is looking to rectify in its business this year.
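WCAG conformance testing of the kind described above combines expert review with mechanical checks. One such check, colour contrast, follows a published formula: relative luminance is computed from linearised sRGB channels, and WCAG 2.x level AA requires a contrast ratio of at least 4.5:1 for normal text. A minimal sketch of that calculation follows; the colour values are arbitrary examples, not Webex's actual palette.

```python
# Minimal sketch of the WCAG 2.x contrast-ratio check.
# The two colours below are arbitrary examples, not Webex values.

def _linearise(channel: float) -> float:
    # sRGB channel (0-1) to linear light, per the WCAG definition.
    return channel / 12.92 if channel <= 0.03928 else ((channel + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    r, g, b = (_linearise(c / 255) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

ratio = contrast_ratio((51, 51, 51), (255, 255, 255))  # dark grey text on white
print(f"{ratio:.2f}:1 - {'passes' if ratio >= 4.5 else 'fails'} WCAG AA for normal text")
```

Automated checks like this catch the measurable part of accessibility; the crowd-sourced testing described in the article covers what formulas cannot, such as whether a screen-reader flow actually makes sense to a blind user.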
  • Youth activists protest Meta over mental health impacts
    www.computerweekly.com
Youth activists have gathered outside Meta's London offices to protest against how the company allegedly exploits younger users for profit at the expense of their mental health, as part of a wider series of actions against the corporate and structural drivers of mental illness.

On 4 February 2025, around 20 youth activists with experience of online harms staged a protest outside Meta's Euston offices, where they unfurled banners reading "Meta profiting from our misery" and held a die-in to draw attention to the ways in which the social media giant's business model contributes to negative mental health outcomes.

Coordinated by Just Treatment, a patient-led health justice organisation, and led by young people aged 18 to 30 with direct lived experience of mental illness (including PTSD, anorexia, anxiety and depression), the action marks the launch of their Mad Youth Organise (MYO) campaign, which is intended to highlight the role corporate power plays in creating the conditions that young people are forced to live under.

During the protest outside Meta's offices, which forced its employees into using the back entrances, activists highlighted a number of issues to Computer Weekly, including problems around cyber bullying, content that promotes suicide or self-harm being served to vulnerable people, the various forms of dysmorphia caused by the constant barrage of unrealistic body images, and the general lack of safeguarding for young people. All of this, they said, is underpinned by addictive algorithms that are designed to keep people hooked on the platform and the harmful content it is feeding them.

They highlighted how, in the US, Meta is currently being sued in 41 states for allegedly building addictive features into its Instagram and Facebook platforms. Computer Weekly contacted Meta about the protest and the protestors' claims, but received no response by time of publication.

Speaking with Computer Weekly about how she has been personally affected by social media use from a young age, one of the MYO campaigners, Gigi El-Halaby, said it fuelled her descent into anorexia and depression, and ultimately led to a suicide attempt. "I used to spend hours on social media already as a vulnerable teenager, and it was hours of social comparison and really unrealistic beauty standards, and also a lot of harassment, cyber bullying and really harmful content that really fuelled my distress," she said, adding that in practice, social media is isolating people from one another and undermining their chances of forming the genuine community and connection that is needed to improve many young people's mental health.

"I know I'm not alone in this.
I know that millions of young people feel the same way, and there's no accountability at all from the social media firms that are actually driving this crisis."

To help alleviate the pressures placed on young people by Meta and other social media companies, the activists are calling on the UK government to force these companies to pay financial compensation that can be used to fund timely and appropriate mental health care for young people across Britain, where hundreds of thousands of children are stuck on long waiting lists to access vital services.

"I want social media platforms to stop pretending like they care about our mental health, and actually start trying to put their money where their mouth is, because the profit margins they hold are so monumental," said El-Halaby, who described the levy being proposed as a "social media tax" that can help young people get the support they need before they reach a crisis point.

However, their calls for financial compensation are not limited to tech firms, and also extend to a range of companies in other sectors they say are negatively shaping the conditions under which young people have to live, including property developers, fossil fuel giants and private healthcare providers.

The MYO campaign's week of action (which coincides with Children's Mental Health Week) will also see the activists target Priory Roehampton, a privately run mental health hospital, over the poor care NHS inpatients receive in comparison with privately paying patients; the Home Builders Federation, a trade association representing private sector homebuilders in England and Wales; and an undisclosed big oil corporation.

According to a manifesto published by the group, current conversations around mental health prioritise personal responsibility and self-management over collective action and system change, despite the fact that mental illness is largely a response to the "brutal, traumatising and destructive conditions created under capitalism".

Highlighting the collective social experiences of MYO participants and other young people, the manifesto adds that they have grown up under the suffocating weight of austerity; are trapped in a cycle of zero-hour contracts; have never had stable access to housing; and are ultimately the generation that will have to live with the consequences of climate change.

Regarding big tech specifically, it added that these firms profit from young people's suffering, fears and insecurities by using algorithms that push them deeper into patterns of consumption and mental dependency, creating an immense mental toll. "AI and algorithms are more than just tools to improve tech giants' services; they're a system of psychological manipulation designed to amplify our desires, keep us hooked, and, most importantly, keep us consuming," wrote MYO, noting that this model means young people are shown the most sensational and extreme content to keep them online and generating revenue for social media firms.
"That's why, to tackle the dangerous impacts of big tech companies, we must focus on breaking the monopoly power they wield."

The MYO campaign has already attracted support from Labour MP Nadia Whittome, who said it is "no wonder that one in five young people have a probable mental disorder when their lives are getting harder and harder across the board".

"The culprits of the youth mental health crisis are clear: years of austerity, combined with unaccountable corporations exploiting young people for profit, facilitated by government policymaking that focuses on the demands of businesses but fails to meet the needs of young people," she said. "From property developers to Big Oil companies, these corporations should be paying financial compensation for the harm they are doing. I will be working with my parliamentary colleagues to do this and take meaningful action to end the crisis our young people are facing."

Read more about online harms and social media

Schools go smartphone-free to address online harms: Schools are implementing smartphone-free policies in an attempt to curb students' exposure to online harms, but teachers and parents are worried the Online Safety Act will only partially address concerns.

EU law could usher in transformative change to digital ecosystems: The EU's Digital Fairness Act aims to end exploitative practices online and enhance consumer protection, but civil society groups say it must address the power and information asymmetries between users and tech firms.

Ofcom publishes Illegal Harms Codes of Practice: The codes of practice and guidance from Ofcom outline the steps online services providers can take to protect their users from illegal harms.

Speaking with Computer Weekly about the action outside Meta, Emma Hughes, head of organising and campaigns at Just Treatment, also highlighted that the harm being proliferated online by the largely unchecked business models of social media firms is "a monopoly issue of unaccountable power". On top of making them pay compensation for the harms caused, she said the campaign would like to see government regulation to break up these big tech monopolies and force them to publish their algorithms "so we can have a better understanding [of how they work]".

Pointing to the tech oligarchy that has rallied around US president Donald Trump in the wake of his election, Hughes added that it is an issue of power: "They have so much influence that unless you're actually tackling the business model and the structures that give them very opaque, unaccountable power, it's going to be very difficult to regulate them into better behaviour."

She said that while the UK government's Online Safety Act is helpful in some ways, it fundamentally does not go far enough in tackling the root of the problem, which is the business models of social media and tech firms.

Hughes concluded that while the web of corporate interests and economic policies making life difficult for young people is certainly powerful, strategic, determined interventions led by those most affected, such as the MYO campaign, can be effective in producing change. "It's not going to be an easy overnight win, but you can achieve changes with actually quite small resources," she said, highlighting a previous campaign that Just Treatment ran in collaboration with families and other grassroots organisations, which forced pharmaceutical firm Vertex to lower its cystic fibrosis drug prices after the firm initially tried to increase its charges to the NHS.

"We forced that corporation to drop their prices by shaming them and forcing governments
to look into it, so there are tools available to challenge corporate power," said Hughes.
  • MPs to scrutinise use of artificial intelligence in the finance sector
    www.computerweekly.com
The Parliamentary Treasury Committee has called on consumers, finance firms and IT suppliers to provide evidence to support an inquiry into the use of artificial intelligence (AI) technology in the finance sector.

AI is used widely in the finance sector, where automation is rife. This includes chatbots supporting customers, and even the use of AI in making trading decisions. Banks and other finance firms have the money, IT skills and business case to increase the use of AI. In fact, Bank of England figures recently revealed that 75% of finance firms are already using AI, with a further 10% planning to use it over the next three years.

But left to their own devices, banks will push the technology to breaking point, according to one senior IT professional in the UK finance sector, who said: "With AI, they have got their teeth into it, and they're thinking, 'We can automate loads of stuff and save a load of money with branches, head offices or staff' - until it goes wrong."

The Treasury Committee inquiry goes beyond banks and includes the wider finance sector, such as insurance and pensions. There is a balance to be struck, because AI has government support, but there are great risks associated with its use in the finance sector.

The committee of MPs may explore how AI is currently used by finance firms and what opportunities it brings for innovation, could consider the potential impact on employment in the sector, and could review how AI might jeopardise financial stability. It might also question whether there are increased cyber security risks.

Meg Hillier, Labour (Co-op) MP for Hackney South and Shoreditch, who chairs the committee, said that governments have been clear that they intend to support the increased use of AI in the economy. "My committee wants to understand what that will look like for the financial services sector and how the [finance sector] might change in the coming years as that transformation gathers pace," she said. "It's critically important the [finance sector] can capitalise on innovations in AI and continue to be a world leader in finance," she added, but warned of the risks associated with unfettered use of AI.
"We must also be mindful of ensuring there are adequate safeguards in place to mitigate the associated risks, particularly for customers."

The call for evidence, which is open until 17 March, asks how government and financial regulators can strike the right balance between seizing the opportunities of AI while protecting consumers and mitigating any threats to financial stability.

The Parliamentary Treasury Committee call for evidence seeks to know:

How is AI currently used in different sectors of financial services, and how is this likely to change over the next 10 years?

To what extent can AI improve productivity in financial services?

What are the risks to financial stability arising from AI, and how can they be mitigated?

What are the benefits and risks to consumers arising from AI, particularly for vulnerable consumers?

The human cost of increased take-up of AI includes huge job cuts as an increasing range of human roles are automated, resulting in job losses and less human contact for consumers in their everyday banking. A recent Bloomberg Intelligence report predicts 200,000 middle- and back-office jobs will be lost to AI as UK banks expand the use of technology such as AI to replace people and branches on the high street. There are also risks that automation can increase the likelihood of another financial crash, as automated systems accelerate trading.

The financial services regulator is working with stakeholders to help ensure that AI is taken up in a way that benefits the industry but negates the risks. During an international financial conference in Hong Kong, deputy governor for financial stability at the Bank of England Sarah Breeden said that regulation must stay ahead of AI take-up. To this end, the regulator plans a consortium through which private sector finance organisations and AI experts can provide knowledge on the technology's benefits to the sector and help manage risk.
  • MoD set to develop £50m data analytics platform with Kainos
    www.computerweekly.com
MoD set to develop £50m data analytics platform with Kainos

The Ministry of Defence has chosen IT services provider Kainos to develop its £50m data analytics platform across all armed services, over a three-year programme.

By Brian McKenna, Enterprise Applications Editor
Published: 05 Feb 2025 9:45

The Ministry of Defence (MoD) aims to develop its Defence Data Analytics Platform (DDAP) over the next three years, with IT services firm Kainos, in a contract worth £50m. Kainos will provide support for users across the MoD, including the Royal Navy, British Army, Royal Air Force and other MoD support organisations, to change over to a developed version of the platform.

The DDAP is billed by the defence ministry as a secure data and analytics system, launched by Defence Digital, which is part of the MoD's Strategic Command. Defence Digital's chief information officer is Charlie Forte, and it has a budget of £2bn and 2,400 personnel. The MoD launched the DDAP in January 2023, with an £8.625m contract awarded to Cognizant.

The DDAP's aim is to support the Data Strategy for Defence, as outlined under the prime ministership of Boris Johnson. The goals of that strategy included, by 2025, for data to be curated, integrated, and human- and machine-ready for exploitation in the battlespace, and for data to be treated as second in importance only to Army, Royal Navy and RAF personnel.

The platform is built on Amazon Web Services, and its stated aim is to democratise data access, standardise approaches and tooling, and encourage interoperability and sharing of best practice across the MoD.

"The MoD already holds excellent analytical minds and capabilities across defence," said an MoD spokesperson, quoted by Kainos. "The evolution of DDAP will encourage minds to further unite under one true enterprise system, to help meet the ambitions of the Data Strategy for Defence for standardised, assured and efficient data analytics, as well as accelerate the delivery of new technology capabilities, including AI innovation. Kainos brings the technical expertise to evolve the platform, as well as invaluable digital experience. As a partner, Kainos will help us create a data-first mindset and skills across UK defence."

Read more about technology strategy at the Ministry of Defence

MoD sets out strategy to develop military AI with private sector.

Ministry of Defence releases defence data management strategy.

Government announces data strategy for defence.

Kainos will provide platform management, maintenance and support for DDAP, and will also be responsible for data integration across the MoD. A goal of the programme is to limit the duplication of analytics initiatives across the organisation. On the people side, it has worked with the Defence Digital organisation to create, according to a Kainos statement, skilled data teams to design and roll out end-user applications on DDAP to support analytics and bespoke use cases. That activity continues under the new contract, and the vendor will liaise with end users on their specific data analytics requirements.

"It is a great privilege to expand our partnership with the MoD to drive forward an evolution of how data analytics is used across UK defence," said Brendan Mooney, CEO of Kainos.
"DDAP is a pioneering initiative which we will help to strengthen by introducing new approaches to data management, tooling and governance, and importantly by working closely with personnel across the MoD to understand their specific data needs. By continuously evolving the platform, DDAP remains at the forefront of data and AI analytics, maximising the value extracted from data assets, and supporting the overall digital vision of the MoD."

Kainos has already started work with the MoD under the new contract, which ends on 6 January 2028.
  • Digging into the CMA's provisional take on AWS and Microsoft's hold on UK cloud market
    www.computerweekly.com
Amazon Web Services (AWS) and Microsoft have not taken kindly to the UK competition watchdog's proposal to take a targeted approach to levelling the playing field for smaller providers operating in the UK cloud services market.

The published provisional findings from the Competition and Markets Authority's (CMA) ongoing investigation into how the UK cloud infrastructure services market operates describe the sector as a two-horse race, with AWS and Microsoft way ahead of the chasing pack. The suppliers are described by the CMA as having "significant unilateral market power", which is harming competition in the UK cloud infrastructure services market by making it harder for alternative cloud providers to gain and grow a footing in it.

To address this, it is being proposed that the CMA's board draws on powers given to it through the roll-out of the Digital Markets, Competition and Consumers (DMCC) Act 2024 on 1 January 2025 that could see it impose legally binding, pro-competition conduct requirements on both firms. This course of action would see AWS and Microsoft marked out as suppliers with strategic market status (SMS), a designation reserved for suppliers whose actions have the potential to tip a market in their favour because of the hold they have on it.

"We consider that measures aimed at AWS and Microsoft would address market-wide concerns by directly benefiting the majority of UK customers and producing wider, indirect effects by altering the competitive conditions for other providers," the CMA said in its provisional findings report.

AWS responded in a statement to Computer Weekly by describing the CMA's proposed targeted interventions as "unwarranted". Microsoft hit back by similarly stating the CMA was wrong to suggest this remedy, before seemingly referencing the government's recent calls for regulators to take a pro-growth approach to the work they do.
For context, the Microsoft statement references the CMA's criticism of Microsoft's decision to charge customers more for running its software, namely Windows Server and SQL Server, in its competitors' clouds.

"The draft report should be focused on paving the way for the UK's AI [artificial intelligence]-powered future, not fixating on legacy products launched in the last century," said Rima Alaily, corporate vice-president and deputy general counsel in the competition law group at Microsoft.

Microsoft's cloud licensing practices are under scrutiny from regulators across the world, not just the CMA, and are also the subject of a legal challenge in the UK. On the matter, the CMA said: "We have provisionally found that Microsoft has the ability and incentive to partially foreclose AWS and Google [from the market] using the relevant Microsoft software products and that its conduct is harming competition in cloud services."

With the CMA's provisional findings now out in the open, all participants in the UK cloud infrastructure services market have until 25 February 2025 to provide feedback on its initial conclusions, with the watchdog's final judgment set to drop by 4 August 2025.

The CMA's provisional conclusion that AWS and Microsoft have a dominant hold on the UK cloud infrastructure services market, and that targeted interventions might be needed to ensure competition in this sector works as it should, is the headline from its provisional findings. However, the CMA's January document dump of provisional findings also provides an insight into other features of how the cloud market functions that could have an adverse effect on competition (AEC). Not all of these, though, are deemed worthy of regulatory intervention.

The offering of committed spend discounts is a feature of how the cloud market functions that does influence customer choice, the CMA's seven-page provisional findings document stated, but rival firms can profitably compete against these, so no intervention will be required here.

The report also detailed several features of the market that could be subject to regulatory corrective action from the CMA once its investigation finally concludes. For example, the provisional findings document said there are significant barriers to entry and expansion in the cloud services market, due to the significant capital investment needed in fixed assets, such as datacentres and networking kit, to stand up a cloud infrastructure. And due to the economies of scale providers such as AWS and Microsoft operate at, the ongoing costs of running these fixed assets are lower for them than they would be for a smaller cloud provider.

"The largest cloud providers are making very large investments to expand their services in coming years, and while this investment can have pro-competitive effects and benefit cloud customers ... it may also deter market entry or expansion by potential rivals," the CMA summary document stated. "The broad product portfolios of AWS, Microsoft and Google in both IaaS [infrastructure as a service] and PaaS [platform as a service] are also likely to contribute to barriers to entry and expansion, as range of services is an important consideration for customers when selecting a cloud provider."
On a related point, the report stated that cloud customers face technical barriers and interoperability issues that can prove off-putting when trying to mix and match services from competing cloud providers to create a multicloud setup. This limits customers' ability and incentive to exercise choice of cloud provider, the report continued. The charging of egress fees, which effectively penalise customers for wanting to shift their data from one cloud provider to another, has a similarly negative effect on the willingness of organisations to switch suppliers.

"We consider that the AECs we have provisionally found may be expected to result in substantial customer detriment in cloud services in the UK, in terms of a material impact on customers' ability to switch, multicloud and exercise choice over their provider, which may ultimately be expected to impact the price and quality of cloud services," the report continued. "In the cloud services markets, we consider that detriment may manifest itself in terms of UK customers paying higher prices for these services than they would if the markets were more competitive."

To ensure competition in the cloud market operates as it should, the CMA board is being asked to consider taking targeted action against Microsoft and AWS, as permitted by the newly introduced DMCC Act. "We consider that measures aimed at AWS and Microsoft would address market-wide concerns by directly benefiting the majority of UK customers and producing wider indirect effects by altering the competitive conditions for the other providers," said the CMA.

The CMA's proposal to take targeted action against AWS and Microsoft has been warmly welcomed by tech industry watchers, including international competition law expert Niamh Christina Gleeson. In her view, this approach will mark AWS and Microsoft out as the only two providers in the market with the power to engage in anti-competitive behaviour, which will benefit the market's smaller providers from a competitive standpoint. "None of the proposed remedies will apply to any other providers, and this gives a certain commercial freedom to all other operators in the UK cloud markets," she said.

Conferring SMS status on AWS and Microsoft also means both firms will have a special responsibility placed on them, because of the dominant hold they have on the market, to behave in a pro-competitive way at all times. "Behaviour that is permitted by a firm with less market share is considered abuse [for an SMS company]," she said. "This is important for all future behaviour in these markets and how much freedom they have to engage in commercial practices that are permitted to other providers." As such, this could have implications for Microsoft and AWS when it comes to offering discounts to customers and charging egress fees, she added.

Mark Crane, competition partner at legal firm Addleshaw Goddard, said making it possible for AWS and Microsoft's alleged anti-competitive behaviour to be subject to targeted intervention should give their customers assurance about any concerns they have about how either firm treats them.
"Many businesses now routinely use cloud services, and the CMA's designation process is likely to provide an avenue for a range of concerns to be aired in a much more focused way, and to address particular behaviours," said Crane. "Users of cloud services are preparing for this process in earnest, considering how to engage with the new regulatory regime and how this might help to change the playing field in cloud services."

Crane added: "Against a backdrop of intense scrutiny of the CMA and its role in facilitating growth, both users and providers of cloud services will be advocating in favour of regulatory interventions which support their own approach. The CMA will certainly have a balancing act to undertake and face pressure to do so efficiently and at speed."

Nicky Stewart, senior advisor to public cloud competition champion the Open Cloud Coalition, said that once the consultation on the proposals closes, she hopes the CMA will waste no time in wrapping up its investigation and bringing its proposals to bear. "Every month that passes without action is another missed opportunity for innovation and UK economic growth," she said. "Every pound spent on restrictive licensing markups and egress fees is a pound not spent on growing the UK's economy."

Read more about the CMA cloud competition probe

After regulator Ofcom raised red flags about the anti-competitive behaviour of Amazon Web Services and Microsoft, the UK cloud market was referred to the Competition and Markets Authority - here's why.

The CMA has published the summary hearings from Microsoft, AWS and Google, revealing that all three had quite a lot to say on the Redmond software giant's cloud licensing practices.
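For a rough sense of why egress fees weigh so heavily on switching decisions, here is a hedged back-of-envelope calculation. The per-gigabyte rate and data volume are illustrative assumptions, not figures from the CMA's provisional findings or any provider's price list.

```python
# Illustrative only: the rate and volume below are assumptions,
# not figures from the CMA report or any provider's price list.

data_tb = 500                 # assumed data held with the incumbent provider
egress_per_gb = 0.07          # assumed egress charge in £ per GB
one_off_exit_cost = data_tb * 1024 * egress_per_gb

print(f"One-off egress cost to move {data_tb} TB: £{one_off_exit_cost:,.0f}")
# A one-off exit charge on this scale can dwarf the monthly saving a rival
# offers, which is how egress pricing dampens the incentive to switch.
```

Under these assumed numbers the exit bill runs to tens of thousands of pounds before a single workload has moved, which is the kind of "material impact on customers' ability to switch" the report describes.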
  • DSIT issues guidance to support public sector hosting of cloud workloads in overseas datacentres
    www.computerweekly.com
DSIT issues guidance to support public sector hosting of cloud workloads in overseas datacentres

The Department for Science, Innovation and Technology has issued guidance to support public sector organisations that want to host workloads and applications in overseas datacentres for cost and resilience reasons.

By Caroline Donnelly, Senior Editor, UK
Published: 05 Feb 2025 0:01

The Department for Science, Innovation and Technology (DSIT) has issued guidance to support public sector bodies that want to host their applications and workloads in overseas datacentres.

The government department's overseas data guidance features a recommendation for public sector organisations to adopt a multi-region approach for workload resilience reasons, while acknowledging that this might mean making use of cloud services hosted outside of the UK. "We recommend that organisations adopt a multi-region approach in which they make controlled, considered use of regions in a way which is compatible with UK law," the guidance document, seen by Computer Weekly, stated. "This guidance reinforces existing legislation and policy: this is not a change of policy."

Where the latter point is concerned, the guidance reiterated that government data classified at official level can be stored and processed in overseas datacentres and cloud regions when satisfactory legal, data protection and security practices are in place. It continued: "There is no universal requirement for government data classified as official to be physically located in the UK."

The guidance document stated that many public sector organisations are "already taking advantage of SaaS [software-as-a-service] products which are not exclusively UK hosted, operated and supported", and DSIT said the guidance is intended to support more of them in doing that. Limiting themselves to UK-only hosted services means public sector organisations might be missing out on better-priced and more technologically advanced services that are only available in certain geographies, the guidance stated.

"It can be prohibitive for smaller vendors to provide an entire capability within every geography worldwide because of the level of expense and complexity," it said. "[Furthermore] your disaster response requirements may mean the current distribution of public cloud regions in the UK is not sufficient to meet your recovery objectives, and so you may consider using an overseas region to meet your resilience requirements in certain scenarios."

In a statement announcing the guidance's release, DSIT said encouraging more of the public sector to entrust their data to overseas entities will boost competition and the resilience of their offerings, without compromising the UK's strict data and security protections. On this point, the guidance was created collaboratively, with input from DSIT, the Central Digital and Data Office, the National Cyber Security Centre, the Government Security Group and the Department for Business and Trade.

"This aims to boost competition, so the public sector can negotiate lower prices for their use of cloud tech," said the DSIT statement. "It will also make digital systems more resilient by spreading IT infrastructure used by critical services, such as the NHS and emergency response, across different regions."

According to artificial intelligence and digital government minister Feryal Clark, the guidance is intended to reverse the problem of crucial public services being held back by poor technology and outdated guidance that has left the public
sector trailing the private sector in innovation terms. "By embracing global innovation, we are making sure our public services have access to more tools to drive innovation and improve their services, while also building resilience and lowering costs as we look to put technology to work in the public sector," she said. "This guidance will help to ensure that security and compliance are not afterthoughts, but core elements of our digital transformation journey."

Read more government cloud stories

Microsoft's hold on government IT is under scrutiny, following a disclosure to a Scottish policing body that saw the software giant advise that it cannot guarantee data sovereignty in its cloud-based Microsoft 365 suite.

More than a year has passed since UKCloud went into liquidation - its former chairman outlines his concerns about the economic impact of the UK cloud market floundering in the face of the US hyperscalers.
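As a concrete illustration of the "controlled, considered" multi-region pattern the guidance recommends, here is a minimal sketch of reading from a UK-hosted replica first and falling back to an overseas region only when the primary is unavailable. The region names, bucket name and the choice of AWS's boto3 client are assumptions for illustration, not anything prescribed by DSIT.

```python
# Minimal sketch: prefer a UK-hosted replica, fall back to an assumed
# overseas region if the primary is unavailable. Region and bucket
# names are illustrative, not taken from the DSIT guidance.

import boto3
from botocore.exceptions import ClientError, EndpointConnectionError

REGIONS = ["eu-west-2", "eu-west-1"]  # UK (London) first, then Ireland
BUCKET = "example-public-sector-records"  # hypothetical replicated bucket

def fetch_record(key: str) -> bytes:
    last_error = None
    for region in REGIONS:
        s3 = boto3.client("s3", region_name=region)
        try:
            response = s3.get_object(Bucket=BUCKET, Key=key)
            return response["Body"].read()
        except (ClientError, EndpointConnectionError) as err:
            last_error = err  # this region failed; try the next one
    raise RuntimeError(f"all regions failed for {key}") from last_error
```

The ordering of the region list is the policy decision: UK-hosted copies serve normal traffic, and the overseas region exists purely to meet the recovery objectives the guidance describes.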
  • All change: Weighing up the options for enterprises as open source licences evolve
    www.computerweekly.com
Potentially heralding a fundamental shift in definitions of open source, HashiCorp moved to more restrictive licensing for its infrastructure-as-code (IaC) tool Terraform in 2023. The Cloud Native Computing Foundation (CNCF) has since cited more risk, and pressure to evaluate options, especially for single-supplier open source offerings.

However, Amanda Brock, chief executive officer at open source-championing non-profit OpenUK, argues that datacentres should be using more open source-related tech, not least because it could slash carbon emissions. "When you're able to use the specs openly and collaboratively to build something, you also can use the data that enables you to know when and how you can best use power," she says.

Although big companies have been accused of strip-mining open source, not giving back enough of a share of the revenue, Brock says it seems unlikely that licensing requirements might change to deter datacentres from open source. "I'm not going to say there's no risk [of this], but we've also seen one of the first companies to move to Elastic come full circle, having moved away from open source, then having been able to resolve things with AWS, who they were blaming," Brock says. "So, I think it's a smaller problem than the impression given."

The critical shift in 2024 might be the counter-move to forking: the result of the HashiCorp move was a fork to OpenTofu. That said, it has always been difficult for firms that have set up as open source to stay pure and true to that ideal, she adds. "If you're using a project that's got a number of big corporates using it, we're increasingly going to see anybody who risks shifting their licence hammered by forks. That can completely change the marketplace," she says.

Forking was once considered the nuclear option for avoiding a specific leadership direction on software. Taking the same code, developers create a branch in the repository, and the organisation essentially goes in two directions at that point, with maintenance, updates and similar tasks done in the individual projects. Sometimes this does not work, and the projects re-merge later on.

It is a lot of work, and a big deal. "In the 30 years or so ... of open source, there have only been something like half a dozen successful forks," Brock says, citing Amazon Web Services (AWS) and Elasticsearch, and Redis and Valkey. OpenUK's State of Open Con event in February may discuss related issues.

Peter Zaitsev, founder of open source database supplier Percona, broadly agrees. Some folks may just have to pay up, but that is often not how it happens for important open source projects: alternatives are being created. Even if suppliers do pull a fast one on projects critical for the open source ecosystem, with strong communities the fork option will simply become more popular in the next couple of years, he says, citing the Elastic drama, which ended with Elasticsearch being re-released under a more restricted open source licence. In the case of Red Hat Enterprise Linux, the move fed further development of enterprise Linux alternatives, he adds.

The likes of PostgreSQL may not have all of Oracle's features, but can still cover most organisational needs.
And for many users of WordPress or similar, whether it is actually open source likely does not matter, Zaitsev points out.

Colin Eberhardt, chief technology officer at software consultancy Scott Logic, is willing to bet that 70-95% of datacentre software is already open source-related, given its prevalence in standard enterprise software. "Even in investment banks, roughly 70% applies - and they're pretty careful about the code they run," Eberhardt says. "You write a small amount of code that sits on top of a largely open source stack these days, regardless of industry."

Cloud infrastructure, software and platform engagements may have a lot more code running, of course, but a regular colocation-type datacentre is likely simpler. Any resulting problems from licensing challenges can be resolved in multiple ways, not least because licence changes to free and permissive open source only roll forwards, he says. "Yes, they can then change the licence and say, from this point onwards, you're not free to use it, you must meet these conditions or pay this money," Eberhardt says. "But there are high-profile cases of forks occurring, including OpenTofu, because of arguments about licensing and who makes the money."

If it really is open source, organisations remain free to look after it themselves if they have the capability. And, at the same time, there are bigger risks open source use can expose organisations to because, as Eberhardt adds, with a lot of open source software, there are no future obligations. For instance, a poorly maintained open source project is an attack vector. At the same time, most of the supply chain attacks that have begun to multiply in recent years are not random, but targeted. "If I wanted to do an interesting attack, I'd look at something used in infrastructure projects that would get me into datacentres, banks and things like that, and take everybody down," says Eberhardt. Concerns about licences are not wrong, but relatively minor.

What should open source users be doing, then?

Eberhardt says organisations need to better understand their open source usage, especially if they rely on it. Is it run in a sustainable way? Is there a single-person dependency somewhere? Licensing is the easy box to check. "I have worked on projects where there was a framework that they picked that was a core component, and we looked at it and it was only maintained by one person. And I asked whether anyone knows who that person is," he says.

Consider popular Linux Foundation projects and subgroups, for example, and work out any related risks and how to mitigate them. Could the organisation maintain the setup if it fell apart? Are sections of code interchangeable?

For large organisations, requirements might be fairly rigid. Smaller firms might suffer more from developers making unilateral decisions on downloading things to patch something over, or the like. Part of the answer there is to ensure everything is properly and fully documented. "I'm amazed that people don't actually know what code they're using, where it's come from. So, that's definitely the first step - or, if you're releasing an enterprise application, understand what code you're actually using.
Learn a bit more about that code," he says. "If 90-odd percent of the code you're running was written by someone else and given to you for free, you need to invest some time into understanding the dynamics of that relationship."

Jad Jebara, co-founder, president and CEO of cloud-based datacentre infrastructure management (DCIM) company Hyperview, underlines that open source and open standards have been instrumental for innovation. So, it is not about ditching anything open source to avoid related risk. "For so much open source now, it's supported commercially, and there are reasons why, including that not everybody has staff to do all the techie stuff," he says. "Without open source, the internet as we know it, the infrastructure, the digital economy doesn't exist. So, now on the hardware level, with the Open Compute Project, you need scale to manufacture to the hardware standards, but it drives innovation, sustainability and the density in the datacentres - and that will never change."

Which is not to say the developer compensation model does not present problems. Going from open source towards for-profit changes the use model and makes it harder to understand the intricacies of data security, data residency and financial remuneration, Jebara says. Initiatives such as the Apache and Linux foundations that drive cloud-native do good work, but it is not enough from a licensing perspective, or for cyber security and vulnerability management. User organisations must ensure they know what is inside the tech they use: is anything end of life, or end of service or support, for example?

Therefore, depending on the business model, more stringent licensing can be a net benefit to the user, especially when certain assurances are part of a more commercial package. "It's really about the allocating of your resources. And not every datacentre is created equal," Jebara says. "But most of it is not going anywhere, including from the datacentres."

Read more about open source

The Linux Foundation's decision to ban Russian maintainers has the potential to destroy open source's global collaboration model.

As open source matures, the Cloud Native Computing Foundation is grappling with issues ranging from licence rug-pulling and the rise of artificial intelligence to the changing dynamics of open source contributions.
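A first step towards Eberhardt's "know what code you're using" advice can be as simple as enumerating installed dependencies and their declared licences. Here is a minimal sketch for a Python environment; it reads only the metadata that packages ship with, which can be missing or inaccurate, so a dedicated software bill of materials (SBOM) tool would go further.

```python
# Minimal sketch: list installed Python packages and their declared
# licences from package metadata. Real audits would use a dedicated
# SBOM tool; metadata can be absent or out of date.

from importlib.metadata import distributions

for dist in sorted(distributions(), key=lambda d: (d.metadata["Name"] or "").lower()):
    name = dist.metadata["Name"] or "unknown"
    licence = dist.metadata["License"] or "UNKNOWN"
    print(f"{name:30} {dist.version:12} {licence}")
```

Even a crude inventory like this surfaces the questions the article raises: which dependencies carry restrictive or unexpected licences, and which projects would need a closer look at who actually maintains them.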
  • Quantum datacentre deployments: How they are supporting evolving compute projects
    www.computerweekly.com
Put lots of qubits together and you have quantum computing, requiring datacentres that can support it. A quantum datacentre, though, is not just a building with a quantum computer in it. And there remain questions about what quantum computers should look like and how they should connect in the datacentre context, says Andrew Lord, senior manager of optical and quantum networks research at BT Group.

A datacentre in essence delivers connectivity, often connecting multiple people or customers with racks, compute equipment and the rest. Adding quantum compute as part of an overall compute resource that answers questions should help with certain challenges, although plenty of uncertainty remains.

Also focused on the long-term challenges of quantum datacentres and their connection at BT is specialist optical and quantum research professional Emilio Hugues-Salas. Solving the hard physics around qubit-based quantum computer language versus classical computing's zeros and ones could take five to 10 years, even though the work is moving quite fast.

A qubit is the quantum version of a binary bit of zeros and ones in classical computing. It is a two-state, or two-level, quantum mechanical system. When measured, electrons can have apparent spin up and spin down, with certain probabilities attached.

Looking at definitions, at the requirements and architecture of a quantum datacentre, suggests "an architecture that enables secure access to quantum compute, where you not only have the traditional GPUs but quantum processing units you can use depending on requirement and application", Hugues-Salas says.

"But it's like back when we didn't know what the internet was for," adds Lord. "Honestly, a lot is about making quantum available just to see what people will do with it - anything from finding new drugs to modelling basic molecules or photosynthesis." In such energy-hungry areas, just one major breakthrough could prove the value and be worth the effort, because the work is too costly with current technology. "For example, I'd like to minimise energy consumption in the BT network, but it's just too hard and will take too long. Then, by the time I've done it, things have changed," says Lord. "And you might only need to access a quantum computer for a few crucial seconds to do it."

BT also works on quantum networking and computing, including anything quantum that might overlap with BT interests, such as trialling quantum interconnect into regular Equinix datacentres in London. The latter is a near-commercial-grade project that is mainly about secure linking. "Customers can then put their data onto the cloud securely, do operations and connect multiple of their own customers via the cloud," Lord says.

When it comes to quantum computing in general, Owen Rogers, senior research director of cloud computing at Uptime Intelligence, has an analogy: imagine you have a plastic combination padlock, but to unlock the padlock without the code, you do not want to have to try every possible combination. "Let's say that on the combination rings, there's a tiny bit of metal that happens to be on the correct number. Conceivably, you might simply wave a magnet across the padlock to correctly interlock and unlock the correct numbers," Rogers says. "Quantum computing is really a way of solving things in parallel, using the uncertainty of what particles do."

Remaining challenges, though, are multi-faceted.
Obviously, quantum algorithms require special skills, and then there are technical and engineering problems. For instance, the more qubits you have, the more susceptible you are to noise. A sensitive individual particle must be kept in a state where you can control it. That means a cooling requirement, even cryogenic cooling. "You have to remove as much interference as possible," says Rogers. "And the datacentre has to have those abilities."

Quantum computing research today is costly, with only a small chance of success, but the payoff if quantum computing can be made to work might be astronomical - for instance, if a team can quickly solve something that would have been impossible before. "However, we might reach a certain level of qubits and then the interference is exponentially worse, and we just can't increase them, for example," says Rogers.

In the UK, multiple projects in development include five new quantum research hubs announced in July 2024. Among these is Heriot-Watt University's Integrated Quantum Networks (IQN) hub. The idea is that a quantum internet linking quantum computers could deliver massive compute, leveraging quantum entanglement and memory. Another is the industry-partnered QCI3 hub at Oxford University. QCI3 researches interconnected and integrated quantum computing implementations, eyeing an estimated potential $1.3tn market for quantum machine learning and neural networks by 2030.

Dominik Andrzejczuk, CEO at QDC.ai, which has investments in two Oxford quantum hardware companies - trapped-ion technology-based Oxford Ionics and full-stack photonics-focused Orca Computing - confirms that the engineering challenges are taking time to solve. That said, ion-trap architectures are good at controlling very high-quality qubits with the same CMOS fabrication techniques as superconducting qubits, with Andrzejczuk adding: "That means that potentially they could scale easily."

His background in physics drew him to quantum, he tells Computer Weekly, but there is a schism in the quantum sector between scientific computing, leaning into operations research from first principles, and Silicon Valley AI's approach. With artificial intelligence (AI), you take a bunch of random data and a machine figures out the function. Its strength is also its weakness, because you need so much hardware to work fast. However, as has become apparent with certain large language models (LLMs), this does not scale well, with Andrzejczuk pointing out: "OpenAI is burning billions of dollars every single year."

A scientific computing approach starts from the other end, with a physicist or mathematician examining the dataset, then developing the function to fit onto that data, and then indicating the function, parameters and constraints to the machine. Related operations research can be highly specific to use cases in areas such as logistics, with myriad variables and constraints that are harder for machine learning. "One perfect use case is airlines and transport. If you're delayed or cancelled, you have to call somebody to rebook your ticket. The magnitude of data for an ML algorithm to solve that is astronomical," Andrzejczuk says.

Representing an optimisation problem in a classical computer can be simple, with integer values. But in a quantum context, quadratic unconstrained binary optimisation (QUBO) means your variables have to be binary, rather than integers. "Think of 600 trucks as some sort of sequence of ones and zeros," Andrzejczuk says. "Then we need an extraction layer.
We need to convert a problem that is, let's say, semantically written in plain English into some sort of binary code. Those tools just don't exist right now." Further investment in the billions of dollars is still needed to push it forward, but if it works, everybody wins, Andrzejczuk adds.

Jerry Chow, IBM fellow and director of quantum systems and runtime technology, agrees that it is early days, but says today's systems are more than physics experiments, and progress is being made with deployable computational tools and quantum-centric supercomputing: "Right now [at IBM], we are exclusively putting out systems of 100 qubits or more. And we're in a world that's starved of compute."

IBM operates 14 utility-scale quantum systems, including quantum datacentres in Poughkeepsie, New York, and Ehningen, Germany, as well as dedicated systems colocated with its clients. IBM's Quantum Network comprises about 250 enterprises, research institutions, startups, universities and industry leaders, including 80 in Europe. Its quantum roadmap factors in the time predicted to solve remaining challenges, out to 2033. "The point here is that quantum does certain workloads very differently," says Chow. "We see that Quantum Network as how we'll find and use these tools for quantum advantage."

Multiple solutions will be combined, including QPUs, GPUs and CPUs. Hosting at Poughkeepsie has comprised several to double-digit numbers of quantum computers, varying by the processor used. "At what we call utility scale, certainly there are quantum circuits beyond exact simulation with CPU or GPU resources. The next best ways of handling some of these circuits, in fact, are with maybe some tensor network methods or some kinds of approximate computing methods that leverage high-performance nodes," says Chow.

Performance depends on qubit numbers or scale, on speed, and on quality - the error-rate accuracy of execution of quantum circuits. Users can try 127-qubit systems free; IBM offers 10 minutes a month of Quantum Platform execution time, with systems, documentation and learning resources. IBM is hoping thereby to foster scientific and even business-related demos delivering speed, accuracy or cost-effectiveness, not to mention the ecosystem development in train, from the domain-specific Qiskit Functions service to third-party middleware-type integrations.

Read more about quantum computing

A 4,000m2 facility based at the Harwell Science and Innovation Campus is to house a new national quantum facility equipped with 12 quantum computers.

Five university hubs are receiving funding to support the development of quantum applications that can support healthcare and businesses.
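To make the QUBO encoding Andrzejczuk describes concrete, here is a minimal sketch. A QUBO asks for the binary vector x minimising x^T Q x; the two-variable "truck" problem and its weights below are invented for illustration, and tiny instances like this are brute-forced classically, which is how such encodings are usually sanity-checked before going anywhere near quantum hardware.

```python
# Minimal sketch: a two-variable QUBO, brute-forced classically.
# The problem and weights are invented; real instances are far larger.

import itertools
import numpy as np

# Dispatch at most one of two trucks: each truck saves cost when used
# (negative diagonal), but using both incurs a penalty (positive
# off-diagonal), so the optimum picks exactly one.
Q = np.array([[-2.0, 3.0],
              [ 0.0, -1.5]])  # objective: x^T Q x over binary x

best = min(
    (np.array(bits) for bits in itertools.product([0, 1], repeat=2)),
    key=lambda x: float(x @ Q @ x),
)
print("best assignment:", best, "energy:", float(best @ Q @ best))
```

The "extraction layer" in the quote is exactly this translation step: recasting a plainly stated business constraint ("use at most one truck") as diagonal and off-diagonal weights over binary variables.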
  • IT supplier helps address overwhelming volumes of data from space
    www.computerweekly.com
CGI continues to build on its 12-year relationship with the European Space Agency (ESA), which is helping improve scientific access to data about Earth sent from space.

As part of a Serco-led consortium delivering ESA's Advanced Data Access and Processing Services for Collaborative Earth Observation Ground Segment (Ascend) project, CGI will provide engineering and cyber security services. But its work on the Multi-Mission Algorithm and Analysis Platform (MAAP) within Ascend, which provides scientists with access to NASA and ESA Earth biomass data, is just part of CGI's work in the space industry.

Supporting ESA, which is partly funded by the UK Space Agency, accounts for about half of CGI's Earth observation work, and involves processing satellite imagery. Its 400 UK-based staff who are focused on the space sector also support satellite communications companies from their operations in Leatherhead and Bristol.

Jaime Reed, vice-president of consulting services, space data platforms and applications at CGI, told Computer Weekly the supplier's work with ESA is predominantly environmental monitoring, such as processing meteorological data. It is involved in controlling satellites, receiving and processing data from them, and providing the software onboard.

Reed has a background in physics, with a doctorate in atmospheric physics that looked at how the atmosphere works and how it is measured. He then worked as an engineer developing cryogenic coolers for satellites, before moving to Airbus, where he worked on satellites. "My background was in the space industry, designing and building satellites, and working on the data that comes from them," said Reed. He now applies his knowledge to CGI's satellite and data processing work. "Most weather forecasts are highly dependent on data taken by satellites, which was what I was working on," said Reed.

Reed said that while CGI is a big IT company doing all the things a normal IT company does, it has specific expertise in the space industry. Other CGI customers using space data include the Met Office, but most are relatively small organisations, he added.

Read more about IT in space

The European Space Agency collects data from millions of miles away. It must get storage right when there's no chance of a U-turn to take more photos or re-record data.

European and UK space agencies are partnering with mobile operators and universities to develop the connectivity needed for connected and autonomous vehicles.

Cloud and database service provider teams with leading satellite constellation to enable high-speed connectivity.

Just over half of CGI's space business is in satellite communications, and many of the big satellite companies are customers. "We write the software that controls the satellites, and we provide the managed IT services around that. It's all datacentres, really.
"The control segment of the satellite is generally all computer programs that are doing orbit predictions, moving data around, receiving files from the satellites and processing those files. Everything comes into datacentres and is processed and sent to scientists."

The first big deal between ESA and CGI was in 2012, with CGI maintaining the software that processes the data for ESA's environmental monitoring and research satellites. Reed said the volume of data being received and processed has increased massively since then, because ESA is building many more satellites and they are more complicated, returning more data.

It is this dramatic increase in data that has led to CGI's latest project with ESA, working on the Ascend programme, which Reed said is a starting point for looking at new ways of providing data to scientists.

"The amount of data is becoming overwhelming and it is challenging to make the data available to users," said Reed. "It used to be the case that researchers could just go to an FTP site, download some of the data and work on it at their workstation, but now you're talking a terabyte for just one file. Everything is now moving to the cloud, using scalable, cloud-based environments."

A key aspect of the work today is enabling a collaborative virtual research environment where, rather than downloading files, researchers take their expertise to where the data is. "They don't have to transfer terabytes of files themselves. They bring their algorithms and then they can share their environments within that research environment with other researchers," said Reed.

Collaboration is essential in the Earth observation field, with initiatives within ESA programmes shared with NASA, for example. "We can bring the UK-based and Europe-based researchers to collaborate globally with cutting-edge scientists around the world," he said.
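To make the shift Reed describes concrete, the cloud-native pattern of bringing algorithms to the data typically starts with a catalogue query rather than a bulk download. The sketch below uses the open source pystac-client library against a hypothetical STAC catalogue; the endpoint and collection name are invented for illustration and are not MAAP's actual interface.

```python
# Minimal sketch of cloud-native Earth observation access: query a STAC
# catalogue for metadata, then touch only the assets needed, instead of
# bulk FTP downloads. Endpoint and collection are illustrative assumptions.
from pystac_client import Client

catalog = Client.open("https://example.org/stac/v1")  # hypothetical STAC API

search = catalog.search(
    collections=["biomass-l2"],        # hypothetical collection name
    bbox=[-3.5, 50.0, 1.5, 54.0],      # area of interest (lon/lat)
    datetime="2024-01-01/2024-12-31",
    max_items=10,
)

for item in search.items():
    # Each item carries metadata plus URLs to cloud-hosted assets;
    # processing can then run next to the data rather than locally.
    print(item.id, list(item.assets))
```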
  • Can AI identify financially vulnerable people better than humans?
    www.computerweekly.com
News: Can AI identify financially vulnerable people better than humans? Research from customer experience firm Nice finds that AI can identify financially vulnerable people better than humans, and can offer more comfortable channels for financial problem solving. By Brian McKenna, Enterprise Applications Editor. Published: 04 Feb 2025 13:28

Some 35 million adults in the UK are financially vulnerable, according to Financial Conduct Authority (FCA) criteria, and 37% of financially vulnerable consumers want greater investment in AI-powered chatbots to help them with financial problems.

This is according to a recent report from customer experience (CX) firm Nice, The changing face of vulnerability, which found a rise in the number of British people who identify themselves as vulnerable, to 19%: an increase of more than one million people on the first study in the series. Assessed against FCA criteria, however, two-thirds of UK adults (35 million people) are potentially vulnerable, often without being aware of it.

The research was undertaken by market research firm FocalData among 2,042 UK adults in November 2023, then repeated among 2,021 in November 2024.

The FCA defines a vulnerable consumer as somebody who, due to their personal circumstances, is especially susceptible to harm, particularly when a company is not acting with appropriate levels of care. It identifies four factors that may increase the risk of vulnerability: poor health; experiencing a distressing life event, such as a bereavement; low resilience; and low capability, such as financial, digital or language skills, or learning difficulties.

Vulnerable consumers are increasingly reliant on digital channels for support, according to the Nice report. Over a third (37%) said they prefer organisations to invest in better digital services, such as AI-powered chatbots, over traditional in-person services such as physical branches, surpassing the general population's demand for digital services (33%).

Richard Bassett, vice-president of digital and analytics at Nice, said: "The findings pose a considerable challenge for UK organisations, particularly given regulations like the FCA's Consumer Duty or Ofgem's Vulnerability Strategy. Vulnerability stems from an increasing range of factors, from financial pressures to personal challenges, making it harder to recognise, even for the individuals themselves.

"Subtle cues, such as mentions of stress or relationship breakdowns, often surface during customer service interactions but are easily missed or affected by bias, particularly with human agents. AI and automation provide a critical solution. By analysing customer service data, AI can detect vulnerability during every interaction and provide agents with real-time guidance, ensuring no one is overlooked.

"The anonymity offered by digital channels can be especially empowering for vulnerable individuals who may feel uncomfortable discussing sensitive issues face-to-face. This presents a significant opportunity for UK organisations to leverage AI-powered chatbots and virtual agents to help vulnerable customers resolve their issues quickly and accurately.

"However, caution is essential. These solutions must be able to detect subtle vulnerability cues and respond appropriately, seamlessly escalating to a human agent or the correct workflow with full context preserved."
"These insights should be used alongside data from voice channels to enhance agent training and support," he added.

It may seem counter-intuitive that AI could spot signs of vulnerability that a human agent in a contact centre is likely to miss. In a briefing with Computer Weekly, Bassett said: "There are certain situations or circumstances where people do want the confidence of a human. If you have a bot that's diagnosing you, that might be something of a challenge to accept compared with a doctor.

"Having said that, if you've got financial difficulties, there are a lot of people who are embarrassed on the back of that, even ashamed of it. They haven't got the confidence to admit that to somebody in the voice world, but on a chat, or even with a bot, they are okay to have those conversations."

According to Bassett, customers, especially younger ones, feel more comfortable disclosing details of financial worries to a chatbot than to a live human.

The study found that younger adults, especially those under 34, are at the forefront of being self-aware, with 31% identifying as vulnerable compared with 19% across all age groups. They are also more comfortable discussing mental health with customer service agents.

Darren Rushworth, president of Nice, said: "The increasing self-awareness among younger consumers is a promising step toward more open communication. However, organisations cannot depend solely on self-identification, as it overlooks those who are unaware of their vulnerability or choose to hide it out of fear, embarrassment or shame.

"Even when warning signs are flagged, they are often missed if advisers lack the confidence or tools to respond effectively. Worse yet, even when advisers take appropriate action, these cases can still fall through the cracks if workflows and knowledge across customer service are not properly connected."

Financial pressures, particularly rising energy and utility costs, continue to weigh heavily on UK households. Some 35% of potentially vulnerable consumers anticipate reducing or stopping heating and hot water usage in 2025 due to financial strain, according to the study.

In addition to financial concerns (21%), many consumers feel uncomfortable discussing other causes of vulnerability, such as mental health (34%) and relationship breakdowns (28%), with human customer service agents.

Rushworth added: "UK organisations, especially energy providers, must adopt AI-powered solutions that subtly build customer confidence, such as self-service to help consumers easily find critical information when in need. AI-powered guidance during interactions ensures agents provide accurate, empathetic support in real time. Automation can ensure compliance, and that events involving vulnerability are appropriately routed into the correct processes."
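As an illustration of the kind of real-time cue detection Bassett describes, consider the toy sketch below. It is not Nice's product: a production system would use trained language models and an FCA-aligned taxonomy rather than a hand-written keyword list, and every name in it is invented for the example.

```python
# Toy sketch: flag possible vulnerability cues in a chat message.
# Illustrative only; real systems use trained NLP models, not keyword lists.
CUES = {
    "financial": ["can't pay", "missed payment", "debt", "arrears"],
    "life_event": ["bereavement", "divorce", "lost my job"],
    "health": ["stress", "anxiety", "illness"],
}

def flag_cues(message: str) -> list[str]:
    """Return the cue categories whose keywords appear in the message."""
    text = message.lower()
    return [cat for cat, words in CUES.items()
            if any(w in text for w in words)]

# Categories like these could drive real-time guidance for an agent,
# or trigger escalation to a human with full context preserved.
print(flag_cues("Since the divorce I've missed payment after payment"))
# -> ['financial', 'life_event']
```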
  • Unsafe At Any Speed: Comparing automobiles to code risk
    www.computerweekly.com
Outgoing CISA chief Jen Easterly recently compared secure software development to automotive safety, arguing that we are at an inflection point similar to 1965, when Ralph Nader published the book Unsafe At Any Speed. The book spurred public outrage over road safety, which helped foster the widespread adoption of innovative vehicle safety measures.

After reading Easterly's posts on the topic, the open question remains: are we really at a point where we can use outrage against insecure software risk to drive meaningful change? Let's see if we can view this question objectively and determine what needs to happen in 2025 alongside continued public pressure. For CISOs and IT software buyers, beyond your day-to-day improvements, demanding that the software you purchase is secure, and that your vendors pledge to follow secure-by-design principles, what should you focus on in 2025 to help move the needle?

The comparison between automobile risk in the 1960s and software risk in the modern era is evident. Rapid technology innovation has resulted in unsafe products being used daily. The software world, much like the automotive industry in the 60s, prioritises speed of release, features and functionality, and pushing the boundaries and limits of safe usage, all in the name of selling more products and innovating new offerings faster than competitors. Software developers face continued pressure to release products as quickly as possible, often at the expense of the security of the code. Security is perceived as slowing down the development cycle and is often a bolt-on after the fact.

To quote the great Charlie Munger: "Show me the incentive, and I'll show you the outcome." Software developers don't write secure code because they have no incentive to do so. To make matters worse, the companies they work for have very little incentive to focus on the security of their products either. IT and CISO buyers have been purchasing insecure code for as long as code has existed.

The automobile industry had a similar problem: cars purchased up to that point were not bought because they were safe. People weren't going to automobile dealerships and asking questions about a vehicle's safety rating, or whether it had a collapsible steering column and a padded dashboard. They purchased vehicles based on the look, the style, the top speed and acceleration, and, most importantly, the joy they received while operating the new mode of transportation. Safety was not a required feature and was thus an afterthought. This is almost identical to how we have built and purchased software up to today. We prioritise and purchase based on the value we get from the software, with little to no interest in how secure it is.

There is no incentive to prioritise security when that's not what buyers demand. The automobile industry required consumer recognition of the problem, resulting in an outcry for better safety standards, before manufacturers would spend time on product safety features.

The software industry hasn't learned from the automobile industry's past for a few key reasons. First of all, when you get into a car accident, there is a nontrivial chance of loss of life. People die when there are no safety features in their cars, and even a small number of deaths is unacceptable to the public. The consequences of a motor vehicle accident were immediate and visceral, and left a lasting impression on those who were in one or saw one. Software is different.
When the software on your TV breaks, for example, you just reboot it. Until recently, in the worst case, the vast majority of software flaws resulted in the compromise of some anonymous corporate entity with little to no bearing on the populace. Sure, there might be a small chance that someone's accounts were compromised, money stolen directly from them, or fraud perpetrated against them, but most of the consumer world believes "that won't happen to me", and by the law of large numbers, they are probably right. And if it does happen, they are insured, covered for loss, and, most of the time, only suffer from having to jump through hoops to regain what they've lost.

Because of this laissez-faire attitude toward the risks, it is much easier for the software industry to ignore the problem, writing it off as a cost of doing business. In other words, there is no demand for change.

In addition to the difference in risk between automobile and software vulnerabilities, the complexity of the software landscape dwarfs that of the automobile industry of the 1960s. If we could quickly implement four to six software processes and fix the entire global software risk register, I promise we would. The problem is far more complex and challenging to fix than anything the fewer than 10 large auto manufacturers of the 1960s had to figure out. If only 10 software development firms existed today, it would be easier to mandate change.

However, software is literally in everything we touch. From IoT devices to home appliances to children's toys, software has eaten the world, making securing that software a much more difficult task. Automobile builders had to make a few changes to their products and were ready to sell. They didn't have to change the entire manufacturing world for every product available to the public to move towards safety. Here is where the comparison falls short.

This incredible level of complexity raises the question of who is responsible for fixing the problem. In May 2024, there was a major push for software vendors to sign the Secure by Design (SbD) pledge. Currently, over 250 companies have committed to following secure-by-design principles and ensuring that their software is created with high security standards at every step of the development process.

I love the Secure by Design pledge, but 250 companies is a drop in the bucket; according to CyberDB, there are over 3,500 cyber security companies alone, and these are just the companies working to secure our daily lives. Those 250 signatures are a mere blip compared with the number of companies in the United States: some research claims there were over 33 million businesses in the US alone in 2024, the bulk of them small businesses.

The difficult problem is getting to the tipping point required for businesses across the US, and the world, to realise that the risk is too high and to demand change from their software vendors. Research from the University of Pennsylvania's Annenberg School for Communication and the School of Engineering and Applied Science shows that approximately 25% of a population is required to hit the tipping point for large-scale social change. We aren't even close.

What we should be thinking about isn't how we fix code security problems better or faster, but how we get to the tipping point where the incentive structure changes and the software of the world begins to fix itself.
If we think of it this way, we quickly see that change will only come when a groundswell of buyer demand and government-mandated regulations arrives.

Given the negative outlook and tempered expectations I've presented, you probably regret reading this article. I'd love for you to leave with the opposite idea in mind, and perhaps approach 2025 as the year in which the software industry can move closer to the tipping point for building software more securely.

Similar to the issues discussed in Unsafe At Any Speed, companies that write software of any kind will continue to push back on ownership of the problem and attempt to deflect or ignore responsibility for any health, safety and security issues that appear. As software is used increasingly in life-and-death situations, such as healthcare, automotive and emergency communications, business and buyer demand for less risk will increase.

If we are loud enough, at some point software liability will switch to those who build the product, and when it does, the incentive structures will change and companies will pay much more attention to the risks to their own business. Sadly, I don't think we'll ever get to the point where businesses care enough about code security to prioritise it just because it's the right thing to do for their customers. Instead, to achieve our goals, we have to make it imperative to the health and success of their business to write more secure code, and the only way to do this is to be VERY LOUD and DEMAND CHANGE.

The Security Think Tank on secure software: Aditya K Sood, Aryaka: Vigilant buyers are the best recipe for accountable suppliers.

So, what can you do as a CISO or IT software buyer in 2025 to help move the needle and grow toward the secure code tipping point? First and foremost, we need each of you to be educated on the risks of software flaws and to help articulate these issues to the developers of the software you purchase or license. Users and developers must be more aware of the importance of security and the potential consequences of software vulnerabilities, both to the company that sells the software and to the people who use it.

Second, and likely more critical, you must push your government representatives and agencies to become more educated on the topic and to build stronger regulations and standards for secure software creation. The automobile would never have become safer if government agencies hadn't stepped in and put regulations and standards in place demanding that motor vehicles have at least a minimum level of safety. We have to have regulations in the software world that place the tipping point within reach, and it's up to the buyers and users of software to push the government on this front. Liability must shift back to the builder, which only happens when the government gets involved. It'll take an army, but if we scream and yell loudly enough over time, purchase only software that is written securely, and make a significant enough level of noise, we can continue to march slowly toward improvement.

Just as Unsafe At Any Speed was a wake-up call for the automotive industry, growing awareness of software security issues and of the impact of vulnerabilities on human safety and health is building pressure for a similar reckoning in the software world.
We must keep moving toward a Secure At All Speeds software development world. I don't think we'll see the tipping point hit in 2025, but each of us must approach this change with a uniform rallying cry to build the volume required to be heard at the highest levels. Thank you, Jen Easterly and the CISA team, for the groundwork you've done towards this movement. I hope 2025 is the year we work together to take daily steps toward success.
  • The need for strong data engineering skills
    www.computerweekly.com
CW+ Premium Content / Computer Weekly, 4 February 2025: The need for strong data engineering skills

In this week's Computer Weekly, our latest buyer's guide examines best practice in data engineering and the importance of data skills. Labour announced its first digital government strategy, but will it be more successful than the years of failed plans that came before? We also look back at the networking challenges that faced Orange during last year's Paris Olympics. Read the issue now.

Features in this issue:
Forrester: Why digitisation needs strong data engineering skills, by Zeid Khater. How do enterprises become adaptive? If you can't measure it responsively, you can't manage it as an adaptive enterprise.
Labour's first digital government strategy: Is it déjà vu or something new?, by Bryan Glick. Labour won the 2024 general election on a platform of change, and its technology cheerleader insists the new digital government strategy is all about change. Have we heard it all before, or is this time really different?
  • Addressing the legacy: Modernising creaky cloud infrastructures for data benefits
    www.computerweekly.com
Despite rising demand for data-driven insights, a high share of mission-critical IT is either approaching or already at end of life. One survey of 3,200 executives from infrastructure services provider Kyndryl found the proportion could be as high as 44%. The same polling suggested many organisations that have invested in data-infrastructure modernisation are not yet seeing a return on investment (ROI) from what can prove to be a costly exercise.

Of course, for major projects of any kind it usually pays to think through priorities in advance, says Paul Henninger, partner and head of technology and data at KPMG UK. An obvious first step is to discuss and decide on key business outcomes, to work out what the organisation seeks to achieve in future. After all, that is what IT is for, and if it does not assist that, any money spent is likely down the drain.

"Look at desired business outcomes and then decide how to proceed," says Henninger. "What do you really need to fix, and how should you ensure you are ready for AI [artificial intelligence] in particular? Identify use cases and specific objectives. Data activities with value drive a specific business outcome; successful modernisation starts with business outcomes and works backwards from there. That's true regardless of technologies and technicalities," he says.

Petra Goude, global practice leader, core enterprise and cloud at Kyndryl, shares this view, while advising enterprises to triage their situation and prioritise critical changes. "Don't make it all or nothing," she says. "If we fail to meet an ROI, we fail. It doesn't matter what we achieve if it's too expensive. Therefore, focus on business outcome."

Goude notes that many organisations have regretted going all in on tech modernisations, such as when making a binary choice on cloud versus on-premise that ultimately blew budgets. Kyndryl's survey also found technology outpacing training, with about 40% of surveyed leaders reporting skills gaps that hinder modernisation. "If you're not ready, you say modernisation can solve this, but if you don't have the future skills, it doesn't matter what you do," says Goude.

Seth Ravin, CEO, president and chair at enterprise software and support services provider Rimini Street, adds that a lack of enterprise architects, data scientists or integrators, rather than programmers, can also prove restrictive. "It's tough to structure data in big data sets without understanding how that data is connected and structured, really understanding how to get the most out of data," Ravin says. "We need people who can tie programs together using integration tool sets. When people see layoffs, only about half are typically about cost-cutting; the other half is rotation for needed skills, moving people out and bringing people in with new skill sets," he adds.

Once an enterprise has agreed, defined and described the relevant business outcomes, it should then ask what data will be needed to achieve them, and how to collect, manage and control it. This way it is possible to minimise what could otherwise become an overwhelming or expensive volume of data to store, analyse and maintain. "Data modernisation for data modernisation's sake can have you in one of those hype cycles," Henninger adds.

Often the goal is a 360-degree view of the customer, yet organisations may fail to examine this data problem end-to-end.
Instead, many simply add ERP, CRM or other IT solutions. For example, an organisation might find it cannot answer a seemingly simple question about current employee numbers, because when you talk to different functional stakeholders, the concept of "employee" varies. "The number of employees for payroll purposes can be different from the number of employees for legal reasons, or the number when it comes to holiday pay," Henninger adds.

Enterprises do not want to be in a position where they are trying to answer six different questions and fudge an average answer among them. That means ending up with a data set and a complicated, expensive data infrastructure built for everybody and useful to no one. "That happens over and over again," Henninger says.

Modernising data infrastructure is crucial partly because of the role that trust and security now play around data use in general. "Partly, the artificial intelligence challenge makes it quite a lot easier to access and interrogate data sets, including potentially people you don't want," Henninger says. "But on some level, the degree to which data was disorganised and trapped in documents was a natural form of security in the past." Previously, even if someone got into the network, they would still have to read the documents, but with AI this is much easier for everyone, including malicious actors.

The Kyndryl polling also reported that 65% of executives worry about cyber attacks, but just 30% say they feel ready to manage that risk.

Organisations must be able to use their data confidently and measure the value of doing so, including identifying and setting appropriate metrics. "Then when you can measure it properly, you can quantify progress or triage further intervention successfully," adds Henninger.

Once an enterprise knows what data it needs, who controls it and how it is maintained over time, it can start to work out the infrastructure needed for the necessary analyses. Goude prescribes thinking about it as "the right workload in the right platform": revisit each application and decide what you want it to do, whether that is speeding up, reducing cost, or something else. Some applications might not even need to be maintained.

A heavily transactional system in a bank, for instance, might skyrocket costs without adding value. In that case, it can make sense to decouple the data from the transaction, perhaps moving the data elsewhere. That might in turn offer different capabilities for cloud analytics or AI. "You might enhance applications without completely redoing them. Or you might reinvent business processes," Goude says. "If you take one approach on everything, you likely won't optimise."

Henninger says that, beyond a vanishingly small number of compute-intense analytical problems, technology questions for the infrastructure side of data modernisation have largely become software questions. It is more about business intelligence (BI) than AI and advanced analytics, more about management reporting and the resources needed, and less about how data is stored or queried, streamed or in tables; the task is creating the right controls and incentives to actively manage the datasets.

Problems arise even after getting the data right, because there is drift, or the data degrades, or the person managing it leaves. "Then data is unreliable and the whole system breaks down, and the organisation goes back into silos," says Henninger.
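As a toy illustration of Henninger's employee-count point above, the sketch below shows how two legitimate definitions yield two different headcounts from the same records. The names, columns and rules are all invented for the example.

```python
# Toy data, invented for illustration: whether someone is an "employee"
# depends on which stakeholder's definition you apply.
import pandas as pd

staff = pd.DataFrame({
    "name":     ["Ana", "Ben", "Chi"],
    "contract": ["permanent", "permanent", "agency"],
    "on_payroll_this_month": [True, True, True],
})

# Payroll definition: anyone paid this month.
payroll_count = int(staff["on_payroll_this_month"].sum())    # 3

# Legal definition: anyone on a permanent contract.
legal_count = int((staff["contract"] == "permanent").sum())  # 2

print(payroll_count, legal_count)  # two honest answers to one question
```

Unless the definitions are reconciled up front, a modernised data platform simply serves conflicting answers faster, which is the trap Henninger warns against.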
Modern data infrastructure resembles lots of other things, he says: it is likely cloud-based, and "98% of what compute is needed for decision-making is in the realm of reasonably available commodity hardware".

Ravin says it is also important to retain some budget for innovation and not spend it all upgrading multiple software packages. On this point, consider all the software and its true useful life, then start making decisions on investment in automation and productivity versus upgrades or migrations. "Software vendors may say the ERP has to be changed up every three to five years, but that's a work project for everybody," Ravin says. Individual usage analysis might reveal it is good for much longer.

A rule of thumb is to spend no more than 60-70% of the annual budget on operations, leaving 30-40% for innovation. "Otherwise, you're dead," Ravin adds. "Costs are up. You can't sell for more because of competition, and the place that gets squeezed is profits." Gartner has estimated that perhaps 90% of budgets go on keeping the lights on, with just 10% on modernisation or innovation, he says, adding that this is similar for resource-strapped SMEs: "SMEs tend to have fewer software products, but their needs are still pretty extensive, especially if working outside their home country. The cost of admin is getting higher."

He suggests reconsidering the need to be in the cloud at all, especially without bursty, elastic demand, and particularly with equipment costs for on-premise increasingly leveraged. "We've seen all these companies that were cloud-first finding out that they're saving millions of dollars bringing it back," he says. "Cloud is not always the answer."

In Kyndryl's survey, 76% of businesses reported investing in AI and machine learning, but only 42% had so far seen positive ROI. Yet benefits are there to be had: Kyndryl sees potential for automated resolution of up to 30% of IT issues, up from 8%, saving massively on maintenance and downtime.

Data-infrastructure modernisation of an ageing estate requires carefully examining every investment choice through the lens of ROI-driving business outcomes. "You could easily spend enough money to actually have just about the perfect data set, but that could be incredibly expensive," Henninger says. The data infrastructure might be worth it, but only if it solves the right problem.

Read more about legacy infrastructure modernisation projects:
At AWS Re:Invent 2024, CEO Matt Garman shared details of how its GenAI technologies are helping enterprises accelerate the pace of Microsoft and VMware datacentre migrations.
VMware is not going away anytime soon. While some IT leaders may be feeling the pain of Broadcom's changes, they still need to seek a long-term plan.