


Computer Weekly is the leading technology magazine and website for IT professionals in the UK, Europe and Asia-Pacific
Recent Updates
-
Gmail bubble encryption may be an S/MIME killer, says Google (www.computerweekly.com)

Google is this week unveiling an enhanced client-side encryption (CSE) standard across its widely used Gmail service (which marks its 21st birthday on 1 April) that it hopes may render the long-in-the-tooth Secure/Multipurpose Internet Mail Extensions (S/MIME) standard for end-to-end encrypted (E2EE) email obsolete once and for all.

S/MIME is used for public-key encryption and signing of MIME data, and was originally developed by RSA many years ago. Today, although S/MIME functionality is widely used, it is not always enabled by default for most email services, and it only works when both sending and receiving parties meet the standard.

This is because both IT teams need to acquire and manage the needed certificates and deploy them to each user, added to which users then have to figure out whether they and the recipient have S/MIME set up, and then exchange certificates, before they can exchange encrypted emails. And while alternatives such as built-in features from email providers or point solutions exist, they suffer from similar drawbacks.

To Google's mind, this limits the use of E2EE to organisations that have significant IT resources to call on and strong use cases for sending encrypted mail, and even then they can frequently only do so using workarounds that create fragmented, limited and sub-optimal experiences for everyone involved.

"When you talk to any IT admins, they'll tell you a few things about encryption," said Neil Kumaran, group product manager for Gmail security at Google. "First, they will probably tell you that for some subset of their data, they need to be fully encrypted in some way, usually because of regulatory obligation and maybe because of contractual obligation.

"The second thing they'll tell you is that the current state of encryption is super hard to implement across the email ecosystem. And even if they implement some of these solutions outside of ideal use cases, there are usually holes in their encryption posture. The TLDR is this is widely felt across our customer base."

Google said its solution to this effectively democratises encryption while requiring minimal effort from both IT teams and users, abstracting away the old headaches associated with encryption while enhancing data control, privacy and sovereignty.

Google's solution is a new encryption model that it said removes the need for complex certificate requirements or complex admin rights, and enables users to send fully encrypted messages to any user on any platform.

"The idea is that we are creating sort of a protective bubble for emails that feels automatic, to the point that it just feels like normal email," Julian Duplant, Gmail security product manager, told Computer Weekly.
"We've created a service that makes the organisations that use this functionality become the total gatekeeper for that data."

With the new bubble technology, Google said it is first putting control of the certificates, or keys, needed to encrypt or decrypt messages into the hands of its customers, relinquishing its own ability to access the messages for good. Second, it is giving them control of the user directory that decides who has access to the keys. Third, it has created a new guest functionality through which customers can automatically generate temporary accounts in their organisation for external recipients to access and decrypt the message, subject to the customer's rules.

"What that looks like as a functionality is, if you're sending to a recipient that has Gmail, whether it's Workspace or consumer, they're going to be able to automatically decrypt that message based on the organisation's rules. [But] if the organisation is any other email provider in the world, they're going to receive an email notification saying Julian has sent you an encrypted message, click here to read it," said Duplant.

When the user clicks that message, the browser will open and they will see a safe Gmail interface where they can decrypt the message and write their own reply.

"The best part about it is we're doing this in a way that doesn't require S/MIME. All that certificate exchange that would have happened before no longer has to be done. It feels automatic, and it gives customers the ability to have their own sort of safe space and control of that data."

It is also important to note that when the recipient has S/MIME configured, Gmail will still send the email via S/MIME, as it already does.

Google believes this approach offers a more comprehensive encryption solution for its customers, which has the beneficial side effect of reducing friction and lowering the barrier to doing cyber security effectively.

Another side effect of this approach to client-side encryption, said Google, is that in making its customers the ultimate arbiters of who can access their email data, it can help them safeguard themselves against, for example, unwarranted intrusions by governments demanding the service provider hand over the data. Google said this will hopefully heighten customer compliance with data sovereignty regulations, export controls and other requirements, such as HIPAA in the US.

The new technology is available today in beta for organisations using Gmail internally, but in the coming weeks users will be able to send E2EE emails to any Gmail inbox, and to any email inbox later in the year. More information is available from Google, and organisations can sign up for the beta programme.

Read more about encryption:
- European telco standards body ETSI launches its first post-quantum cryptography cyber standard, covering the security of critical data and communications.
- The NCSC urges service providers, large organisations and critical sectors to start thinking today about how they will migrate to post-quantum cryptography over the next decade.
- Russia is using phishing attacks to compromise encrypted Signal Messenger services used by targets in Ukraine. Experts warn that other encrypted app users are at risk.
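To make the model described above more concrete, the sketch below shows the general shape of client-side encryption with organisation-held keys: each message is encrypted in the sender's client with a fresh data key, and that data key is wrapped by a key service the customer runs, so the mail provider only ever stores ciphertext it cannot read. This is a minimal illustrative sketch of the general pattern, not Google's actual CSE API; the OrgKeyService class and its methods are hypothetical stand-ins for whatever key-management endpoint an organisation operates.

```python
# Illustrative sketch of client-side encryption with an organisation-held
# key service (hypothetical OrgKeyService; not Google's real API).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM


class OrgKeyService:
    """Hypothetical key service run by the customer, not the mail provider.

    It wraps (encrypts) per-message data keys with a master key that never
    leaves the organisation, so the provider only ever sees ciphertext.
    """

    def __init__(self) -> None:
        self._master = AESGCM.generate_key(bit_length=256)

    def wrap(self, data_key: bytes) -> bytes:
        nonce = os.urandom(12)
        return nonce + AESGCM(self._master).encrypt(nonce, data_key, None)

    def unwrap(self, wrapped: bytes) -> bytes:
        nonce, ciphertext = wrapped[:12], wrapped[12:]
        return AESGCM(self._master).decrypt(nonce, ciphertext, None)


def encrypt_message(kms: OrgKeyService, plaintext: bytes) -> dict:
    # The client generates a fresh data key per message and encrypts locally;
    # only the wrapped key and ciphertext are handed to the mail service.
    data_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(data_key).encrypt(nonce, plaintext, None)
    return {"wrapped_key": kms.wrap(data_key), "nonce": nonce, "ciphertext": ciphertext}


def decrypt_message(kms: OrgKeyService, envelope: dict) -> bytes:
    # A recipient authorised by the organisation's directory asks the key
    # service to unwrap the data key, then decrypts in their own client.
    data_key = kms.unwrap(envelope["wrapped_key"])
    return AESGCM(data_key).decrypt(envelope["nonce"], envelope["ciphertext"], None)


if __name__ == "__main__":
    kms = OrgKeyService()
    sealed = encrypt_message(kms, b"Quarterly figures attached.")
    print(decrypt_message(kms, sealed))  # b'Quarterly figures attached.'
```

The practical point of the pattern is that access decisions, whether admitting a guest recipient or refusing a demand for data, happen at the organisation's key service rather than at the email provider, which is the property Google says the bubble model gives its customers.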
-
Nvidia tackles graphics processing unit hogging (www.computerweekly.com)

People may try to lock up GPU resources even if they don't need them all day, but not any more, thanks to Nvidia KAI Scheduler.

By Cliff Saran, Managing Editor. Published: 01 Apr 2025 16:00

Nvidia has made its KAI Scheduler, a Kubernetes-native graphics processing unit (GPU) scheduling tool, available as open source under the Apache 2.0 licence.

KAI Scheduler, which is part of the Nvidia Run:ai platform, is designed to manage artificial intelligence (AI) workloads on GPUs and central processing units (CPUs). According to Nvidia, KAI is able to manage fluctuating GPU demands and reduce wait times for compute access. It also offers resource guarantees for GPU allocation.

The GitHub repository for KAI Scheduler says it supports the entire AI lifecycle, from small, interactive jobs that require minimal resources to large training and inference, all in the same cluster. Nvidia said it ensures optimal resource allocation while maintaining resource fairness between the different applications that require access to GPUs.

The tool allows administrators of Kubernetes clusters to dynamically allocate GPU resources to workloads, and it can run alongside other schedulers installed on a Kubernetes cluster.

"You might need only one GPU for interactive work (for example, for data exploration) and then suddenly require several GPUs for distributed training or multiple experiments," Ronen Dar, vice-president of software systems at Nvidia, and Ekin Karabulut, an Nvidia data scientist, wrote in a blog post. "Traditional schedulers struggle with such variability."

They said the KAI Scheduler continuously recalculates fair-share values, and adjusts quotas and limits in real time, automatically matching current workload demands. According to Dar and Karabulut, this dynamic approach helps ensure efficient GPU allocation without constant manual intervention from administrators.

They also said that for machine learning engineers, the scheduler reduces wait times by combining what they call gang scheduling, GPU sharing and a hierarchical queuing system that enables users to submit batches of jobs. "The jobs are launched as soon as resources are available and in alignment with priorities and fairness," Dar and Karabulut wrote.

Read more stories about GPUs:
- AMD teams up with Rapt AI to boost GPU performance for AI: AMD is partnering with Rapt AI to focus on workload management and performance optimisation when running AI models on AMD's Instinct GPUs.
- GPU scarcity shifts focus to GPUaaS: High GPU costs and scarcity drive users to GPUaaS for AI workloads. But businesses should assess needs before investing.

To optimise for fluctuating demand for GPU and CPU resources, Dar and Karabulut said KAI Scheduler uses what Nvidia calls bin packing and consolidation. They said this maximises compute utilisation by combating resource fragmentation, which it achieves by packing smaller tasks into partially used GPUs and CPUs.

Dar and Karabulut said it also addresses node fragmentation by reallocating tasks across nodes. The other technique used in KAI Scheduler is spreading workloads across nodes, or across GPUs and CPUs, to minimise the per-node load and maximise resource availability per workload.

Nvidia said KAI Scheduler also addresses behaviour that arises when shared clusters are deployed.
According to Dar and Karabulut, some researchers secure more GPUs than necessary early in the day to ensure availability throughout. This practice, they said, can lead to under-utilised resources, even when other teams still have unused quotas.

Nvidia said KAI Scheduler addresses this by enforcing resource guarantees. "This approach prevents resource hogging and promotes overall cluster efficiency," Dar and Karabulut added.

KAI Scheduler also provides what Nvidia calls a built-in podgrouper that automatically detects and connects with tools and frameworks such as Kubeflow, Ray, Argo and the Training Operator, which it said reduces configuration complexity and helps to speed up development.
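For a rough sense of how a cluster user would target an alternative scheduler such as this one, the Python sketch below uses the official Kubernetes client to submit a pod that requests one GPU and names a non-default scheduler. The scheduler name and queue label shown are assumptions made for illustration; the exact values a given installation expects come from the KAI Scheduler deployment and its documentation, not from this article.

```python
# Sketch: submitting a GPU pod to a named (non-default) scheduler using the
# official Kubernetes Python client. The scheduler name and queue label are
# illustrative assumptions, not confirmed KAI Scheduler values.
from kubernetes import client, config


def submit_gpu_pod(name: str, image: str, gpus: int = 1) -> None:
    config.load_kube_config()  # or config.load_incluster_config() inside a cluster
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(
            name=name,
            # Hypothetical queue label; real deployments define their own key.
            labels={"queue": "team-a"},
        ),
        spec=client.V1PodSpec(
            scheduler_name="kai-scheduler",  # assumed name of the installed scheduler
            restart_policy="Never",
            containers=[
                client.V1Container(
                    name="trainer",
                    image=image,
                    resources=client.V1ResourceRequirements(
                        limits={"nvidia.com/gpu": str(gpus)},
                    ),
                )
            ],
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)


if __name__ == "__main__":
    submit_gpu_pod("interactive-exploration", "nvcr.io/nvidia/pytorch:24.01-py3", gpus=1)
```

Because the pod simply names the scheduler it wants, the same manifest pattern works whether placement is decided by the cluster's default scheduler or by an add-on scheduler, which is what allows a tool like this to run alongside other schedulers on the same cluster, as described above.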
-
Post Office Capture and Ecco+ users asked to make contact with Scottish statutory body (www.computerweekly.com)

The Scottish Criminal Cases Review Commission (SCCRC) is attempting to contact any former subpostmasters who could have been prosecuted for unexplained losses on the Post Office's pre-Horizon Capture software. There are former subpostmasters who, like Horizon users, could have been convicted of crimes based on data from these systems.

Since the Post Office Horizon scandal hit the mainstream in January 2024, revealing to a wide audience the suffering experienced by subpostmasters who were blamed for errors in the Horizon accounting system, users of Post Office software that predated Horizon have come forward, supported by campaigning peer Kevan Jones, to tell their stories, which echoed those of victims of the Horizon scandal.

The Criminal Cases Review Commission for England and Wales is now reviewing 21 cases of potential wrongful conviction, put forward by law firm Hudgell Solicitors, where the Capture IT system could be a factor.

Capture was a PC-based application developed by the Post Office and uploaded onto a personal computer to carry out branch accounts. The software was a standalone system, unlike Horizon, which is a complex, networked system connected to centralised services.

The SCCRC is now calling on people who might have been convicted based on Capture accounts to come forward. "The commission encourages anyone who believes that their criminal conviction, or that of a relative, might have been affected by the Capture system to make contact with it," it said.

The statutory body is also investigating a third Post Office system, known as Ecco+, which was also error-prone. It was thought this system was only used in Crown branches (directly managed by the Post Office) and Crown branches that were taken over by subpostmasters, but Computer Weekly has discovered that Ecco+ could actually be bought by subpostmasters for use in their branches.

"We are currently investigating possible miscarriages of justice relating to problems with various computer systems used in Post Office branches in the 1990s (Capture, Ecco+)," the SCCRC said.

In May 2024, the Scottish Parliament announced its own legislation to exonerate subpostmasters with convictions based on evidence from the Horizon system. This followed a similar law introduced for England and Wales in March last year that saw over 700 former subpostmasters exonerated.

A total of 64 former subpostmasters in Scotland have now had their convictions overturned through the legislation brought through the Scottish Parliament. So far, 97 convicted subpostmasters have come forward and 86 have been assessed, out of which 64 have been overturned; 22 have been rejected and another 11 are still to be assessed.

An independent group, fronted by a former Scottish subpostmaster, is also calling on users of any of the Post Office systems to come forward to tell their stories, and for support in seeking justice and redress. The Scottish Postmasters for Justice and Redress, as the group is known, will officially launch tomorrow at the Scottish Parliament.
It was set up by Rab Thomson, a former subpostmaster of a branch near Alloa, who had a wrongful theft conviction overturned last year.

The group has the support of former Scottish National Party (SNP) MP Marion Fellows, who was chair of the All-Party Post Office Parliamentary Group, and Calum Greenhow, the current CEO of the National Federation of Subpostmasters.
-
Scottish support group for Post Office scandal victims launched (www.computerweekly.com)
-
Interview: Ray McCann, Loan Charge independent review lead (www.computerweekly.com)

The government set out plans in the Autumn Budget 2024 to commission another independent review of the Loan Charge policy that, in its words, will help bring the matter to a close for those affected whilst ensuring fairness for all taxpayers. This description was seized on by contractors in scope of the policy as a positive sign.

It's not hard to see why. The policy is a mechanism for HM Revenue & Customs (HMRC) and HM Treasury to recoup tax that government estimates suggest around 50,000 contractors avoided paying by enrolling in loan-based remuneration schemes between 9 December 2010 and 6 April 2019.

Computer Weekly has heard and published numerous accounts from IT contractors who participated in these schemes and have been saddled with life-changing tax bills they claim to have no hope or means of paying since the policy came into effect in April 2019.

When the government publicly committed to taking actions to help bring the matter to a close for those affected, there was an expectation among some of those affected that this might result in the Loan Charge being repealed and their tax bills cancelled.

That notion was firmly put to bed on 23 January 2025, when the government issued confirmation that the review had been commissioned, and that repealing the policy in its totality was not what it meant by wanting to bring the matter to a close. Instead, the government said the review would focus on investigating the factors stopping people from settling their Loan Charge liabilities with HMRC, and on finding ways to help them do so.

It also confirmed that HM Treasury had appointed former HMRC assistant director Ray McCann to oversee it. He has also previously served as president of the Chartered Institute of Taxation and has been in private practice for almost 20 years.

"The reviewer [McCann] is being asked to draw on the available evidence and expertise, engaging with stakeholders as appropriate, to consider in detail the settlement terms available [to those] who have not yet settled and paid their tax liabilities in full to HMRC, and whether HMRC's settlement and debt management processes sufficiently take into account their ability to pay and behaviours," said the government statement.

"[It will also look into] how that population could now be encouraged to reach a resolution with HMRC; and what decisions would be required to ensure that, as far as possible, any new settlement proposals were properly targeted whilst not imposing significant additional administrative burdens upon HMRC."

Once the information about what the review would entail entered the public domain, a wave of criticism was directed at the government from those affected by the policy, with many accusing the government of offering false hope with its promise the review would bring the Loan Charge matter to a close for them. Campaign groups have also claimed the review is too narrow in scope, given its focus on what can be done to encourage people to settle their Loan Charge liabilities, rather than examining the reasons why tens of thousands of people joined loan schemes in the first place.

During a sit-down with Computer Weekly to discuss his plans for the review in more detail, McCann says the terms of the review are wider than many people suggest.

"Everything of any significance, so far as the Loan Charge is concerned, happens in the period post-2010, so that means it's open to me to look at anything that happens in that period, including the behaviour of the promoters and the behaviour of HMRC," he says.

The call for evidence period of the review started on 28 March 2025, with McCann urging those in the policy's scope to send him evidence covering three topics: what contractors were told by promoters of these schemes, their experience of dealing with HMRC, and details about how the policy has personally affected them.

The rationale behind that, as McCann sees it, is that it would be difficult to see how the Loan Charge can be resolved without having a detailed understanding of how so many people ended up embroiled in loan schemes, and why they are finding it so difficult to reach a settlement with HMRC.

Another area that McCann plans to explore during his review is HMRC's 2017 assessment of the impact the Loan Charge would have, in which the government tax collection agency stated that it did not foresee the policy having any material impact on the families of those in scope of it.

This statement has been openly criticised during the intervening years, as anecdotal accounts from contractors discussing the mental anguish of living with a sizeable Loan Charge-related tax bill hanging over them have emerged. The policy has also been linked to at least 10 suicides to date.

"I've been critical of [the HMRC assessment] in the past. I've criticised that in various formats: on Twitter, in various tax journals, publicly, and so on," says McCann.

What is not open to McCann is to make a recommendation in his final report to repeal the Loan Charge policy. And that's not because the contents of it are pre-determined, as some critics of the process have claimed, but because doing so would not be fair to other taxpayers. After all, the government has previously and repeatedly stated that resolving the Loan Charge is a priority, but doing so must happen in a way that ensures fairness for all taxpayers.

"It's not open to me to recommend that the Loan Charge be repealed, and the government has made clear from the start that repeal was not an option, and equally I don't think it should be. It would be a bad move because, whether people realise it or not, there are many individuals who have got millions out of loan schemes and paid little or no tax on it," says McCann.

"Government has a responsibility to the many millions who pay tax and national insurance contributions [NIC] on all of their earnings, and unless this is resolved in a way that is fair to both those affected by the Loan Charge and the millions of other taxpayers, many would no doubt ask why you and I should pay our tax and national insurance?"

As previously alluded to, McCann has proven to be a vocal critic of HMRC's handling of the Loan Charge over the years, and was, during his time working at the government agency, closely involved in its enforcement activities against similar disguised remuneration schemes.

"I've been involved in [enforcement action against] loan schemes in one capacity or another for a quarter of a century. When I was in the Revenue [HMRC] in the 1990s, I was one of the first inspectors to take on one of the big employee benefit trusts [EBTs]," he says.

These trusts are the entities that pay out loans to contractors.
In the late 1990s and early 2000s, many large employers in the banking and financial sector used EBTs as a mechanism to pay their employees in loans.

"One of the last things I did before I left HMRC in 2006 was pre-empt the settlement with several banks in late 2005. One of the banks that I had challenged had put a billion pounds into an employee benefit trust," he says. "They had claimed the corporate tax deduction for it, [but] they hadn't deducted PAYE [Pay As You Earn] or NIC, so all told, that group of banks had avoided hundreds of millions in tax and NIC."

During the intervening years, the profile of organisations and individuals involved in loan-based remuneration schemes has markedly changed, says McCann, to include white-collar workers, such as financial services and IT contractors, before moving down to far lower-paid individuals, such as social workers and NHS staff.

"The thing that shocks me is how low down the income scale these things have reached. They're like a virus. They have gone from the large corporates to the big banks to the middle-sized companies, and then down to various people working offshore, putting together schemes that are ensnaring people who are on just everyday wages," he says. "And that's why successive governments have treated this as such a priority, because of the threat that they see it being towards the entire PAYE and NIC system."

Loan-based remuneration schemes enable individuals to artificially minimise the amount of employment tax they pay. However, many of the contractors in scope of the Loan Charge policy claim the schemes were marketed as an HMRC-compliant way of bolstering their take-home pay, and that they were assured by respected tax barristers that, in the eyes of HMRC, they were doing nothing wrong.

The way McCann sees it, that explanation only goes so far. "Many people will have concerns, even if they get assurance from the promoter. And most of them did get assurances from promoters saying, 'It's all fine. It's all tried and tested, and HMRC don't mind'," he says. "But I think there is only so far you can believe that to be the case without evidence, and some of that has already come into the review mailbox."

Meanwhile, HMRC maintains that its position on the use of loan remuneration schemes has always been clear, and that it has never given its seal of approval to any such setup.

"Even if you go back to 2010 and before, HMRC's position on [the use of EBTs] was all over the internet," says McCann.
"If you did a Google search at the time on EBTs, you might get millions of hits, and most of them were about HMRC's view on them."

What this serves to highlight is one of the major difficulties McCann will face in his review: uncovering evidence that supports the argument that contractors are victims of mis-selling, when so much time has passed since these schemes were originally being marketed to people.

"That's the task before me: getting sufficient reliable evidence to show that the promoters are the bad guys that I can put in my review, so I'm in a position to put forward the argument that these are the people HMRC should have been clamping down on, and where appropriate criticising them for not doing it," he says.

This is why it is so important that contractors engage with the review process during the call for evidence period, so their side of the story can be fully put across, he continues. Meanwhile, McCann has been reaching out to contacts he made during his time investigating loan schemes while at HMRC, some of whom used to sell or market these kinds of ideas, to engage in the review too.

"I don't need everybody to send me details in, because if all 50,000 people in scope of the Loan Charge send me their evidence, this review would take 10 years to complete. But what I do need is enough to get involved that I can sensibly make a case that this is representative of what happened," he says.

"What I want to be able to do [with this review] is say this is representative of what happened, and it's reasonable to conclude that within these types of industries, this is the behaviour [of] the promoters. And up to a point, it's reasonable to conclude that the individuals involved, who often did not have independent professional help, were persuaded that this was okay."

He also needs contractors to engage in the review by supplying a substantial and significant amount of evidence that proves their claims that their treatment at the hands of HMRC has been unreasonably and manifestly unfair in the eyes of the average person in the street who pays tax and national insurance. "The argument you've got to make is that they're being treated in a way that's unreasonably unfair, and in a way you and I don't support," McCann adds.

When the government set out the review's terms of reference, a group of cross-party MPs who make up the Loan Charge and Taxpayer Fairness All-Party Parliamentary Group (APPG) issued a statement branding the exercise a farce, while calling into question how truly independent the end product would be. This was on the basis that a former HMRC director had been appointed to oversee the review, and, as confirmed by the government, HMRC and HM Treasury would be permitted to review its contents ahead of publication.

"It will not change the position people are in, nor review the legislation and whether it was fair and justified. This is not the review that was promised nor the review that is so desperately needed, and the APPG will continue to push for a genuine inquiry into this scandal," said the APPG.

Despite the group's vocal critique, McCann says he has been liaising with the APPG in the wake of its statement and has found its members are broadly supportive of what it is he is trying to achieve. He has also been engaging with various stakeholders, including noted tax barristers and accounting firms who represent large numbers of the contractors affected by the Loan Charge, to compile evidence for the review, including impact statements.

"I've got a big data request that I'm drafting at the moment to send to HMRC so that I can get proper data: the numbers involved, the income spread, how long people have been under inquiry for, and that kind of thing," he says. "I had to delay things a bit because the need to be independent means I couldn't use HMRC and Treasury people for support, and there had to be a recruitment process across the whole of the civil service [for people to assist]."

McCann is acutely aware that the decision to appoint him, a former HMRC inspector, to oversee the review has not gone down well with everyone.

"Some people have said that I'm under the control of the Treasury, but there is no way I'm going to take instruction from HMRC or the Treasury on how to conduct the review, and, to be fair to the Treasury and HMRC, they have done nothing that could be taken as trying to control the review or its direction," he says.

"I obviously must comply with the law on data protection and so on, but I'm going to carry out the review as I believe it needs to be done. The minister made clear that my conclusions and recommendations must be made within the constraints of the current fiscal situation, but otherwise it's up to me."

And for those who have taken issue with an ex-HMRC director conducting a review into an HMRC-backed government policy, McCann says his employment history and experiences should be viewed positively.

"On the point of independence, I initially thought that should be more of a concern for HMRC than people on the other side of the inquiry, because for eight years I've been consistently critical of their handling of the Loan Charge," he says.

One area that McCann has been particularly and publicly critical about HMRC over is the organisation's approach to Loan Charge settlements.

"I have been pressurising ministers and HMRC for years to develop a better approach to settlements, and I got frustrated with the fact that it never appeared, so I started to publicly criticise them through Twitter and LinkedIn, and in various things I was writing," adds McCann. "Almost every article I've written in the last eight years mentions the Loan Charge to some extent or another, and it's always been critical of HMRC's approach to settlements.
"I've been consistently critical on that front, [and] I've made it clear to Parliament, and I've made it clear to government, that HMRC should have been more realistic when it came to the settlement terms."

In terms of what he thinks HMRC should have done differently, McCann says: "I have said in the past that HMRC should have offered settlement terms that were sufficiently attractive that it made people want to settle, but what HMRC did was only give the slightest of discounts [to people who wanted to settle] and left them in a position where they did not know how they would pay."

It is McCann's hope that when the review concludes (which is expected to be later this summer) and its contents have been mulled over by the government, contractors will end up with a far more attainable settlement figure.

"I want to end up with a situation where people get a settlement figure from HMRC that they can look at and say, 'Well, okay, even if I'd rather not pay it, I can pay it', within a reasonable period if necessary. Whereas, presently, people are saying, 'I'd rather not pay it, but even if I did want to pay it, I can't afford to'. I want to change that dynamic," says McCann.

And in doing that, he hopes this will finally help bring a resolution for the tens of thousands of people who have been living under the shadow of the Loan Charge for the past eight or so years.

"We can argue that HMRC should have gone after this promoter or that promoter, and all manner of other things to do with the Loan Charge, but that doesn't help someone who is sitting at home worried about the bailiffs coming round," he says. "If someone's drowning in a river, they're not going to be helped if people are just standing on the shore arguing about how they got in the river in the first place. They just want someone to rescue them."

In the meantime, McCann's priority is getting people affected by the Loan Charge to contribute to the review.

"I know people are mistrusting [after past reviews]. Whether that mistrust is justified or not, I want them to take a deep breath and engage with this review, because something has to come out of it, as we all need this resolved," he concludes.

Read more about the Loan Charge:
- Tens of thousands of IT contractors have been saddled with life-changing tax bills as a result of a controversial, retroactive government policy, and the fall-out from its introduction has been likened to the Post Office IT scandal.
- After campaigners called for HMRC to pause all of its Loan Charge enforcement activity until the government's independent review of the policy is complete, Computer Weekly has learned that the agency is accepting requests to pause settlements.
-
Apple devices are at most risk in UK following government backdoor order (www.computerweekly.com)

Users of Apple devices in the UK are at the most risk in the world of being hacked, following a secret government order requiring the tech company to allow backdoor access to its users' encrypted data, the House of Lords heard on Monday 31 March.

Liberal Democrat peer Paul Strasburger pressed the government to answer questions about a decision by the home secretary, Yvette Cooper, to issue a secret notice against Apple. The order, first reported in the Wall Street Journal, extends law enforcement and intelligence services' access to encrypted data stored on Apple's iCloud to include users of Apple's secure Advanced Data Protection (ADP) service.

The existence of the order, known as a technical capability notice (TCN), was confirmed when Apple withdrew its ADP service for UK users in February, while continuing to provide the service to people overseas. Apple would not have done this unless it felt compelled to do so by a request to insert a backdoor, the US Congress claimed in a letter to the Investigatory Powers Tribunal (IPT).

In questions posed in the House of Lords on Monday, Strasburger said the government had demonstrated its disdain for the privacy and digital security of British citizens and companies by issuing the TCN against Apple. He said the order would introduce weaknesses to encryption on Apple devices that could be exploited by criminals and hostile states.

"Strong encryption is essential to protect our data and our commerce from attack by organised crime and rogue states," he said. "Any weakness inserted into encryption for the benefit of the authorities is also available to those who would do us harm, yet that is precisely what the government are demanding from Apple."

Apple is challenging the legality of the government's order in the IPT, which discussed arguments in a closed-door hearing on 14 March. Civil society groups Privacy International and Liberty, along with two individuals whose security has been impacted by the government's order against Apple, have filed separate legal interventions. Ten newspapers, publishers and broadcasters, including Computer Weekly, have also filed legal submissions calling for Apple's appeal against the widely publicised order to be heard in open court on public interest grounds.

Non-affiliated peer Claire Fox said it was not possible for Apple to open doors to its customers' data in a way that would ensure that only the police and intelligence services would have access to its users' encrypted data. "It is obvious that criminals, foreign adversaries and others would exploit that weakness," she said.

Fox said it was baffling if the Home Office was choosing to bully tech companies into undermining their users' privacy, security, civil liberties and free speech, while at the same time seeking to establish the UK as a leading hub for innovation and technology.

Liberal Democrat peer Tim Clement-Jones told the Lords that the government could be in breach of the European Court of Human Rights following a key judgment by the court last year. In the case of Podchasov v Russia, the European Court of Human Rights found that weakening end-to-end encryption or creating backdoors could not be justified under human rights law.

Labour peer Toby Harris asked what consideration had been given to the trade-off between the general weakening of security and confidentiality compared with the gains made by the security services in being able to decrypt data stored by Apple.

Home Office minister and Labour peer David Hanson repeatedly declined to answer questions from peers, citing national security reasons.

"We have a long-standing position of protecting privacy while ensuring that action can be taken against child sexual abusers and terrorists," he said. "I cannot comment on operational matters today, including neither confirming nor denying the existence of any notices. This has been the long-standing position of successive UK governments for reasons of national security."

Conservative peer Daniel Moylan pressed Hanson to comment on Apple's decision to publicly withdraw its ADP encryption service from the UK, even if he could not comment on whether a notice had been issued. He also asked the Home Office minister whether the US and UK governments had held any high-level discussions about the order against Apple. Bloomberg reported on 13 March that the US and UK governments were holding private talks in an attempt to resolve US concerns that the UK was trying to force Apple to create a backdoor that would allow the UK access to encrypted data belonging to US citizens.

Hanson said he could not comment on the matter. "Decisions made by Apple are a matter for Apple, and the removal of any features is a matter for Apple. Again, for reasons of national security, I cannot confirm or deny any conversations that we have had or any issues that are undertaken," he said.

The Investigatory Powers Act contained robust safeguards and oversight to protect privacy and ensure that data is obtained only on an exceptional basis, and only when necessary and proportionate to do so, he added.

A Home Office spokesperson said: "We do not comment on operational matters, including, for example, confirming or denying the existence of any such notices."

Media companies have asked the Investigatory Powers Tribunal to hold hearings into Apple's appeal against the technical capability notice in open court. Separately, Big Brother Watch, Index on Censorship and the Open Rights Group have written an open letter to the tribunal calling for an open court hearing.

The media companies challenging the secrecy of Apple's appeal in the Investigatory Powers Tribunal are Associated Newspapers Ltd, the British Broadcasting Corporation, Computer Weekly, Financial Times Group, Guardian News & Media, News Group Newspapers, Reuters News and Media, Sky News, Telegraph Media Group and Times Media.

Timeline of UK government's order for a backdoor into Apple's encrypted iCloud service:
- 7 February: Tech companies brace after UK demands backdoor access to Apple cloud. The UK has served a notice on Apple demanding backdoor access to encrypted data stored by users anywhere in the world on Apple's cloud service.
- 10 February: Apple: British techies to advise on "devastating" UK global crypto power grab. A hitherto unknown British organisation, which even the government may have forgotten about, is about to be drawn into a global technical and financial battle, facing threats from Apple to pull out of the UK.
- 13 February: UK accused of political foreign cyber attack on US after serving secret snooping order on Apple. US administration asked to kick UK out of 65-year-old UK-US Five Eyes intelligence sharing agreement after secret order to access encrypted data of Apple users.
- 14 February: Top cryptography experts join calls for UK to drop plans to snoop on Apple's encrypted data. Some of the world's leading computer science experts have signed an open letter calling for home secretary Yvette Cooper to drop a controversial secret order to require Apple to provide access to users' encrypted data.
- 21 February: Apple withdraws encrypted iCloud storage from UK after government demands backdoor access. After the Home Office issued a secret order for Apple to open up a backdoor in its encrypted storage, the tech company has instead chosen to withdraw the service from the UK.
- 26 February: US intelligence chief Tulsi Gabbard probes UK demand for Apple's encrypted data.
- 5 March: Apple IPT appeal against backdoor encryption order is test case for bigger targets. The Home Office decision to target Apple with an order requiring access to users' encrypted data is widely seen as a stalking horse for attacks against encrypted messaging services WhatsApp, Telegram and Signal.
- 11 March: Secret London tribunal to hear appeal in Apple vs government battle over encryption. A secret tribunal is due to meet at the High Court in London to hear tech giant Apple's appeal against a Home Office order to compromise the encryption of data stored by its customers on the iCloud service worldwide.
- 13 March: US Congress demands UK lifts gag on Apple encryption order. Apple and Google have told US lawmakers that they cannot tell Congress whether they have received technical capability notices from the UK.
- 14 March: The Investigatory Powers Tribunal holds a day-long secret hearing into an appeal brought by Apple against a government notice requiring it to provide law enforcement access to data encrypted by its Advanced Data Protection service on iCloud, despite calls for the hearing to be opened to the public.
-
Inside Amazon's robot-powered warehouse (www.computerweekly.com)

1 April 2025

In this week's Computer Weekly, we go behind the scenes at Amazon's robot-powered Swindon warehouse to see how AI and humans are working together. We examine the state of open source licensing and find out how it's affecting datacentre operators. And we visit a 130-year-old wine and drinks company to find out how technology has brought operations into the modern age. Read the issue now.

Features in this issue:
- Behind the scenes at Amazon UK's robotic-powered warehouse, by Ben Sillitoe: Robots, automation, artificial intelligence and people power; Computer Weekly tours the tech-heavy Amazon warehouse in Swindon.
- All change: Weighing up the options for enterprises as open source licences evolve, by Fleur Doidge: Software suppliers have been rowing back on open source licensing. Will enterprises with datacentres have to change their software approach?
-
Top 1,000 IT service providers in scope of UK cyber bill (www.computerweekly.com)

The government has set out a series of ambitions and goals for the soon-to-be-introduced Cyber Security and Resilience Bill, including measures to better protect supply chains and operators of critical national services, which, besides public services and utilities, will now also include IT service providers and suppliers (up to 1,000 of which are likely to fall into the scope of the planned measures) and potentially datacentre operators.

First trailed in 2024 shortly after Labour's general election victory, the overall aims of the Cyber Security and Resilience Bill are to improve the UK's online defences, protect the public and safeguard growth, in line with its wider Plan for Change policy.

The government said its plans would help ensure organisations that provide essential services, IT and otherwise, across both the public and private sectors are a less tempting target for cyber criminals. It also wants to give the country greater confidence in digital services, which it is relying on to support its overall economic growth mission.

Noting that cyber threats cost the UK over £22bn during the second half of the 2010s, it cited last summer's attack on Synnovis, which cost the NHS over £32m, and suggested that a hypothetical cyber attack focused on energy services in southeast England could wipe over £49bn off the economy.

"Economic growth is the cornerstone of our Plan for Change, and ensuring the security of the vital services which will deliver that growth is non-negotiable," said Peter Kyle, secretary of state for science, innovation and technology.

"Attempts to disrupt our way of life and attack our digital economy are only gathering pace, and we will not stand by as these incidents hold our future prosperity hostage. The Cyber Security and Resilience Bill will help make the UK's digital economy one of the most secure in the world, giving us the power to protect our services, our supply chains and our citizens, the first and most important job of any government."

Richard Horne, CEO of the National Cyber Security Centre (NCSC), added: "The Cyber Security and Resilience Bill is a landmark moment that will ensure we can improve the cyber defences of the critical services on which we rely every day, such as water, power and healthcare. It is a pivotal step toward stronger, more dynamic regulation, one that not only keeps up with emerging threats but also makes it as challenging as possible for our adversaries.

"By bolstering their cyber defences and engaging with the NCSC's guidance and tools, such as the Cyber Assessment Framework, Cyber Essentials and Active Cyber Defence, organisations of all sizes will be better prepared to meet the increasingly sophisticated challenges," he said.

As part of the bill's progress, the government said it is now exploring measures to improve its ability to respond to emerging cyber threats and, critically, to take rapid action to protect national security. This could see the technology secretary granted powers to order regulated organisations to shore up their cyber defences.

Also on the table is the possibility of introducing a set of new protections for the UK's 200 largest datacentres.
Quite what these measures will entail is yet to be decided, but the government noted that it may look to artificial intelligence (AI) to help bolster the defences of the country's datacentre estate.

Should the proposed bill make it to the statute books, its overall provisions will be largely similar to those already set out in previous announcements. Besides proposals to mandate ransomware incident reporting, which have already been widely discussed and are currently the subject of an ongoing consultation, and widening the variety of organisations subject to cyber regulation, it will also give regulators more tools to improve cyber security and resilience in their specialist areas, and give the government more flexibility to update regulatory frameworks as and when the threat and technology environments evolve.

Read more about security legislation and regulation:
- Regulator Ofcom is now able to take enforcement action against platforms under the Online Safety Act if they fail to proactively safeguard against content such as terrorist or child sexual abuse material.
- The Commons Public Accounts Committee heard government IT leaders respond to recent National Audit Office findings that the government's cyber resilience is under par.
- With a flourishing technology sector and a direct line to Brussels, investing in Ireland may be a sound bet for UK organisations looking to navigate Europe's transforming cyber landscape.
-
Reassessing UK law enforcement data adequacy (www.computerweekly.com)

The UK government says reforms to police data protection rules will help simplify law enforcement data processing, but critics argue the changes will lower protection to the point where the UK risks losing its European data adequacy.

Currently going through the committee stage of Parliamentary scrutiny, the Data Use and Access Bill (DUAB) will amend the UK's implementation of the European Union (EU) Law Enforcement Directive (LED), which is transposed into UK law via the Data Protection Act (DPA) 2018, and represented specifically in Part Three of the act.

In combination with the current data handling practices of UK law enforcement bodies, the bill's proposed amendments to Part Three could present a challenge for UK data adequacy. The DUAB changes the law to allow routine transfer of data to offshore cloud providers, remove the need for police to log justifications when accessing data, and enable police and intelligence services to share data outside of the LED rules.

In June 2021, the European Commission granted data adequacy to the UK following its exit from the EU, allowing the free flow of personal data to and from the bloc to continue, but warned the decision may yet be revoked if future data protection laws diverge significantly from those in Europe.

While the government argues that its reforms will simplify police data processing, critics say the proposals represent enough of a divergence from EU law that they will likely undermine the UK's LED adequacy. They add that many of the government's changes to police data protection rules are a response to a widespread lack of compliance with key provisions in the DPA 2018, such as the need to log justifications when accessing data, or to implement controls that limit the offshoring of sensitive law enforcement data to non-law enforcement bodies, including cloud providers.

Computer Weekly contacted the Home Office about every concern raised, and about the threat to the UK's LED adequacy created by the government's proposed changes to the law enforcement data protection regime.

"We have introduced some targeted amendments in the Data Use and Access Bill to improve public trust and to drive up law enforcement efficiency by simplifying the legislation. We are committed to data adequacy and had the UK's adequacy decisions in mind when producing this bill," said a spokesperson. "Any changes to our data protection regime must not come at the expense of security, and high standards of protection will continue to be applied."

In exiting the EU, the UK became a "third country" under the bloc's rules, which means the European Commission (EC) will have to periodically assess whether the country's data protection framework and practices provide an essentially equivalent level of protection for EU citizens' data. The EC will therefore have to make two separate adequacy determinations, under both the General Data Protection Regulation (GDPR) and the LED, by the end of June 2025.

Data protection experts previously claimed to Computer Weekly in February 2021 that any adequacy decision made under the LED would be principally political in nature if it failed to directly address how the data practices of the UK's criminal justice sector and intelligence services undermine the data and fundamental rights of EU citizens.
If this is not addressed, they said, a positive adequacy decision could be open to legal challenges in the European courts.

In October 2024, the UK Parliament's European Affairs Committee (EAC), in a warning about the risks of the UK losing its data adequacy, highlighted many of the same issues as the experts Computer Weekly spoke to, noting these would be of interest and potential concern to both the EC and the European Court of Justice (CJEU) as they consider the UK's adequacy statuses.

This includes potential divergence on data protection standards that would make it harder for people to exercise their data rights; the possibility that the UK government undermines end-to-end encryption; the independence and effectiveness of the Information Commissioner's Office (ICO); aspects of the UK's national security regime under the Investigatory Powers Act 2016, including data collection and retention, surveillance powers and practices, and the role of the Investigatory Powers Tribunal; and any legal cases that provide grounds for concern about UK data protection standards. The EAC also highlighted potential risks posed by onward transfers of data from the UK to other third countries, including under the UK-US Cloud Agreement.

However, the EAC's findings were published a day before the DUAB was announced, and two days before the text was published online, meaning its inquiry focused on the previous government's Data Protection and Digital Information (DPDI) Bill, which was dropped from the legislative agenda during the UK's pre-general election wash-up period.

While the EC's adequacy decision will rest on the exact contents of the DUAB, for which there is still no official Keeling Schedule, it will be looking to assess whether the framework provides an essentially equivalent level of data protection for EU citizens' data.

While some of the more controversial measures contained in the previous DPDI Bill, including removing the need for data protection impact assessments and abolishing the dual biometrics and surveillance camera commissioner role, have been dropped in the DUAB, many aspects of it have been carried over. There are also a number of new measures that may create fresh adequacy-related problems, particularly changes to the international data transfer regime for police.

While an amendment to the DUAB was tabled by Liberal Democrat peer Lord Clement-Jones that would have required the secretary of state to carry out a formal impact assessment of the bill concerning the UK's data adequacy, government ministers argued against it during the Lords' first committee stage on 16 December 2024.

Responding to Clement-Jones during that debate, Baroness Jones, parliamentary under-secretary of state at the Department for Science, Innovation and Technology (DSIT), said maintaining adequacy was a priority for the government, noting that the free flow of personal data with the EU is vital to research, innovation and safety.

"For that reason, the government is doing all that it can to support its swift renewal. I reassure noble Lords that the bill has been designed with EU adequacy in mind," she said. "The government has incorporated robust safeguards and changed proposals that did not serve our priorities and were of concern to the EU. It is, though, for the EU to undertake its review of the UK, which we are entering into now.
"On that basis, I suggest to noble Lords that we should respect that process and provide discretion, and not interfere while it is underway."

A similar position has been adopted by information commissioner John Edwards, who said in response to the DUAB: "Whilst ultimately a decision for others, in my view the proposed changes in the bill strike a positive balance and should not present a risk to the UK's adequacy status."

However, the position of the UK government and the ICO differs significantly from the views of a number of specialists familiar with both the EU LED and UK DPA Part Three. Computer Weekly contacted the Home Office about what robust safeguards have been put in place, and which DUAB proposals have been changed that were of concern to the EU, but received no response on this point.

Chris Pounder, director of data protection training firm Amberhawk, wrote in a blog post that the DUAB would allow the secretary of state to designate that certain police datasets can become subject to Part Four national security rules, rather than Part Three law enforcement rules, over which the ICO has limited enforcement powers. "The proposal has the effect of taking large volumes of personal data out of the UK's data protection regime," he wrote.

Part Four processing is also completely separate from the LED or GDPR and has no equivalent in EU law, effectively lifting police data out of the scope of EU law in instances where the secretary of state decides police and intelligence bodies can share the data.

Computer Weekly contacted the Home Office about the removal of policing data from the data protection regime, but received no on-the-record response on this point.

Pounder further noted that while the ICO is being abolished in favour of the Information Commission, the problem remains in the DUAB that the secretary of state will be able to appoint the most important members of the Commission, which has the potential to give them undue influence over the new body's decision-making processes.

"The Commission still has to have regard for: the desirability of promoting innovation and competition; the importance of the prevention, investigation, detection and prosecution of criminal offences; and the need to safeguard national security," he wrote. "In other words, these regards could fetter decisions to protect the privacy of data subjects."

Pounder added that the DUAB will also permit the secretary of state to apply a "data protection test" when considering whether a country, part of a country, or a controller located in a country offers an adequate level of protection. He said the provisions will increase the risk of divergence from EU transfer standards if the EC and UK government have differing views on what "adequate" means here.
"Also, I don't understand how a country is not deemed adequate, but a controller, processor or recipient located in that country is," Pounder added.

While the UK has already taken steps to award its own law enforcement adequacy to countries not recognised by the EU, including the Isle of Man, Jersey and Guernsey, the EU has not yet reacted to these changes.

Thomas Barrett, a partner at CyXcel who leads the organisation's data protection and privacy practice, and who has previously advised the Home Office and Ministry of Justice on compliance with the DPA 2018, said there are certain scenarios where specialist police units within forces may have to collaborate with intelligence services for particular operations, for example in terrorism cases where intelligence services have information but no power of arrest, as police do. He added that while it raises red flags, "I would be surprised how many of these are made."

He added that in cases where this power is used, it has the potential to be more targeted, more proportionate and safer, because only one set of data protection requirements would apply to this processing, rather than potentially three currently.

As a result, Barrett said the changes being made to UK law via the DUAB are very unlikely to materially affect the country's LED adequacy. "It would be counter-productive to remove adequacy over such small changes; there's so much [law enforcement] cooperation. Looking at the detail, I struggle to see how you really make hay of a lot of it."

He said the real risk to LED adequacy therefore lies at the political level, which will be decided between the EC and the UK government.

Independent privacy consultant Owen Sayers, a long-term commentator on DPA Part Three compliance issues with more than 25 years of experience in delivering secure solutions to policing and the wider criminal justice sector, said that for the first time, UK legislation would place individual data processors, such as cloud providers, on the same broad footing as overseas law enforcement organisations, exempting them from the list of mandatory transfer conditions outlined in Article 39 of the LED. These conditions include that the transfers be strictly necessary, that no data subject rights override the public interest of the transfer, that transferring to another policing body (or "competent authority" in LED parlance) would be ineffective, and that the controller provides specific instructions of how to process the data in that particular case.

Under the UK's current law enforcement-specific data protection rules, police data controllers are bound by the DPA 2018's stringent transfer requirements, which fully mirror EU law. This means that, as it stands, each individual law enforcement data controller must ensure that a contract in writing exists between itself and the data processor, which sets out details of the processing, including its duration, nature, and the type and categories of personal data involved. To be valid, the contract or terms of service must be explicit in how they meet the DPA requirements.

Police data controllers are also required to ensure the processor seeks and receives permission before transferring data to a third country, for each particular transfer made. This means each transfer must be assessed on a case-by-case basis. Police data controllers are further required to perform a case-by-case analysis and justification for all personal data offshored to such processors, and to report this to the ICO.
Although police forces have used Microsoft and Amazon Web Services services for the past six years meaning millions of these transfers will have taken place the ICO revealed in a Freedom of Information (FoI) response to Sayers that only 148 such notifications had been received up to June 2023.As previously reported by Computer Weekly, the use of hyperscalers under current UK law presents a number of data protection concerns, including US government access via the countrys invasive surveillance laws, and an inability to comply with the strict transfer requirements contained within the DPA 2018.In June 2024, Computer Weekly reported details of discussions between Microsoft and Scottish policing bodies obtained via FoI rules in which the tech giant admitted it could not guarantee the sovereignty of UK policing datahosted on its hyperscale public cloud infrastructure.As a result of these FoI responses, Sayers said the law is breached far more often than it is adhered to: The evidence to show that multiple parts of the Part Three legislation are consistently breached or simply ignored by policing and their justice partners is overwhelming. In truth, the number of organisations who do apply the law as its currently written is less than a handful, though those that do so do it very well.Mariano delli Santi, legal and policy officer at the Open Rights Group (ORG), said these issues mean it is an open question whether cloud providers can adhere to Part Three requirements in practice. Given the issues around sovereignty, is a cloud provider able to enforce the contractual agreements entered into with the police? I think thats an issue that would cause concern, he said.Since the re-election of Donald Trump, delli Santi pointed out that the US government has broken several adequacy-related commitments made to the EU around enhancing scrutiny and ensuring the proportionality of their intelligence services operations.The Trump Administration fired members of the Privacy and Civil Liberties Oversight Board, and then doubled down with the Federal Trade Commission. Both bodies were fundamental pieces of the EU-US Data Protection Framework [DPF] which, at this point, is quite certain to be struck down by the CJEU, he said, adding the UK-US Data Bridge, which acts as an extension of the DPF, will also go down if the EU invalidates the framework.It has now become obvious that the EU-US DPF will not last for long, and it has just as obviously become unfeasible to rely on US cloud providers for storing personal data unless you are willing to compromise the security and sovereignty of the data you transfer. Indeed, European lawmakers have already started to discuss this.Based on all the above, it is now a fact that relying on US cloud services constitutes a threat to the sovereignty, security and autonomy of the UK. 
Until now, this has been treated as a risk-mitigation issue at best, or something to be swept under the carpet at worst. Highlighting the lack of clarity from the UK data regulator around cloud data sovereignty and the applicability of standard contractual clauses in this context, delli Santi said this has created a grey area in which transfers have been allowed to continue. The UK government, on their side, have tried to formalise this approach with the DUAB, which introduces a new data transfer regime specifically designed to accommodate the ICO's tolerant approach toward data transfers that lack effective safeguards, and allow data transfers to countries such as the United States by sidestepping human rights and data security concerns. He added that the UK needs an exit plan to progressively cut reliance on US digital infrastructure and services, and that we need this plan fast, which includes contingencies to move away from holding companies or subsidiaries of US firms geographically based in Europe, which still fall under US jurisdiction. Any of these companies are under an obligation to cooperate with law enforcement and international security authorities in the United States, which can be ordered to hand over data without necessarily having to tell the contracting party, said delli Santi. According to the government's explanatory notes published for the DUAB in October 2024 (paragraph 1022), Schedule 8 of the bill seeks to widen the transfer conditions by expanding the list of intended recipients to specifically include processors acting on behalf of, and in accordance with a contract with, a controller. It added that while transfers to processors in third countries are currently permissible, this amendment clarifies the existing law and provides legal certainty to UK controllers that they can transfer personal data to their processors operating outside of the UK. The explanatory notes also specify that the DUAB will no longer require controllers to notify the commissioner on each occasion data is transferred; it simply requires notification of the categories of information that will be transferred.
Microsoft and Police Scotland case study
There are long-running concerns that the current rules around data transfers are not being followed by a range of UK police data controllers, given the routine nature of data transfers in hyperscale public cloud architecture for processing and support purposes. For example, in April 2023, Computer Weekly revealed the Scottish government's Digital Evidence Sharing Capability (DESC) service, contracted to body-worn video provider Axon for delivery and hosted on Microsoft Azure, was being piloted by Police Scotland despite a police watchdog raising concerns about how the use of Azure would not be legal. Specifically, the police watchdog said there were a number of other unresolved high risks to data subjects, such as US government access via the Cloud Act, which effectively gives the US government access to any data, stored anywhere, by US corporations in the cloud; Microsoft's use of generic, rather than specific, contracts; and Axon's inability to comply with contractual clauses around data sovereignty. In June 2024, Computer Weekly reported details of discussions between Microsoft and the Scottish Police Authority (SPA), in which the tech giant admitted it cannot guarantee the
sovereignty of UK policing data hosted on its hyperscale public cloud infrastructure. Specifically, it showed that data hosted in Microsoft infrastructure is routinely transferred and processed overseas; that the data processing agreement in place for DESC did not cover UK-specific data protection requirements; and that while the company claimed it has the ability to make technical changes to ensure data protection compliance on transfers, it is only prepared to make these changes for DESC partners and not other policing bodies because no one else had asked. The documents also contain acknowledgements from Microsoft that international data transfers are inherent to its public cloud architecture, and that limiting transfers based on individual approvals by a police force, as required under DPA Part Three, cannot be operationalised. Computer Weekly also revealed in December 2024 that Axon, headquartered in the US and therefore directly subject to its surveillance laws, is still in possession of the encryption keys for DESC, opening up the potential for access and transfer of sensitive data without the knowledge or consent of Police Scotland. In June 2021, the European Data Protection Board (EDPB) debunked the idea that encryption is an effective safeguard when the data is either decrypted for processing in the cloud, or the keys are otherwise held by a technology service provider. For example, the EDPB noted that when cloud service providers require access to data in the clear for processing (i.e. unencrypted, which is every time they need to process text data, because there are currently no technologies that enable in-the-clear processing of this type of information), transport encryption and data-at-rest encryption, even taken together, do not constitute a supplementary measure that ensures an essentially equivalent level of protection if the data importer is in possession of the cryptographic keys. However, Sayers argued that even if the US government does utilise its various surveillance laws to gain access to UK data, the transfers would be unlawful anyway, as UK law lays down a series of specific steps that must be followed for each and every transfer of a specific piece of personal data under Part Three. These steps are not being followed, and Microsoft has made clear that they cannot be followed; actually, they've said impossible to operationalise. Because the steps laid down in the DPA 2018 Part Three are not and cannot be followed, that is one of the main reasons why the processing being done on these clouds is in breach of UK law, he said. It makes zero difference if the US government bogeyman tries to use the Cloud Act to look at the data or not, as the data was illegally transferred regardless of the Cloud Act. He added: The intention [of the new DUAB] is to put non-UK processors, principally hyperscalers, on the same broad legal footing as overseas law enforcement organisations. He pointed out that the bill would enable UK policing bodies to send data overseas to offshore processors with minimal restrictions.
The bill actually puts overseas processors above overseas law enforcement processors, in the respect that it completely removes obligations to record what data is transferred to them, inform the ICO or make any assessments as to whether a particular transfer is safe and consider the data subjects rights in advance of sending the data.Sayers added that while these and other changes to Part Three would be directly contradictory to EU law, the most likely outcome would be the CJEU finding that the UK regime falls far below EU standards and thus moves to block UK data transfers.He further added that individual member states may also deem UK laws to be too divergent from their domestic laws to continue to send data, noting the chance of this is high given there are 27 member states, each with their own implementation of the LED.You can 100% use cloud for law enforcement data, but it needs to be sovereign and fully conformant with the law. If you need to change the law to accommodate a specific provider, then youve picked the wrong supplier.Computer Weekly contacted the Home Office about the changes to the law enforcement data transfer regime, and UK policings track record of non-compliance with existing data rules via its use of hyperscalers.A Home Office source told Computer Weekly that the use of cloud providers, in particular, has caused some confusion, and that measures contained within the bill are intended to give law enforcement the confidence to use cloud processors. However, they said the use of cloud services must not come at the expense of security, and high standards of protection will continue to be applied.Clement-Jones highlighted how cloud service providers routinely process data outside the UK and are unable to provide necessary contractual guarantees to policing bodies, as required by Part Three. As a result, their use for law enforcement data processing is, on the face of it, not lawful, he told the House of Lords.He added this non-compliance creates significant financial exposure for the UK, including potential compensation claims from data subjects for distress or loss, something that is exacerbated by the sheer volume of data pressed by law enforcement bodies: If only a small percentage of cases result in claims, the compensation burden could reach hundreds of millions of pounds annually.Clement-Jones concluded that the governments attempts to change the law suggest that past processing on cloud service providers has not been compliant with the relevant data protection laws.As a result, he proposed an amendment to bring attention to the fact that there are systemic issues with UK law enforcements new use of hyperscaler cloud service providers to process personal data, which would strictly limit overseas transfers to law enforcement bodies with a legitimate operating need that is, not cloud service providers.While the Lords were not invited to take a decision on Clement-Joness hyperscaler amendment, government minister Baroness Jones said the DUABs bespoke path for personal data transfers from UK controllers to international processors is crucial [as] we need to ensure that law enforcement can make effective use of them to tackle crime and keep citizens safe. 
She added the aim of the DUAB's reform around international law enforcement transfers is to provide legal clarity in the bill to law enforcement agencies in the UK so that they can embrace the technology they need and make use of international processors with confidence. She added: Such transfers are already permissible under the legislation, but we know that there is some ambiguity in how the law can be applied in practice. This reform intends to remove those obstacles. The noble Lord would like to refrain from divergence from EU law. I believe that in this bill we have drafted the provisions, including this one, with retaining adequacy in mind. Barrett said the DUAB will clarify the law in ways that make it easier to put in place contractual provisions and other measures that adequately protect the data: One of the biggest problems in data protection generally, but particularly here, is a lack of understanding and a lack of clarity; anything that can make it clearer and easier to follow for individuals that have to apply this stuff can only be a good fit. Sayers made a similar argument, noting that while many data protection practitioners believe the EU or UK GDPR to be the gold standard of legislation, they simply fail to recognise that GDPR has a sister piece of legislation in the LED that is sufficiently different that you cannot apply GDPR thinking to it. He added: This is a problem I see day in, day out, where a GDPR hammer is used to try to fix an LED nail, and even the ICO is not immune to confusing the two different sets of laws. According to delli Santi, the approach to transfers under the DUAB as it stands is formalising an approach that has already been changed. He added that given the deep commercial, governmental and cultural ties between the UK and EU, the impact of divergence is amplified significantly. The DUAB as introduced will also seek to remove the statutory logging requirements of Part Three, which would allow police to access personal data from various police databases during investigations without having to manually record the justification for the search. The removal of police logging requirements, however, could represent a further divergence from the EU's LED, which requires logs to be kept detailing how data is accessed and used. The logs of consultation and disclosure shall make it possible to establish the justification, date and time of such operations and, as far as possible, the identification of the person who consulted or disclosed personal data, and the identity of the recipients of such personal data, says the LED. Clement-Jones told Computer Weekly that if the law changes to allow police data transfers to, and processing in, infrastructure not owned or controlled by UK bodies, it could absolutely be a problem for the UK's LED adequacy retention.
He added that given these clear access and control issues, the potential removal of police logging requirements is egregious.Computer Weekly contacted DSIT about the removal of the logging requirements and whether it believes this measure represents a risk to the UK being able to renew its LED adequacy decision in April 2025, but DSIT declined to comment on the record.Speaking during the 16 December Lords debate on the bill against the removal of justification logging requirements, Clement-Jones said: The public needs more, not less, transparency and accountability over how, why and when police staff and officers access and use records about them.He added that while policing systems typically capture when, how and by whom data has been accessed, they very rarely capture the justification. This is despite the fact that Article 63 of the LED provided a grace period from May 2018 to May 2023 for member states to implement justification recording mechanisms to bring their legacy systems into compliance with the directive new systems procured from May 2016 onward were required to comply from the start.To alleviate the issue, Clement-Jones tabled a further amendment to ensure the logging requirements remain, which would prevent material divergence from the EU Law Enforcement Directive; although this was also withdrawn.He also highlighted that many commodity IT solutions procured by policing organisations do not capture justifications by default, noting that while a transitional relief period was put in place with the introduction of DPA 2018 to modify legacy systems installed before May 2016 later extended to May 2023 UK law enforcement bodies did not in general make the required changes.Nor, it seems, did it ensure that all IT systems procured after 6 May 2016 included a strict requirement for LED-aligned logging. 
By adopting and using commodity and hyperscaler cloud services, it has exacerbated this problem, he said, noting the government now wishes to strike the justification requirements completely.This is a serious legislative issue on two counts: it removes important evidence that may identify whether a person was acting with malicious intent when accessing data, as well as removing any deterrent effect of them having to do so; and it directly deviates from a core part of the law enforcement directive and will clearly have an impact on UK data adequacy.DSIT claims that removing the logging obligation will save 1.5 million police officer hours a year and save 42.5m for the public purse, but Sayers pointed out that the published impact assessments dont so far evidence these claims.The reality is that most police IT systems dont have the means to capture the required data, said Sayers, who was previously involved in the design and delivery of many UK national police systems.The factsheets identify this technology problem, which exists on cloud as well as legacy systems like the PNC [Police National Computer], but instead of addressing the issue the government simply want to strike the difficult bits out of the act.He added: The real reason they dont want to capture the information is theyve failed to invest any money in upgrading the legacy IT, and the new systems theyve adopted dont capture that information by default and cant be made to do so.DSIT claims that capturing justification is likely to be of little use in a misconduct investigation, but Sayers poured cold water on this.Public trust, the safety of vulnerable people, as well as the protection of police staff from claims of improper conduct, all rest on being able to prove that access to data was legitimate, he said.Home Office figures show police staff misuse of data to be a significant issue, with 1,630 recorded cases investigated in the year to March 2023, the last figures available.However, Barrett said the removal of justification logging is not a problem, adding its more important to have the ability to track who accessed data and when, because if youre a bad actor youre not going to put down the real reason if youve already got access to these kinds of systems, youre not an idiot, and so youre going to put something like routine checks or some other bland, uninteresting, non-determinative thing.He further added that inputting justifications only increases the administrative burden on police, and that while it is very common, even in much older computer systems, to be able to log time and dates, many systems are simply not architected to record justification.He added: Wed be much better off making sure that all the systems are really good at recording time and access, because the reality is, in your investigation, thats going to be the thing that youre looking at. Not whatever fanciful thing a bad actor has decided to enter as the fake justification for the access.During the DUAB debate, Baroness Jones insisted the removal of logging requirements is not a watering down of provisions. We are just making sure that the safeguards are more appropriate for the sort of abuse that we think might happen in future from police misusing their records.While the DUAB has since progressed to readings in the House of Commons, the police data issues were not addressed outside of vague references to reducing the administrative burden on police officers. 
It is currently in the committee stage, which will be followed by the report stage and a third reading. So far, the police data issues have not been discussed during the committee stage.
Read more about police technology
Driving licence data could be used for police facial recognition: The government's Crime and Policing Bill could allow police to access the UK driving licence database for use in facial recognition watchlists, but the Home Office denies biometric data would be repurposed in this way.
Axon still in possession of Police Scotland encryption keys: Supplier's possession of encryption keys for Police Scotland data sharing system opens potential for access and transfer of sensitive data without the knowledge or consent of the force.
UK police forces supercharging racism with predictive policing: Amnesty International says predictive policing systems are supercharging racism in the UK by taking historically biased data to further target poor and racialised communities.
-
UK law enforcement data adequacy at riskwww.computerweekly.comThe UK government says reforms to police data protection rules will help to simplify law enforcement data processing, but critics argue the changes will lower protection to the point where the UK risks losing its European data adequacy. By Sebastian Klovig Skelton, Data & ethics editor. Published: 31 Mar 2025 15:55. The UK government has introduced its Data Use and Access Bill (DUAB) to Parliament, but proposed reforms to police data protection rules could undermine law enforcement data adequacy with the European Union (EU). Currently going through the committee stage of Parliamentary scrutiny, the DUAB will amend the UK's implementation of the EU Law Enforcement Directive (LED), which is transposed into UK law via the current Data Protection Act (DPA) 2018 and represented in Part Three of the DPA, specifically. In combination with the current data handling practices of UK law enforcement bodies, the bill's proposed amendments to Part Three, which include allowing routine transfer of data to offshore cloud providers, removing the need for police to log justifications when accessing data, and enabling police and intelligence services to share data outside of the LED rules, could present a challenge for UK data adequacy. In June 2021, the European Commission granted data adequacy to the UK following its exit from the EU, allowing the free flow of personal data to and from the bloc to continue, but warned the decision may yet be revoked if future data protection laws diverge significantly from those in Europe. While Computer Weekly's previous reporting on police hyperscale cloud use has identified major problems with the ability of these services to comply with Part Three, the government's DUAB changes are seeking to solve the issue by simply removing the requirements that are not being complied with. For example, while the DPA 2018 does allow for overseas transfers to non-law enforcement recipients (that is, cloud providers), this is only permissible under strict conditions, such as each transfer being assessed and justified on a case-by-case basis and reported to the ICO. However, in June 2024, Computer Weekly confirmed that UK policing data uploaded to Microsoft services is routinely sent offshore for some forms of processing, while IT support is provided on a global follow-the-sun model. To circumvent the lack of compliance with these transfer requirements, the government has simply dropped them from the DUAB, meaning policing bodies will no longer be required to assess the suitability of the transfer or report it to the data regulator. Commenting on the transfer issue during a DUAB debate in the House of Lords, Liberal Democrat peer Tim Clement-Jones highlighted how, as it stands, cloud service providers routinely process data outside the UK, and are unable to provide necessary contractual guarantees to policing bodies as required by Part Three: As a result, their use for law enforcement data processing is, on the face of it, not lawful. He added: The government's attempts to change the law highlight the issue and suggest that past processing on cloud service providers has not been in conformity with the UK GDPR [General Data Protection Regulation] and the DPA. Through the DUAB, the government has also expanded the list of lawful recipients to now include a processor whose processing is governed by, or authorised in accordance with, a contract with the controller that complies with section 59, which outlines key elements that must be contained in any contract between a law enforcement controller and processor. This
includes specific details of the exact types of data, the categories of data subjects and the specific purpose of the processing, as well as explicit guarantees from the processor about how it will comply with all the requirements of Part Three.However, given the international nature of the data sharing that takes place on commodity hyperscale architecture, cloud providers are either unable or unwilling to make contractual guarantees that satisfy all aspects of Part Three.As Microsoft told the Scottish Police Authority (SPA), in relation to its Azure-hosted Digital Evidence Sharing Capability, the company cannot accept specific consent [to transfer data internationally] on a case-by-case basis as this would be impossible to operationalise.All of this effectively means that under the DUAB, the data can be routinely offshored to jurisdictions with lower data protection standards, without adherence to LED conditions around strict necessity.Similarly, while the LED provided a five-year grace period to ensure all legacy police systems could record justification logs for why a particular piece of information has been accessed with systems procured after May 2016 were required to have this capability from the start most policing systems in the UK still do not have this capability.Instead, the UK government has simply removed the requirement to record these justifications, arguing that the change will save police time and that the data has little evidentiary value because people are unlikely to record an honest justification anyway.According to Owen Sayers a long-term commentator on DPA Part Three compliance issues with more than 25 years of experience in delivering secure solutions to policing and the wider criminal justice sector changing the law in this way will permanently diverge UK law from the LED requirements.He added that while UK police have been breaking the law in practice since the DPA came into effect in May 2018, the law they were breaking was at least aligned to those in the European Union.Even though in practical terms the UK hasnt actually been protecting personal data as theyre required to under the LED, their law did at least give recourse to a data subject to take action about this processing (even if no one actually did so), he said.Once DUAB comes into force, however, the landscape has totally changed. Not only will UK law enforcement bodies be sending massive amounts of personal data (including a lot of data about EU citizens) offshore to a range of countries not deemed adequate by the EU, but UK law will have change to make it legal for them to do so.By making these changes under DUAB, the government have thrown into sharp relief that law enforcement bodies are breaching the law today theyve literally confirmed it by modifying the law to give Microsoft and AWS this special status.Computer Weekly contacted the Home Office about the threat to the UKs LED adequacy created by the governments proposed changes to the law enforcement data protection regime.We have introduced some targeted amendments in the Data Use and Access Bill to improve public trust and to drive up law enforcement efficiency by simplifying the legislation. We are committed to data adequacy and had the UKs adequacy decisions in mind when producing this bill, said a spokesperson. 
Any changes to our data protection regime must not come at the expense of security, and high standards of protection will continue to be applied. A Home Office source told Computer Weekly that the use of cloud providers in particular has caused some confusion, and that measures contained within the bill are intended to give law enforcement the confidence to use cloud processors. However, they said the use of cloud services must not come at the expense of security, and high standards of protection will continue to be applied.
Read more about police technology
Driving licence data could be used for police facial recognition: The government's Crime and Policing Bill could allow police to access the UK driving licence database for use in facial recognition watchlists, but the Home Office denies biometric data would be repurposed in this way.
Axon still in possession of Police Scotland encryption keys: Supplier's possession of encryption keys for Police Scotland data sharing system opens potential for access and transfer of sensitive data without the knowledge or consent of the force.
UK police forces supercharging racism with predictive policing: Amnesty International says predictive policing systems are supercharging racism in the UK by taking historically biased data to further target poor and racialised communities.
-
T-Levels not attracting as many students as hopedwww.computerweekly.comNewsT-Levels not attracting as many students as hoped A report from the National Audit Office has found that fewer students started T-Levels this year than previously predicted ByClare McDonald,Business Editor Published: 31 Mar 2025 16:00 Interest in T-level qualifications was overestimated by the Department for Education (DfE), according to a report by the National Audit Office (NAO).In its Investigation into introducing T-levels report, the NAO claimed the DfE overestimated the number of students who would choose the T-level route post GCSE. Some 25,508 students started a T-level in September 2024, which while a 59% year-on-year (YoY) increase represents only 42% of the DfEs estimate made in November of 2022.Originally, the DfE had aimed to have 100,000 students starting a T-level in September of this year, though it has revised its numbers due to slower-than-expected uptake, with its latest model showing around 50,000 to 60,000 students will be taking T-levels by September 2027.Gareth Davies, head of the NAO, said: T-levels were developed to provide crucial qualifications and industry experience to students, allowing them to go on to further education or begin roles in skilled jobs.They have the potential to offer new opportunities for young people and address critical skills gaps across the economy. Although the Department for Education has made progress in delivering the wide range of courses available, efforts must be made to increase student numbers and realise all the potential benefits of T-levels.T-levels have been in the making since 2016, when the Independent Panel on Technical Education recommended more of a focus on technical skills development in the UK.T-levels were pitched as qualifications which would provide these necessary skills for particular roles in line with what the UK needs for economic growth, particularly as the government has continually highlighted its ambitions of becoming a global tech superpower. But there are number of skills gaps across the UK, with concerns among employers there are not enough skilled workers to fill technical job roles so, are T-levels the answer?As of this year, there are 21 T-levels available to study, including in digital infrastructure and support services, digital production design and development, and engineering, with more expected in the future once some kinks have been worked out with the course content.One of the common complaints made by employers about graduates of tech courses is that they dont necessarily have the skills needed to fill the roles, with many stating that internal skills and talent development is a potential answer something T-levels may address through the amount of hours participating students spend on placements gaining real-world skills.So far, 98% of students who have taken part in a T-level have done an industry placement, though the Department for Education has been facing difficulties trying to raise awareness about T-level qualifications among students. 
Since the number of students who can take T-levels is dependent on industry placements, the DfE has concerns that a lack of willing industry participants could have an impact on possible student uptake in the future. Attainment is also something to note, with the percentage of students attaining their T-levels dropping as more subjects have been introduced: 89% of students achieved at least a pass last year, a YoY drop from 94% in 2023 and from 97% in 2022. T-levels also typically cost more to run than other Level 3 qualifications. The DfE provides T-level providers with between £5,500 and £7,000 per T-level student, compared with a maximum contribution of £4,800. By the end of this month, an estimated £1.25bn will have been spent by the Department for Education on T-levels since their inception. The NAO made a number of recommendations in its report to address the lack of student numbers, as well as delays in expanding the number of T-levels available. As industry placements are a vital part of offering T-levels, the NAO urged engagement between local education providers and employers to ensure the types of T-levels and the skills learned match the technical skills needs of that particular area. It also recommended the DfE develop a system to ensure the impact on T-levels is considered as part of any strategic changes to the development of technical education. But many in the industry are invested in the success of T-levels as a solution to the sector's skills gaps. Bev White, CEO of recruiter Nash Squared, has reported on Computer Weekly that T-levels could be the answer to filling industry roles where skills may currently be lacking. She said: My message to employers is to be curious about T-levels, lean in. They could be a fantastic source of fresh new talent for your business. Hundreds of employers have already hosted T-level students on industry placements, and that number is set to grow.
Read more about digital education
Speaking at the Bett Show this year, the UK's education secretary outlined the ways in which teachers will be using technologies such as AI in the future, including planning lessons and marking work.
Research by EngineeringUK and The Royal Society has found that students with special educational needs have more interest in tech careers than other pupils.
-
Understanding of black box IT systems will reduce Post Office scandal-like riskwww.computerweekly.com
-
Microsoft restates commitment to OpenAI amid analyst note about datacentre expansion rollbackswww.computerweekly.comadam121 - stock.adobe.comNewsMicrosoft restates commitment to OpenAI amid analyst note about datacentre expansion rollbacksMicrosoft pushes back on analyst claims its changing relationship with OpenAI is forcing it to scale back its datacentre expansion plans in the US and EuropeByCaroline Donnelly,Senior Editor, UKPublished: 31 Mar 2025 14:45 Microsoft has pushed back against claims its decision to cancel and defer at least 2GW of datacentre projectsin the US and Europe is indicative of its fraying relationship with OpenAI.US analyst TD Cowen published a research note on 26 March 2025 that suggested the public cloud giant had cancelled and deferred datacentre lease agreements in the US and Europe that would have increased its compute capacity by at least 2GW.The reason for the rollback on its plans was, according to TD Cowen, due to Microsofts decision not to support OpenAIs incremental training workloads.TD Cowen had previously said the two companies were involved in a fraying relationship, after Microsoft confirmed in January 2025 that the exclusivity cloud hosting deal between the two firms had been rejigged.A Microsoft blog post, dated 21 January 2025, confirmed OpenAI had made a large Azure commitment that included changes to the exclusivity on new capacity, moving to a model where Microsoft has a right of first refusal.This means Microsoft gets first refusal on whether or not it wants to host OpenAI workloads, but OpenAI also reserves the right to build its own capacity with other partners if Microsoft cannot meet its needs.Microsoft has now issued a statement to Computer Weekly, pushing back on TD Cowens take on the situation, while also restating the strength of the working relationship between the company and OpenAI.Read more about Microsoft and OpenAIUS analyst TD Cowen publishes research note pointing to further rollbacks on Microsofts datacentre expansion plansThe Competition and Markets Authorityhas begun looking at whether Microsofts relationship with OpenAI is anti-competitive.In reference to its decision to scale back its datacentre expansion plans, Microsoft said its well-positioned to meet the current and increasing customer demand its seeing for its services thanks to the significant investments its made in its infrastructure to this point.Last year alone, we added more capacity than any prior year in history, said a Microsoft spokesperson. While we may strategically pace or adjust our infrastructure in some areas, we will continue to grow strongly in all regions.This allows us to invest and allocate resources to growth areas for our future. 
Our plans to spend over $80bn on infrastructure this financial year remain on track as we continue to grow at a record pace to meet customer demand. Microsoft has been a partner in OpenAI since 2019, with the two firms previously stating that they were working towards a shared goal to responsibly advance artificial intelligence research while democratising the technology and making it accessible to all. Around the same time that Microsoft released details of its reworked cloud hosting arrangement with OpenAI, the latter released details of its $500bn effort to expand the infrastructure underpinning its services through the launch of the Stargate Project. Softbank, Oracle, MGX and OpenAI are the equity funders for the initiative, while Microsoft is listed as a technology partner. In reference to its ongoing partnership with OpenAI, the Microsoft spokesperson said: OpenAI continues to be a great partner. We remain committed to pushing the frontier of AI forward, driving innovation, and making cutting-edge models accessible to our customers and partners.
-
Podcast: HDD safe from flash for a decade or morewww.computerweekly.com
-
Meeting the UKs compute capacity needs: Alternatives to hyperscale datacentre buildswww.computerweekly.comThe government has not been shy about its plans to accelerate the pace of new datacentre builds in the UK since coming to power in July 2024.There have been commitments to lower the planning barriers that have previously slowed the pace of new datacentre builds, and lots of talk about how encouraging the sectors growth will bring positive economic benefits to the UK. Another acknowledgement from the government of the importance of datacentres occurred in September 2024, with the news that server farms are to be reclassified as critical national infrastructure (CNI).This, in turn, has promoted a raft of announcements from developers about their plans to build large-scale facilities, housing compute-intensive artificial intelligence (AI) workloads, to capitalise on the governments enthusiasm for ensuring such projects get over the line.Chief among these projects is a 10bn proposal to build Europes biggest AI datacentre in Blyth, Northumberland, funded by US investment firm Blackstone and supported by the UK governments Office for Investment.There is also a government-backed plan to build a 3.75bn hyperscale datacentre on a plot of green belt land neighbouring the South Mimms Service Station in Hertfordshire, which was announced in the same month.This project is being overseen by a company known as DC01UK, which secured local council approval for its plans in January 2025, with the government previously describing the project as a prime example of the type of project it wants to encourage more of in the UK.And while the governments actions have seemingly worked wonders for increasing the number of projects in the UK datacentre markets development pipeline, questions remain about the pressure all this development will have on the countrys already creaking power grids.Before the governments pro-datacentre growth interventions, concerns were also already being aired by real estate consultancies and industry analysts about the UK running out of suitable sites in which to accommodate the growing demand for large-scale compute capacity.This is why the current government has permitted more datacentres to be built on protected green belt sites in recent months, despite opponents to this idea querying whether it is right, and even necessary, to sacrifice the UKs green spaces in pursuit of economic growth.Glasgow-based Asanti Data Centres offers hosting facilities at six sites across the UK, including in Scotland, Manchester, Farnborough, Reading and Leeds.Stewart Laing, the companys CEO, tells Computer Weekly the UK needs to develop a multi-faceted strategy to meet its growing compute demands, because simply building more hyperscale facilities is neither practical nor sustainable.Instead, the government should consider throwing its support behind operators that are also able to offer smaller-scale, strategically placed facilities, rather than solely championing the building of even more power-hungry hyperscale facilities.Hyperscale facilities typically operate on a campus-style model, with sites spanning 100 hectares or more. 
A single datacentre building can be the equivalent in size to four football fields, with sites often consisting of multiple such buildings, says Laing.At this scale, dedicated power substations are required, raising serious concerns about the UKs ability to divert sufficient energy to support these operations especially when businesses and consumers are struggling with high energy costs.Laings company has traditionally favoured the building of smaller-scale datacentres that are sited in geographically diverse locations, whereas the hyperscale developers have typically focused on building out their presence in and around London and the south-east of England.Developers tend to pay a premium to acquire sites in this region, when there are so many other parts of the UK crying out for investment that would fit the bill too, argues Laing.Although there is still a challenge to power diverse locations, there is a much lower energy requirement and also the opportunity to stimulate local economies outside major urban hubs, he adds.Furthermore, the economic implications of inviting large, US-based corporations for hyperscale development cannot be overlooked. Issues surrounding tax contributions from tech giants such as Google, Amazon and Microsoft remain a concern, he continues.Surely encouraging alternative datacentre models within the UK would deliver greater benefits to the local economy as well as more equitable tax contributions.Read more about datacentre capacity planningSpace and power constrain datacentre planning: The government needs to tackle the resource issues that act as roadblocks to building out UK datacentre capacity.ARM and Meta plotting a path to dilute GPU capacity: Meta wants to make artificial intelligence available to everyone who uses its platforms, but scaling AI to over one billion people is not going to be cheap.Laing is not alone in querying whether selling off and reserving huge banks of land for hyperscale datacentres is a good idea, because these projects typically have long lead times, during which the investment priorities of the operators are subject to change.An example of this is a recent analyst note from TD Cowen about public cloud giant Microsofts alleged plans to scale back its datacentre buildout plans in the US.The note, widely distributed on professional social networking site LinkedIn, states that Microsoft has cancelled leases in the US that would have increased its compute capacity by a couple of hundred megawatts with at least two private datacentre operators.While we have yet to get the level of colour via our channel checks that we would like into why this is occurring, our initial reaction is that this is tied to Microsoft potentially being in an oversupply position, the analyst note states.In our view, this indicates a loss of a major demand signal that Microsoft was originally responding to.Microsoft has declined to comment on the contents of the TD Cowen note, but the company has circulated a statement to the press that says it is well-placed to meet the demand it is seeing for its services, as a result of its past and future datacentre capacity planning activities.While we may strategically pace or adjust our infrastructure in some areas, we will continue to grow strongly in all regions, the statement reads.Even so, there is a feeling among UK-based datacentre market stakeholders that the trend towards building datacentres of ever-increasing size could be coming to an end.There are strong signs that the hyperscale datacentre boom is slowing down, 
due to a mixture of power constraints, economic pressure and changing technology demands. In major European hubs like London, Dublin and Frankfurt, power availability is becoming a bottleneck, says UK datacentre market veteran Peter Hannaford.To emphasise this point, Hannaford points to a newly published report by real estate advisory company Cushman & Wakefield, documenting datacentre growth trends during the second half of 2024 across Europe, the Middle East and Africa (EMEA). I think the hyperscale bubble will burst this year. The signs are appearing, with the news that Microsoft has cancelled leases totalling several hundred megawatts Peter Hannaford, EdgeNebulaThe report states that there are already 400GW (gigawatts) worth of outstanding requests from datacentres for connections to the power grid around London, and regulator Ofgem estimates 60-70% of these will never happen, retells Hannaford.Rising construction costs and supply chain challenges have also impacted on large datacentre builds, he continues. More importantly AI and real-time applications are increasing the need for edge computing rather than massive, centralised facilities, and some companies are choosing smaller, regional datacentres closer to users instead of massive hubs.This is a business model Hannafords newly launched startup, EdgeNebula, is seeking to popularise across the UK by converting existing pockets of disused commercial property and office spaces into edge-like micro-datacentres.The idea being that these sites will be linked together into, as he terms it, an amorphous mass of compute capacity that can be used to host cloud and AI workloads.And with trends like cloud repatriation and increased interest across Europe for sovereign cloud services, Hannaford is of the view that this could also see demand for hyperscale datacentres begin to wane.I think the hyperscale bubble will burst this year. The signs are appearing, with the news that Microsoft has cancelled leases totalling several hundred megawatts with at least two private datacentre operators, he says.The my-datacentre-is-bigger-than-your-datacentre approach is simply not sustainable. And yet the abundance of redundant real estate and pockets of available power create an ideal landscape for the deployment of micro-clouds. Furthermore, advances in connectivity solutions make distributed hyperscale a real possibility.EdgeNebula is not the only company to see the value in repurposing existing sites to meet the UKs need for compute capacity, says Derek Main, technical director and datacentre sector lead at engineering and design firm Hoare Lea.We are seeing a growing trend towards adaptation or repurposing of existing buildings, especially the repurposing of logistics warehouses for the datacentre needs, he says.It presents a prime opportunity to boost capacity and increase efficiencies, with the ability to build quicker.Colocation giant Digital Realty would agree on all those points, based on its experience of transforming a former printing press at a place called Olivers Yard in East London into a datacentre.Speaking to Computer Weekly, Samus Dunne, managing director for the UK and Ireland at Digital Realty, says the project is a sign of the companys commitment to meeting the growing demand for datacentre capacity in the UK in a sustainable and efficient way.We saw an opportunity to repurpose existing infrastructure, reducing environmental impact while staying close to our customers in key urban areas. 
Oliver's Yard in London was, and continues to be, an ideal choice; its location offers excellent connectivity, and the chance to revitalise a historic building, turning an old printing press into a cutting-edge datacentre, says Dunne. We've successfully applied this model in other locations too, including the Neckermann campus in Germany, known as Digital Park Fechenheim, a colocation facility in Frankfurt which was designed by Egon Eiermann and was once the headquarters of the Neckermann mail-order company. We're always exploring more opportunities to transform existing industrial sites into modern, high-performance datacentres to meet demand responsibly. That said, Dunne acknowledges the process of repurposing existing sites into datacentres is not without its challenges. Navigating zoning, environmental regulations and preserving historical features can also be complex, says Dunne. However, with the right expertise, flexibility and investment in emerging technologies like direct liquid cooling and renewable energy, it is possible. Even so, operators need to be aware that if they are considering following a similar path, power availability can remain a bottleneck on projects, particularly in instances where companies are hoping to host AI workloads. The UK's grid infrastructure needs significant investment to support this transition, Dunne acknowledges. Our track record of transforming industrial sites gives us the confidence to continue this approach, and we encourage others to consider it despite the challenges, as the benefits for capacity and sustainability are clear.
-
Countering nation-state cyber espionage: A CISO field guidewww.computerweekly.comQuorum Cyber's recently released Global Cyber Risk Outlook Report 2025 outlines how nation-state cyber activities, particularly from China, are evolving. According to the report, China's cyber espionage operations will likely increase in 2025, with attacks targeting Western critical national infrastructure (CNI), intellectual property, and sensitive corporate data. The report also highlights that AI-powered cyber capabilities are being leveraged by China-state-sponsored, and other, threat actors to conduct advanced campaigns and evade detection more effectively. China's alleged involvement in data theft through services like DeepSeek raises significant concerns for cyber security leaders. Reports indicate that DeepSeek's privacy policies allow user data to be stored on servers within China, making it potentially accessible to the Chinese government under local cyber security laws. Cyber security researchers have also found that DeepSeek embeds technology capable of transmitting user data to China Mobile, a state-owned entity, further heightening fears of surveillance and data exploitation. These risks are so severe that US government entities have moved swiftly towards banning their personnel from using DeepSeek, citing security concerns over data interception, including keystrokes and IP addresses. For chief information security officers (CISOs), this serves as a stark reminder of the dangers posed by foreign adversaries. To mitigate the risks of nation-state cyber threats, security leaders must take a strategic, multi-layered approach. Below are key measures that should be considered:
1. Adopt a zero-trust security model
Zero-trust assumes that every request for access, whether internal or external, must be verified. Implementing zero trust involves addressing the following core principles:
- Verify connectivity explicitly through strong authentication, for example multi-factor authentication (MFA)
- Authenticate and authorise identities, devices, infrastructure, services and applications based on strong conditional access policies
- Enforce privileged access through tactics such as just-in-time (JIT) and just-enough-access (JEA)
- Implement data protection controls based on defined classification policies
- Take an assume breach stance, operating under the assumption that connecting entities have been exposed to threats.
In partnership with many top cyber security solution providers, the NIST National Cybersecurity Center of Excellence (NCCoE) has drafted Special Publication (SP) 1800-35, Implementing a Zero Trust Architecture. The practice guide is designed to provide implementation examples and technical details on how security leaders can ultimately achieve zero trust to safeguard modern digital enterprises.
2. Strengthen supply chain security
Threat actors often exploit supply chains to gain access to larger targets. Organisations should:
- Conduct rigorous third-party risk assessments, ensuring additional rigour is applied to connected and critical third parties
- Implement contractual security obligations for vendors, ensuring key clauses such as the maintenance of strong cyber security programmes and audit rights are considered
- Continuously monitor supplier network connections and other forms of access for suspicious activity.
3. Enhance threat intelligence, monitoring and response
Threat management programmes must evolve to counter espionage threats.
Organisations should:
- Maintain cyber threat intelligence (CTI) services to track state-sponsored threat actors
- Conduct ongoing vulnerability detection and mitigation activities, ensuring programmes monitor the full digital estate
- Quickly detect and respond to threats with 24x7 detection and response and threat hunting services
- Increasingly leverage automation, including emerging artificial intelligence (AI) services, to streamline and accelerate cyber security programme processes.
4. AI and data governance practices
As AI becomes an integral part of enterprise environments, organisations must implement governance practices to manage AI solutions securely and protect corporate data. Security teams should:
- Define policies and supporting controls for the secure use of AI and data within business operations
- Ensure AI models used internally are developed and deployed with strict security controls
- Monitor third-party AI tools for compliance with security and data protection requirements
- Define and deploy strong AI and data protection controls to prevent unauthorised data exfiltration or manipulation.
The Security Think Tank on nation-state espionage
Mike Gillespie and Ellie Hurst, Advent IM: Will DeepSeek force us to take application security seriously?
Elisabeth Mackay, PA Consulting: How CISOs can counter the threat of nation state espionage.
5. Educate end-users on AI risks
The rapid adoption of AI-driven tools within the workplace increases the risk of accidental exposure or misuse of sensitive data. Organisations should:
- Conduct regular security awareness training for employees on the risks associated with AI tools
- Establish guidelines on the appropriate use of AI applications in corporate environments
- Implement policies that prevent employees from sharing sensitive corporate data into public AI models.
6. Test and improve incident response readiness
Given the sophistication of nation-state actors, organisations must ensure their response strategies are up to par. Best practices include:
- Conducting regular tabletop exercises simulating attack scenarios, including state-sponsored events
- Running red team/blue team exercises to test security defences
- Establishing and updating clear escalation protocols and contact lists, including the relevant authorities, in case of detected espionage attempts.
As CISOs and security leaders navigate this new AI-augmented era of cyber threats, leveraging strategic frameworks, advanced security tools, and frequently tested, highly operationalised processes will be essential in countering nation-state industrial espionage. By staying ahead of emerging risks, organisations can ensure the resilience of their operations in an increasingly hostile digital landscape. Andrew Hodges is vice president of product and technology at Quorum Cyber.
-
Understanding RAG architecture and its fundamentalswww.computerweekly.comAll the large language model (LLM) publishers and suppliers are focusing on the advent of artificial intelligence (AI) agents and agentic AI. These terms are confusing. All the more so as the players do not yet agree on how to develop and deploy them.This is much less true for retrieval augmented generation (RAG) architectures where, since 2023, there has been widespread consensus in the IT industry.Augmented generation through retrieval enables the results of a generative AI model to be anchored in truth. While it does not prevent hallucinations, the method aims to obtain relevant answers, based on a company's internal data or on information from a verified knowledge base.It could be summed up as the intersection of generative AI and an enterprise search engine.Broadly speaking, the process of a RAG system is simple to understand. It starts with the user sending a prompt - a question or request. This natural language prompt and the associated query are compared with the content of the knowledge base. The results closest to the request are ranked in order of relevance, and the whole process is then sent to an LLM to produce the response sent back to the user.The companies that have tried to deploy RAG have learned the specifics of such an approach, starting with support for the various components that make up the RAG mechanism. These components are associated with the steps required to transform the data, from ingesting it into a source system to generating a response using an LLM.The first step is to gather the documents you want to search. While it is tempting to ingest all the documents available, this is the wrong strategy. Especially as you have to decide whether to update the system in batch or continuously."Failures come from the quality of the input. Some customers say to me: 'I've got two million documents, you've got three weeks, give me a RAG'. Obviously, it doesn't work," says Bruno Maillot, director of the AI for Business practice at Sopra Steria Next. "This notion of refinement is often forgotten, even though it was well understood in the context of machine learning. Generative AI doesn't make Chocapic".An LLM is not de facto a data preparation tool. It is advisable to remove duplicates and intermediate versions of documents and to apply strategies for selecting up-to-date items. This pre-selection avoids overloading the system with potentially useless information and avoids performance problems.Once the documents have been selected, the raw data - HTML pages, PDF documents, images, doc files, etc - needs to be converted into a usable format, such astext and associated metadata, expressed in a JSON file, for example. This metadata can not only document the structure of the data, but also its authors, origin, date of creation, and so on. This formatted data is then transformed into tokens and vectors.Publishers quickly realised that with large volumes of documents and long texts, it was inefficient to vectorise the whole document.Hence the importance of implementing a "chunking" strategy. This involves breaking down a document into short extracts. A crucial step, according to Mistral AI, which says, "It makes it easier to identify and retrieve the most relevant information during the search process".There are two considerations here - the size of the fragments and the way in which they are obtained.The size of a chunk is often expressed as a number of characters or tokens. 
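Before going further into chunk sizing, here is a minimal sketch of the ingestion step just described: converting a raw document into plain text plus metadata expressed as JSON. The field names are illustrative only, not a prescribed schema.

```python
import json
from datetime import date

# Minimal sketch of the ingestion step: one raw document becomes plain text
# plus metadata. Field names are illustrative, not a standard schema.
def to_record(doc_id: str, text: str, author: str, origin: str) -> str:
    record = {
        "id": doc_id,
        "text": text,                 # cleaned plain text extracted from HTML/PDF/doc
        "metadata": {
            "author": author,
            "origin": origin,         # source system or URL
            "created": date.today().isoformat(),
        },
    }
    return json.dumps(record, ensure_ascii=False)

print(to_record("doc-001", "Quarterly results were up 12%...", "J. Doe", "intranet"))
```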
A larger number of chunks improves the accuracy of the results, but the multiplication of vectors increases the amount of resources and time required to process them.There are several ways of dividing a text into chunks.The first is to slice according to fragments of fixed size - characters, words or tokens. "This method is simple, which makes it a popular choice for the initial phases of data processing where you need to browse the data quickly," says Zilliz, a vector database vendor.A second approach consists of a semantic breakdown - that is, one based on a "natural" breakdown: by sentence, by section (defined by an HTML header, for example), by subject or by paragraph. Although more complex to implement, this method is more precise. It often depends on a recursive approach, since it involves using logical separators, such as a space, comma, full stop, heading, and so on.The third approach is a combination of the previous two. Hybrid chunking combines an initial fixed breakdown with a semantic method when a very precise response is required.In addition to these techniques, it is possible to chain the fragments together, taking into account that some of the content of the chunks may overlap."Overlap ensures that there is always some margin between segments, which increases the chances of capturing important information even if it is split according to the initial chunking strategy," according to documentation from LLM platform Cohere. "The disadvantage of this method is that it generates redundancy."The most popular solution seems to be to keep fixed fragments of 100 to 200 words with an overlap of 20% to 25% of the content between chunks.This splitting is often done using Python libraries, such as spaCy or NLTK, or with the text splitter tools in the LangChain framework.The right approach generally depends on the precision required by users. For example, a semantic breakdown seems more appropriate when the aim is to find specific information, such as a particular article of a legal text.The size of the chunks must match the capacities of the embedding model. This is precisely why chunking is necessary in the first place. This "allows you to stay below the input token limit of the embedding model", explains Microsoft in its documentation. "For example, the maximum length of input text for the Azure OpenAI text-embedding-ada-002 model is 8,191 tokens. Given that one token corresponds on average to around four characters with current OpenAI models, this maximum limit is equivalent to around 6,000 words".An embedding model is responsible for converting chunks or documents into vectors. These vectors are stored in a database.Here again, there are several types of embedding model, mainly dense and sparse models. Dense models generally produce vectors of a fixed size, expressed as a set number of dimensions, while sparse models generate vectors whose size depends on the length of the input text. A third approach combines the two to vectorise short extracts or comments (Splade, ColBERT, IBM sparse-embedding-30M).The choice of the number of dimensions will determine the accuracy and speed of the results. A vector with many dimensions captures more context and nuance, but may require more resources to create and retrieve. A vector with fewer dimensions will be less rich, but faster to search.The choice of embedding model also depends on the database in which the vectors will be stored, the large language model with which it will be associated and the task to be performed. Benchmarks such as the MTEB ranking are invaluable. 
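The chunking and embedding steps described above can be sketched in a few lines of Python. This is a hedged illustration rather than a reference implementation: the 150-word window and 30-word overlap simply sit within the ranges quoted above, the file name is hypothetical, and the sentence-transformers model named here is only one example of a small dense embedding model.

```python
# Fixed-size chunking with ~20% overlap, then one dense vector per chunk.
from sentence_transformers import SentenceTransformer

def chunk_words(text: str, size: int = 150, overlap: int = 30) -> list[str]:
    words = text.split()
    step = size - overlap
    # Slide a window of `size` words forward by `step`, so consecutive
    # chunks share `overlap` words.
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

model = SentenceTransformer("all-MiniLM-L6-v2")       # example embedding model
text = open("report.txt", encoding="utf-8").read()    # hypothetical input file
chunks = chunk_words(text)
vectors = model.encode(chunks)                        # shape: (n_chunks, n_dims)
print(len(chunks), "chunks,", vectors.shape[1], "dimensions each")
```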
It is sometimes possible to use an embedding model that does not come from the same LLM collection, but it is necessary to use the same embedding model to vectorise the document base and user questions.Note that it is sometimes useful to fine-tune the embeddings model when it does not contain sufficient knowledge of the language related to a specific domain, for example, oncology or systems engineering.Vector databases do more than simply store vectors - they generally incorporate a semantic search algorithm based on the nearest-neighbour technique to index and retrieve information that corresponds to the question. Most publishers have implemented the Hierarchical Navigable Small Worlds (HNSW) algorithm. Microsoft is also influential with DiskANN, an open source algorithm designed to obtain an ideal performance-cost ratio with large volumes of vectors, at the expense of accuracy. Google has chosen to develop a proprietary model, ScANN, also designed for large volumes of data. The search process involves traversing the dimensions of the vector graph in search of the nearest approximate neighbour, and is based on a cosine or Euclidean distance calculation.The cosine distance is more effective at identifying semantic similarity, while the Euclidean method is simpler, but less demanding in terms of computing resources.Since most databases are based on an approximate search for nearest neighbours, the system will return several vectors potentially corresponding to the answer. It is possible to limit the number of results (top-k cutoff). This is even necessary, since we want the user's query and the information used to create the answer to fit within the LLM context window. However, if the database contains a large number of vectors, precision may suffer or the result we are looking for may be beyond the limit imposed.Combining a traditional search model such as BM25 with an HNSW-type retriever can be useful for obtaining a good cost-performance ratio, but it will also be limited to a restricted number of results. All the more so as not all vector databases support the combination of HNSW models with BM25 (also known as hybrid search).A reranking model can help to find more content deemed useful for the response. This involves increasing the limit of results returned by the "retriever" model. Then, as its name suggests, the reranker reorders the chunks according to their relevance to the question. Examples of rerankers include Cohere Rerank, BGE, Janus AI and Elastic Rerank. On the other hand, such a system can increase the latency of the results returned to the user. It may also be necessary to re-train this model if the vocabulary used in the document base is specific. However, some consider it useful - relevance scores are useful data for supervising the performance of a RAG system.Reranker or not, it is necessary to send the responses to the LLMs. Here again, not all LLMs are created equal - the size of their context window, their response speed and their ability to respond factually (even without having access to documents) are all criteria that need to be evaluated. In this respect, Google DeepMind, OpenAI, Mistral AI, Meta and Anthropic have trained their LLMs to support this use case.In addition to the reranker, an LLM can be used as a judge to evaluate the results and identify potential problems with the LLM that is supposed to generate the response. Some APIs rely instead on rules to block harmful content or requests for access to confidential documents for certain users. 
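To illustrate the retrieval step, here is a deliberately naive sketch: brute-force cosine similarity against every stored vector, a top-k cut-off, and a grounded prompt assembled from the retrieved chunks. A production system would use an approximate nearest-neighbour index such as HNSW rather than a full scan; the function names and prompt wording here are our own.

```python
import numpy as np

def top_k(query_vec: np.ndarray, chunk_vecs: np.ndarray, k: int = 5) -> list[int]:
    """Return the indices of the k chunks most similar to the query (cosine)."""
    q = query_vec / np.linalg.norm(query_vec)
    m = chunk_vecs / np.linalg.norm(chunk_vecs, axis=1, keepdims=True)
    scores = m @ q                        # cosine similarity per stored chunk
    return list(np.argsort(scores)[::-1][:k])

def build_prompt(question: str, chunks: list[str], indices: list[int]) -> str:
    """Assemble the grounded prompt that will be sent to the LLM."""
    context = "\n\n".join(chunks[i] for i in indices)
    return (f"Answer using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}")
```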
Opinion-gathering frameworks can also be used to refine the RAG architecture. In this case, users are invited to rate the results in order to identify the positive and negative points of the RAG system. Finally, observability of each of the building blocks is necessary to avoid problems of cost, security and performance.Read more about AI, LLMs and RAGWhy run AI on-premise? Much of artificial intelligences rapid growth has come from cloud-based tools. But there are very good reasons to host AI workloads on-premiseAdvancing LLM precision & reliability - This is a guest post written by Ryan Mangan, datacentre and cloud evangelist and founder of Efficient Ether.Why responsible AI is a business imperative - Tools are emerging for real-world AI systems that focus more on responsible adoption, deployment and governance, rather than academic and philosophical questions about speculative risks.0 Comments ·0 Shares ·40 Views
-
Unleashing the power of data: redefining UK industrial growthwww.computerweekly.comThe Spring Budget is expected to play a pivotal role in advancing the UK's AI ambitions. If the government gets it right, it will be a significant step towards Britain becoming a global AI superpower.Data has a significant role in every sector of the economy and will be central to the governments growth objectives. Data underpins government operations and touches the lives of every citizen. For the AI Opportunities Action Plan to succeed, data must be leveraged in both the development of the industrial strategy and throughout its implementation. To unlock its full potential, data infrastructure should be recognised not just as a horizontal enabler but as a critical growth-driving sector in its own right, similar to advanced manufacturing or life sciences. This requires a fundamental shift in how we perceive, manage, and invest in national data assets. To facilitate this, robust data infrastructure, rolled out through the development and implementation of a National Data Infrastructure Roadmap, must be developed as part of Invest 2035: The UKs Modern Industrial Strategy, as we highlighted last year in our response to the Invest 2035 green paper.If we are serious about achieving ambitious economic growth, we will need to use data insights to monitor progress, barriers, emerging trends, and opportunities. A central dashboard aggregating real-time data from government, industry, and regional partners could enable policymakers to identify trends, such as the emergence of niche growth sectors or regional innovation clusters. These insights would allow for targeted interventions and investments. This will require not just using existing data sources but the generation of new data assets.Advanced methodologies, such as network analysis, could reveal key relationships and patterns within innovation systems, highlighting how advances in one area can spark opportunities elsewhere. For instance, breakthroughs in renewable energy technology could catalyse growth in specialised manufacturing or logistics, revealing new opportunities for investment.The government hopes AI will enable the public sector to spend less time doing admin and more time delivering the services working people rely on. The Alan Turing Institute has estimated that of the 143 million complex tasks performed by civil servants every year, approximately 1,200 person-years of work could be saved if even for each task just one minute could be freed up through AI-enabled automation.Realising these benefits hinges on addressing a pressing challenge: the UK governments data is not yet AI-ready. Our latest research, analysed LLMs' knowledge of government websites and statistics. It revealed significant shortcomings in data quality, accessibility, and interoperability, especially regarding the thousands of datasets hosted on the governments current data portal data.gov.uk. These deficiencies, which previous governments of all colours have let build up across the UK and over time, risk hampering efforts to maximise the productivity gains promised by emerging technologies such as AI. The government now has a chance to change that.Public sector and citizen data can become more FAIR (Findable, Accessible, Interoperable, and Reusable) by adopting metadata standards like Croissant. 
Designed specifically for machine learning and AI applications, Croissant enables government datasets on platforms like data.gov.uk to be indexed by tools such as Google Dataset Search, significantly improving discoverability and usability. This will allow government data to achieve a higher degree of AI readiness by boosting interoperability and encouraging broader adoption by the AI community.The government must embrace the transformative potential of smart data schemes to drive innovation across the economy. By building on data portability, interoperability, and individual control principles, initiatives like Open Banking already demonstrate how these schemes empower businesses and consumers. Open Banking involves banks and other financial institutions opening up data for anyone to access, use and share - such as product descriptions and branch locations - as well as the sharing of transaction and account data by bank customers with trusted third-party organisations.By expanding this model, smart data schemes could enable the secure and ethical flow of information across traditional sector boundaries, unlocking new growth opportunities. For instance, integrating financial data with energy usage or transportation data could catalyse innovative services that benefit consumers while driving economic activity.To realise these benefits, smart data schemes require pan-sector standards and governance mechanisms. In turn, this necessitates the establishment of a new central authority to develop cross-sector standards, ensuring interoperability and reducing data silos. Additionally, the UK should actively engage with international initiatives, such as the Data Spaces we are seeing in Europe, to ensure alignment with global standards and facilitate cross-border data flows.If we recognise that data infrastructure is a foundational layer of the economy, both an enabler and a sector in its own right, it becomes clear that it requires a comprehensive, 10-year roadmap supported by sustainable investment. A 10-year National Data Infrastructure Roadmap, backed by sustainable, multi-year funding from the Spending Review, could provide the stability required to achieve these ambitions.Building on their work over many years, establishing partnerships with leading organisations, including the Alan Turing Institute, the Open Data Institute (ODI), the Ada Lovelace Institute, the UK Catapults, and more, will be critical in developing interoperable systems, open standards, and AI-ready datasets. Collaborating with these institutions can ensure the infrastructure supports a broad spectrum of innovation and economic growth.To support AI innovation and adoption, the government plans to create a new National Data Library - a platform for managing and accessing public sector data. If it is to provide ethical and secure access to public data assets, it must be designed to be AI-ready from the outset. This includes implementing tried-and-tested data hygiene measures championed by organisations such as the ODI: adopting open, interoperable standards for safe data sharing, proactively addressing data gaps through iterative assessment, and instituting clear governance structures that balance innovation with public trust. It also means adopting a federated approach to assembling the various services needed. 
Our data-centric AI programme and recent white paper on building a better future with data and AI discuss these measures in detail.The UKs data economy is already significant, increasing its share of GDP from 6.5% in 2021 to 7.4% of GDP in 2023 - a higher proportion than any European country except Estonia. However, there is substantial untapped potential. Only 21% of businesses handling digital data currently analyse it for insights, and just 2% use it for AI or automated decision-making. Addressing this gap through coordinated policy interventions will be vital to ensuring responsible data sharing and access, improving business operations, and unlocking economic value. It will also need a sustained focus on building capability and trust while providing the technical infrastructure and incentives needed for change.As data becomes increasingly central to business and strategy, we need to recognise its value as a national asset, with both social and economic value. Recognising data as an asset can create incentives for private sector investment. This is also important for public sector data assets, where the return on investment should benefit UK taxpayers. If the data held within the National Data Library is recognised as a valuable national asset, sufficient funding for its maintenance and development would be more readily secured. This, in turn, would improve data quality, accessibility, and ultimately, its value for innovation and public benefit. To enable better investment flows into the sector, the government will need to partner with data research organisations like the ODI and accountancy bodies like the ICAEW (The Institute of Chartered Accountants in England and Wales) to develop globally aligned data standards and valuation mechanisms.Data has and will continue to be critical in driving the UK's economic growth and innovation. It must be a central part of the industrial strategy. Recognising data as critical national infrastructure - on par with roads and energy - is key. This data infrastructure can enable the successful implementation of the industrial strategy, driving growth across all sectors of the economy. By unlocking public sector data, empowering individuals with control over their data, and strengthening organisational data capabilities, the UK can ensure that data becomes a cornerstone of its economic future. With sustained investment and strategic planning, the transformative potential of data will be fully realised, securing the UKs position as a global leader in its use of data. However, we will need to move fast to avoid losing ground in the highly competitive worldwide digital economy.Read more articles on open dataGovernment, Nesta and ODI issue 600k smart data challenge to technologists: Department for Business and Trade, Challenge Works and the Open Data Institute have issued a Smart Data Challenge to app developers and entrepreneurs with a total prize fund of 600,000.Why the UK must lead on data to unlock AIs full potential: Unless the data silos in government are addressed, the UK risks falling short of the Action Plans ambitious goals to lead in AI adoption.0 Comments ·0 Shares ·36 Views
-
HMRC consults on clamping down on tax avoidance schemes that ensnare IT contractorswww.computerweekly.comHM Revenue & Customs (HMRC) has launched two anti-tax avoidance consultations, as part of its ongoing clampdown on the promoters and facilitators of disguised remuneration schemes.These types of setups have contributed to thousands of IT contractors across the UK being saddled with life-changing tax bills, and the UK government is in the middle of a concerted push to reduce the prevalence of these contrived salary payment mechanisms.The consultations were announced during the Spring Statement 2025, with the first seeking views on government proposals to tackle the promoters and enablers of tax avoidance schemes, and how to better support customers who might be targeted by them.These include proposals that would give HMRC additional powers and stronger sanctions, allowing HMRC to more efficiently and effectively disrupt the business models promoters rely on, said the government, in its consultation document.HMRC is inviting members of the public, representative bodies and advisers to feedback on this consultation, which is due to run until 18 June 2025 before the results are published later this year.A persistent and determined group of promoters of tax avoidance seek to exploit every opportunity to harm the tax system by selling tax avoidance schemes they claim side-step the rules, the consultation document continued.They cause harm to public finances and to the individuals that use the schemes they promote, who often end up with large tax bills on top of the substantial fees already paid out to the promoters. The government is determined to close down this unacceptable behaviour.The government has previously committed to tackling the problem of non-compliant umbrella companies acting as fronts for tax avoidance schemes, who lure in contractors with too good to be true take-home pay rates.It plans to do this by making employment agencies assume responsibility for ensuring the correct amount of Pay As You Earn (PAYE) contributions are paid by their workers when an umbrella company is involved in the labour supply chain.This change was announced in the Autumn Budget 2024, with the government stating that it expects the move to generate 895m in additional tax during the 2026/27 financial year by making it harder for umbrella companies to engage in tax avoidance-related activities.The second consultation is geared towards soliciting opinions on another set of HMRC proposals designed to strengthen the government tax collection agencys ability to take action against tax advisors who facilitate non-compliance from their clients.Most tax advisers in the UK are dedicated professionals who adhere to rigorous standards, helping millions of taxpayers pay the right tax, said the consultation document. However, a minority of advisers fall short of these standards. Their actions can facilitate non-compliance and contribute to the tax gap. 
This undermines trust in both the tax system and honest tax advisers.Among the proposals HMRC will be seeking feedback on are whether stronger penalties should be introduced against rogue tax advisers, including publishing details of any sanctions they are being subjected to as a result of their non-compliant behaviour.HMRC is also mulling over whether to alert the professional bodies these advisers might be members of to any suspect behaviour they might be dabbling in.Read more about tax avoidance in techAfter campaigners called for HMRC to pause all of its Loan Charge enforcement activity until the government's independent review of the policy is complete, Computer Weekly has learned that the agency is accepting requests to pause settlements.The number of non-compliant umbrella companies and tax avoidance schemes on HMRC's name and shame list has doubled in the past 12 months.HMRC said it would be particularly interested in hearing from accountancy firms, tax advisers, payroll professionals and insolvency practitioners during the feedback period of this consultation, which will end on 7 May 2025.The Spring Statement also saw Chancellor Rachel Reeves outline plans to use artificial intelligence (AI) technology to close a £500m tax gap caused by wealthy people making use of non-compliant off-shore tax schemes.As reported by Computer Weekly, this work will involve recruiting experts in private sector wealth management, and deploying AI and advanced analytics to help identify and challenge those who try to hide their wealth.On the topic of the consultations, Crawford Temple, CEO of independent payment intermediary compliance assessor Professional Passport, said they are well intended but would not have been needed if HMRC had tapped into the data it already has about these schemes earlier."HMRC is sitting on a goldmine of data that could have crushed tax avoidance schemes years ago, yet they've chosen to twiddle their thumbs," he said. "By failing to cross-reference intermediary reports and real-time payroll data, HMRC has effectively created a playground for tax avoiders."Their reactive approach has cost workers millions and the Treasury billions in lost revenue. Sluggish and slow enforcement mechanisms have transformed what could have been a precise surgical intervention into a widespread compliance crisis, and the economy is suffering as a result.""For years, HMRC has watched non-compliant umbrellas flourish while launching yet more consultations," said Temple. "Every moment of hesitation is an invitation for tax dodgers to thrive. The time for analysis is over - the time for action is now."0 Comments ·0 Shares ·69 Views
-
UK public expresses strong support for AI regulationwww.computerweekly.comEvgV - stock.adobe.comNewsUK public expresses strong support for AI regulationMost of the UK public have experienced an AI-related harm and say they want laws introduced to regulate the technology, according to national survey by the Ada Lovelace and Alan Turing InstitutesBySebastian Klovig Skelton,Data & ethics editorPublished: 27 Mar 2025 16:08 Nearly three-quarters of the UK public say that introducing laws to regulate artificial intelligence (AI) would increase their comfort with the technology, amid rising public concern over the implications of its roll-out.In response to a national survey of more than 3,500 UK residents conducted by the Ada Lovelace and Alan Turing Institutes which asked people about their awareness and perceptions of different AI use cases, as well as their experiences of AI-related harms the vast majority (72%) said laws and regulations would make them more at ease with the proliferation of AI technologies.Nearly nine in 10 said they believed it is important that the government or regulators have the power to halt the use of AI products deemed to pose a risk of serious harm to the public, while over 75% said government or independent regulators rather than private companies alone should oversee AI safety.The institutes also found that peoples exposure to AI harms is widespread, with two-thirds of the public reporting encounters with various negative impacts of the technology. The most reported harms were false information (61%), financial fraud (58%) and deepfakes (58%).The survey also found support for the right to appeal against AI-based decisions and for more transparency, with 65% saying that procedures for appealing decisions and 61% saying more information about how AI has been used to make a decision would increase their comfort with the tech.However, the institutes said the rising demand for AI regulation is coming at a time when the UK does not have a set of comprehensive regulations around the technology.In a report accompanying the survey findings, the institutes added while they welcome the recognition in the UKs AI Opportunities Action Plan that government must protect UK citizens from the most significant risks presented by AI and foster public trust in the technology, particularly considering the interests of marginalised groups, there are no specific commitments on how to achieve this ambition.This new evidence shows that, for AI to be developed and deployed responsibly, it needs to take account of public expectations, concerns and experiences, said Octavia Field Reid, associate director at the Ada Lovelace Institute, adding that the governments legislative inaction on AI now stands in direct contrast to public concerns about the tech and their desire to see it regulated.This gap between policy and public expectations creates a risk of backlash, particularly from minoritised groups and those most affected by AI harms, which would hinder the adoption of AI and the realisation of its benefits. There will be no greater barrier to delivering on the potential of AI than a lack of public trust.According to the survey which purposefully oversampled social marginalised groups, including people from low-income backgrounds and minoritised ethnic groups attitudes to AI vary greatly between different demographics, with traditionally underrepresented populations reporting more concerns and perceiving AI as less beneficial. 
For example, 57% of black people and 52% of Asian people expressed concern about facial recognition in policing, compared to 39% in the wider population.Across all of the AI use cases asked about in the survey, people on lower incomes perceived them as less beneficial than people on higher incomes.In general, however, people across all groups were most concerned about the use of their data and representation in decision-making, with 83% of the UK public saying they are worried about public sector bodies sharing their data with private companies to train AI systems.Asked about the extent to which they felt their views and values are represented in current decisions being made about AI and how it affects their lives, half of the public said that they do not feel represented.To realise the many opportunities and benefits of AI, it will be important to build consideration of public views and experiences into decision-making about AI, said Helen Margetts, programme director for public policy at the Alan Turing Institute.These findings suggest the importance of the government's promise in the AI Action Plan to fund regulators to scale up their AI capabilities and expertise, which should foster public trust. The findings also highlight the need to tackle the differential expectations and experiences of those on lower incomes, so that they gain the same benefits as high-income groups from the latest generation of AI.In their accompanying report, the institutes said that to ensure the introduction of AI-enabled systems in public sector services works for everyone, policymakers must engage and consult the public to capture the full range of attitudes expressed by different groups.Capturing diverse perspectives may help to identify high-risk use cases, novel concerns or harms, and/or potential governance measures that are needed to garner public trust and support adoption, it said.Although people's inclusive participation in both the public and private management of AI systems is key to making the technology work for the benefit of all, Computer Weekly has previously reported that there are currently no avenues to meaningful public engagement.According to the government chief scientific adviser Angela McLean, for example, there are no viable channels available to the public that would allow them to have their voices heard around matters of science and technology.In September 2024, a United Nations (UN) advisory body on AI also highlighted the need for governments to collaborate on the creation of a globally inclusive and distributed architecture to govern the technology's use.The imperative of global governance, in particular, is irrefutable, it said. AI's raw materials, from critical minerals to training data, are globally sourced. General-purpose AI, deployed across borders, spawns manifold applications globally. The accelerating development of AI concentrates power and wealth on a global scale, with geopolitical and geo-economic implications.Moreover, no one currently understands all of AI's inner workings enough to fully control its outputs or predict its evolution. Nor are decision-makers held accountable for developing, deploying or using systems they do not understand. 
Meanwhile, negative spillovers and downstream impacts resulting from such decisions are also likely to be global.It added that although national governments and regional organisations will be crucial to controlling the use of AI, the very nature of the technology itself - transboundary in structure and application - necessitates a global approach.Read more about AI governance and regulationAI Action Summit review: Differing views cast doubt on AI's ability to benefit whole of society: Governments, companies and civil society groups gathered at the third global AI summit to discuss how the technology can work for the benefit of everyone in society, but experts say competing imperatives mean there is no guarantee these visions will win out.Digital Ethics Summit 2024: recognising AI's socio-technical nature: At trade association TechUK's eighth annual Digital Ethics Summit, public officials, industry figures and civil society groups met to discuss the ethical challenges associated with the proliferation of artificial intelligence tools globally and the direction of travel set for 2025.Lord Holmes warns of increasingly urgent need to regulate AI: The real-world negative impacts of artificial intelligence will only get worse if the UK does not move to regulate the technology in a way that centres on accountability, trust and public participation, says Lord Holmes.0 Comments ·0 Shares ·59 Views
-
What happened when a tech journalist experimented with AI on a PC?www.computerweekly.comOver the past few months, the editorial team at Computer Weeklys French sister title, LeMagIT, has been evaluating different versions of several free downloadable large language models (LLMs) on personal machines. These LLMs currently include Google's Gemma 3, Meta's Llama 3.3, Anthropic's Claude 3.7 Sonnet, several versions of Mistral (Mistral, Mistral Small 3.1, Mistral Nemo, Mixtral), IBM's Granite 3.2, Alibaba's Qwen 2.5, and DeepSeek R1, which is primarily a reasoning overlay on top of distilled versions of Qwen or Llama.The test protocol consists of trying to transform interviews recorded by journalists during their reporting into articles that can be published directly on LeMagIT. What follows is the LeMagIT teams experiences:We are assessing the technical feasibility of doing this on a personal machine and the quality of the output with the resources available. Let's make it clear from the outset that we have never yet managed to get an AI to work properly for us. The only point of this exercise is to understand the real possibilities of AI based on a concrete case.Our test protocol is a prompt that includes 1,500 tokens (6,000 characters, or two magazine pages) to explain to the AI how to write an article, plus an average of 11,000 tokens for the transcription of an interview lasting around 45 minutes. Such a prompt is generally too heavy to fit into the free window of an online AI. That's why it's a good idea to download an AI onto a personal machine, since the processing remains free, whatever its size.The protocol is launched from the LM Studio community software, which mimics the online chatbot interface on the personal computer. LM Studio has a function for downloading LLMs directly. However, all the LLMs that can be downloaded free of charge are available on the Hugging Face website.Technically, the quality of the result depends on the amount of memory used by the AI. At the time of writing, the best result is achieved with an LLM of 27 billion parameters encoded on 8 bits (Google's Gemma, in the "27B Q8_0" version), with a context window of 32,000 tokens and a prompt length of 15,000 tokens, on a Mac with SOC M1 Max and 64 GB of RAM, with 48 GB shared between the processor cores (orchestration), the GPU cores (vector acceleration for searching for answers) and the NPU cores (matrix acceleration for understanding input data).In this configuration, the processing speed is 6.82 tokens/second. The only way to speed up processing without damaging the result is to opt for an SOC with a higher GHz frequency, or with more processing cores.In this configuration, LLMs with more parameters (32 billion, 70billion, etc) exceed memory capacity and either don't even load, or generate truncated results (a single-paragraph article, for example). With fewer parameters, they use less memory and the quality of writing falls dramatically, with repetitions and unclear information. Using parameters encoded on fewer bits (3, 4, 5 or 6) significantly speeds up processing, but also reduces the quality of writing, with grammatical errors and even invented words.Finally, the size of the prompt window in tokens depends on the size of the data to be supplied to the AI. It is non-negotiable. If this size saturates memory, then you should opt for an LLM with fewer parameters, which will free up RAM to the detriment of the quality of the final result.Our tests have resulted in articles that are well written. 
They have an angle, a coherent chronology of several thematic sections, quotations in the right place, a dynamic headline and a concluding sentence. However, we have never managed to obtain a publishable article. Regardless of the LLM used, including DeepSeek R1 and its supposed reasoning abilities, the AI is systematically incapable of correctly prioritising the various points discussed during the interview. It always misses the point and often generates pretty but uninteresting articles. Occasionally, it will write an entire, well-argued speech to tell its readers that the company interviewed... has competitors.LLMs are not all equal in the vocabulary and writing style they choose. At the time of writing, Meta's Llama 3.x is producing sentences that are difficult to read, while Mistral and, to a lesser extent, Gemma have a tendency to write like marketing agencies, using flattering adjectives but devoid of concrete information.Surprisingly, the LLM that writes most beautifully in French within the limits of the test configuration is China's Qwen. Initially, the most competent LLM on our test platform was Mixtral 8x7B (with an x instead of an s), which mixes eight thematic LLMs, each with just 7 billion parameters.However, the best options for fitting Qwen and Mixtral into the 48 GB of our test configuration are, for the former, a version with only 14 billion parameters and, for the latter, parameters encoded on 3 bits. The former writes unclear and uninteresting information, even when mixed with DeepSeek R1 (DeepSeek R1 is only available as a distilled version of another LLM, either Qwen or Llama). The latter is riddled with syntax errors.The version of Mixtral with parameters encoded on 4 bits offered an interesting compromise, but recent developments in LM Studio, with a larger memory footprint, prevent the AI from working properly. Mixtral 8x7B Q4_K_M now produces truncated results.An interesting alternative to Mixtral is the very recent Mistral Small 3.1 with 24 billion parameters encoded on 8 bits, which, according to our tests, produces a result of a quality fairly close to Gemma 3. What's more, it is slightly faster, with a speed of 8.65 tokens per second.According to the specialists interviewed by LeMagIT, the hardware architecture most likely to support the work of generative AI on a personal machine is one where the same RAM is accessible to all types of computing cores at the same time. In practice, this means using a machine based on a system-on-chip (SoC) processor where the CPU, GPU and NPU cores are connected together with the same physical and logical access to the RAM, with data located at the same addresses for all the circuits.When this is not the case - that is, when the personal machine has an external GPU with its own memory, or when the processor is indeed a SoC that integrates the CPU, GPU and NPU cores, but where each has access to a dedicated part of the common RAM - then the LLMs need more memory to function. 
This is because the same data needs to be replicated in each part dedicated to the circuits.So, while it is indeed possible to run an LLM with 27 billion parameters encoded on 8 bits on a Silicon M Mac with 48 GB of shared RAM, using the same evaluation criteria, we would have to make do with an LLM with 13 billion parameters on a PC where a total of 48 GB of RAM would be divided between 24 GB of RAM for the processor and 24 GB of RAM for the graphics card.This explains the initial success of Silicon M-based Macs for running LLMs locally, as this chip is a SoC where all the circuits benefit from UMA (unified memory architecture) access. In early 2025, AMD imitated this architecture in its Ryzen AI Max SoC range. At the time of writing, Intel's Core Ultra SoCs, which combine CPU, GPU and NPU, do not have such unified memory access.Writing the prompt that explains how to write a particular type of article is an engineering job. The trick to getting off to a good start is to give the AI a piece of work that has already been done by a human - in our case, a final article accompanied by a transcript of the interview - and ask what prompt it should have been given to do the same job. Around five very different examples are enough to determine the essential points of the prompt to be written, for a particular type of article. However, AI systematically produces prompts that are too short, which will never be enough to write a full article. So the job is to use the leads it gives us and back them up with all the business knowledge we can muster.Note that the more pleasantly the prompt is written, the less precisely the AI understands what is being said in certain sentences. To avoid this bias, avoid pronouns as much as possible ("he", "this", "that", etc) and repeat the subject each time ("the article", "the article", "the article"...). This will make the prompt even harder to read for a human, but more effective for the AI.Ensuring that the AI has sufficient latitude to produce varied content each time is a matter of trial and error. Despite our best efforts, all the articles produced by our test protocol have a family resemblance. It would be an effort to synthesise the full range of human creativity in the form of different competing prompts.Within the framework of our test protocol, and in the context of AI capabilities at the time of writing, it is illusory to think that an AI would be capable of determining on its own the degree of relevance of all the comments made during an interview. Trying to get it to write a relevant article therefore necessarily involves a preliminary stage of stripping down the transcript of the interview.In practice, stripping an interview transcript of everything that is unnecessary for the final article - while keeping the contextual elements that will not appear in the article but that guide the AI towards better results - means rewriting the transcript. This rewriting costs human time, to the benefit of the AI's work, but not to the benefit of the journalist's work.This is a very important point - from that point onwards, AI stops saving the user time. 
As it stands, using AI means shifting work time from an existing task (writing the first draft of an article) to a new task (preparing data before delivering it to an AI).Secondly, the description in 1,500 tokens of the outline to follow when writing an article only works for a particular type of article. In other words, you need to write one outline for articles about a startup proposing an innovation, a completely different outline for those about a supplier launching a new version of its product, yet another outline for a player setting out a new strategic direction, and so on. The more use cases there are, the longer the upstream engineering work will take.Worse still, to date our experiments have only involved writing articles based on a single interview, usually at press conferences, so in a context where the interviewee has already structured his or her comments before delivering them. In other words, after more than six months of experimentation, we are still only at the simplest stage. We have not yet been able to invest time in more complex scenarios, which are nevertheless the daily lot of LeMagIT's production, starting with articles written on the basis of several interviews.The paradox is as follows - for AI to relieve a user of some of their work, that user has to work more. On the other hand, on these issues, AI on a personal machine is on a par with paid AI online.Read more about using LLMsGoogle claims AI advances with Gemini LLM - Code analysis, understanding large volumes of text and translating a language by learning from one read of a book are among the breakthroughs of Gemini 1.5.Prompt engineering is not for dummies - This is a guest post written by Sascha Heyer in his capacity as senior machine learning engineer at DoiT International and oversees machine learning.What developers need to know about Large Language Models - A developer strolls casually into work and gets comfy in their cubicle. Suddenly theres an update alert on the laptop screen - a new generative artificial intelligence function has been released.0 Comments ·0 Shares ·68 Views
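As a rough cross-check of the memory ceilings discussed in this piece, the arithmetic can be sketched as follows. This is a back-of-envelope approximation of our own: it counts only the model weights (parameters x bits / 8 bytes) and ignores the context window, KV cache and runtime overhead, which is why a 27-billion-parameter model at 8 bits needs headroom beyond its roughly 27 GB of weights.

```python
# Approximate weight footprint only; runtime overhead and the context
# window add several more gigabytes on top of these figures.
def weight_footprint_gb(params_billion: float, bits: int) -> float:
    return params_billion * 1e9 * bits / 8 / 1e9   # gigabytes for the weights alone

for params, bits in [(27, 8), (70, 8), (24, 8), (13, 8)]:
    print(f"{params}B at {bits}-bit -> ~{weight_footprint_gb(params, bits):.0f} GB of weights")

# 27B at 8 bits is ~27 GB, which fits in 48 GB of unified memory once the
# context window is added; 70B at 8 bits (~70 GB) clearly does not.
```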
-
Can regulatory oversight alone unlock cloud competition?www.computerweekly.comCloud computing's rise is a success story under scrutiny. It has been nothing short of transformative, enabling businesses to scale operations, innovate rapidly, and optimise costs. It has become an essential pillar of modern enterprise IT, supporting mission-critical workloads across industries. From finance and healthcare to artificial intelligence (AI) and retail, the cloud is now the undisputed underlying infrastructure for digital transformation.Yet, as public cloud hyperscalers such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform solidify their dominance, concerns over market competition, licensing restrictions, and barriers to switching are gaining momentum. The UK's Competition and Markets Authority (CMA) is taking a closer look at whether the UK cloud market is functioning fairly or whether customers are being locked into specific ecosystems with limited flexibility.These regulatory discussions are unfolding at a pivotal moment for the cloud market. There is a growing number of IT providers with hybrid and multi-cloud subscription-based services. Broadcom, for instance, with its acquisition of VMware, has a streamlined portfolio focused on private, public, and/or hybrid cloud flexibility. Given VMware's footprint in enterprise IT, Broadcom is positioning itself as a viable alternative (see the discussion of VMware Cloud Foundation below) for end users seeking to escape cloud hyperscaler lock-in, as well as a useful partner for cloud service providers seeking to compete with the hyperscalers. The question is whether regulatory oversight alone can truly open the market, or if market forces can further help reduce hyperscaler dominance and end their deep ecosystem entrenchments.The cloud computing industry has reached a point where a few major providers dictate the market. The CMA's concerns are not unfounded - the three major cloud hyperscalers, AWS, Microsoft, and Google, together control a sizable share of the UK's cloud infrastructure market, benefiting from deep enterprise relationships, extensive service ecosystems, and economies of scale that are difficult to match. And this is true not just in the UK, but in other major markets, ranging from the European Union to the United States. These advantages create structural challenges for organisations seeking to diversify their cloud strategy, whether they are end users that rely on cloud service providers or other cloud service providers seeking to compete with the hyperscalers.One of the most significant barriers to competition is the cost of switching providers. Many organisations that initially embraced public cloud find themselves facing egress fees, technical dependencies, and licensing restrictions from hyperscalers that make hybrid-cloud adoption by end users more complex and costly than expected. For example, Microsoft's licensing practices have come under scrutiny, with the argument that it unfairly raises the cost of running Windows workloads on competing platforms.Yes, hyperscaler dominance isn't purely a result of anti-competitive behaviour. These companies have earned their positions in part through innovation and strategic investment. 
AWS revolutionised developer and infrastructure-focused cloud services, making them easily accessible and aligned to customers' specific operational needs. Microsoft, on the other hand, has leveraged its strong enterprise footprint to make Azure a seamless extension of its software stack. Its offerings are widely deployed and deeply embedded into corporate IT infrastructures.The challenge regulators face is determining whether these advantages give hyperscalers the ability to lock in customers and create an unfair playing field, or if they simply reflect the natural evolution of an industry where scale and efficiency drive competitive success.The banking industry offers a compelling case study in regulatory-driven competition. Open banking policies forced large financial institutions to provide API access to fintech companies, enabling new players to compete with established banks. The result was a surge in innovation, improved customer services, and increased choice, benefiting both startups and traditional financial institutions.Could a similar pro-competition model be applied to cloud computing? If regulators push for greater data portability, reduced egress fees, and fairer licensing models, hyperscalers could be forced to compete more on service quality rather than continue to benefit from vendor lock-in mechanisms. This would encourage a more diverse cloud ecosystem, allowing alternative cloud service providers to expand the overall market, while potentially providing end users with more cloud-based options to better utilise their data and applications. Yet, there are important differences between banking and cloud computing. Unlike financial institutions, which can adapt through open application programming interfaces (APIs) and partnership models, cloud providers operate at a scale that requires enormous capital investment in infrastructure, networking, and security. Regulators must be careful not to create unintended consequences - for example, excessive restrictions could reduce the incentive for hyperscalers to invest in next-generation cloud technologies.One similarity that does exist between banking and cloud computing is the presence of emerging alternatives in the cloud market that are poised to compete with the hyperscalers. This is where Broadcom's acquisition of VMware and the resulting business model adjustments become particularly relevant.For businesses looking to escape single-vendor cloud dependency, VMware Cloud Foundation (VCF) presents viable options, including private cloud, public cloud, or a combination of both through a hybrid cloud model. Where hyperscaler cloud platforms promote open ecosystem engagement and standards, they still pivot towards a design strategy that reinforces their own ecosystems. On the other hand, VCF's architectural principle is based on building for interoperability and offering a consistent, enterprise-grade cloud experience across private and public clouds.One of VCF's biggest advantages is its ability to support both virtual machines (VMs) and Kubernetes-based workloads on a single platform. Many enterprises are still running legacy applications that rely on VMs, yet also need to modernise with cloud-native, containerised applications. Instead of forcing businesses to choose between two separate architectures, VCF seamlessly integrates both. It is a perspective that has not escaped Broadcom's competitors in the market. 
A clear acknowledgment of businesses' reliance on VMs and the slow transition to containerised operations is Red Hat's launch of OpenShift Virtualisation - a competing unified platform designed to manage both virtual machines and containers, helping accelerate the shift toward modernised, container-based workloads.Additionally, recent total cost of ownership studies have indicated that VCF delivers 40-52% cost savings compared to bare-metal or alternative cloud-native solutions. This is particularly relevant in an era where businesses are reevaluating cloud costs and looking for ways to optimise spending while maintaining operational flexibility.Security and compliance are also key considerations. Many regulated industries - including financial services, healthcare, and government sectors - require hybrid cloud models to comply with data sovereignty laws. VCF enables organisations to deploy a unified cloud infrastructure while ensuring that sensitive workloads remain under direct control.As regulatory conversations evolve, VCF's value proposition - as a flexible, secure, and cost-effective option for end users and an enabler for cloud service providers - aligns well with industry needs and even CMA objectives.Regulating dominant cloud providers is a complex balancing act. If done well, it could promote a healthier, more competitive ecosystem, ensuring that businesses can choose cloud providers based on functionality rather than contractual obligations. If done poorly, it may slow down innovation, increase complexity, and create compliance burdens for all providers.It is a balancing act well understood by the CMA, the regulatory body tasked by the UK government with helping drive growth without violating its central mandate of promoting competition and protecting consumers. One potential outcome of regulation is that the hyperscalers themselves may be forced to improve. If hyperscalers can no longer rely on egress fees and licensing constraints to retain customers, they may need to rethink service deprecation policies, reduce redundant offerings, and provide clearer pricing structures. In a competitive landscape that values service quality over forced loyalty, businesses could ultimately benefit from more transparency, innovation, and choice.Yet, as in the case of financial services, regulation alone will not create more competition in the cloud marketplace. The presence of competitive options and enablers should be a factor when considering regulatory measures. In addition, businesses should take on greater responsibility for cloud architecture decisions, ensuring that vendor flexibility is a key consideration from the outset. Too often, organisations become entrenched in a single-provider cloud model not because of external constraints, but because of internal planning deficiencies. Choosing among private, public, and/or hybrid clouds requires investment in integration, governance, and skills development - regulation can lower barriers, but companies must still take proactive steps to build adaptable, future-proof IT environments.The CMA's scrutiny of the cloud market represents a critical turning point for the cloud computing industry. If regulators successfully lower switching costs, enforce fairer licensing policies, and promote data portability, end users will have more options, and other cloud providers will be better positioned to capitalise on a more competitive market.However, success won't be determined by regulation alone. 
Regulation can create opportunities, but those opportunities need to be seized within the affected market. The hyperscalers are not passive players: they will adapt, innovate and respond to regulatory changes in ways that could preserve their market dominance. Broadcom's opportunity lies in its ability to clearly articulate the value of various cloud models, simplify adoption, and prove the long-term benefits of its platform for both end users and other cloud service providers.
The cloud landscape is evolving, and the next 12 months will determine whether the hyperscalers maintain their stronghold or whether a more competitive and flexible cloud market grows significantly. Either way, the cloud market will not look the same a year from now, and given the enterprise footprint of VMware, Broadcom has a unique chance to shape its future.
Bola Rotibi is chief of enterprise research at CCS Insight
0 Comments ·0 Shares ·68 Views
-
Experts question court's rejection of former Post Office manager's Horizon appealwww.computerweekly.com
0 Comments ·0 Shares ·65 Views
-
Research team demonstrates certified quantum randomnesswww.computerweekly.com
News: A 56-qubit trapped-ion quantum computer from Quantinuum has demonstrated quantum supremacy as a random number generator.
By Cliff Saran, Managing Editor. Published: 27 Mar 2025 10:01
A team of researchers has published a paper showing that a quantum computer can produce certified randomness, which has numerous application areas, such as cryptography.
According to the paper, published in Nature, random-number generation is a natural task with which to demonstrate quantum computing's supremacy over traditional, classical computing, as randomness is intrinsic to quantum mechanics.
One of the researchers, Marco Pistoia, head of global technology applied research and a distinguished engineer at JPMorgan Chase, said that applications of certified randomness include cryptography, solving complex mathematical problems, and fairness and privacy.
The paper in Nature notes that the main challenge for any application or device that needs to receive randomness from a third-party provider, such as a hardware security module, is that it needs to verify that the bits received are truly random and freshly generated.
The team of researchers from JPMorgan Chase, Quantinuum, Argonne National Laboratory, Oak Ridge National Laboratory and the University of Texas at Austin used a technique known as Random Circuit Sampling (RCS). RCS is used to perform a certified-randomness-expansion protocol, which outputs more randomness than it takes as input. It is a task that is often used to demonstrate quantum supremacy, since it cannot be performed efficiently on a classical computer.
Scott Aaronson, Schlumberger centennial chair of computer science and director of the Quantum Information Center at the University of Texas at Austin, said: When I first proposed my certified randomness protocol in 2018, I had no idea how long I'd need to wait to see an experimental demonstration of it.
Using a 56-qubit Quantinuum System Model H2 trapped-ion quantum computer, the researchers demonstrated that a quantum computer can now achieve computational power beyond that offered by the most powerful classical supercomputers. Accessing H2 remotely over the internet, the team generated certifiably random bits.
The randomness was checked using the Frontier, Summit, Perlmutter and Polaris supercomputers equipped with graphics processing units (GPUs), which are especially suitable for quantum circuit simulations. With a combined sustained performance of 1.1 ExaFLOPS from the supercomputers, the team certified 71,313 bits of entropy, a measure of randomness.
Discussing the breakthrough, Aaronson said: This is a first step toward using quantum computers to generate certified random bits for actual cryptographic applications.
Pistoia added: This work marks a major milestone in quantum computing, demonstrating a solution to a real-world challenge using a quantum computer beyond the capabilities of classical supercomputers today.
Rajeeb Hazra, president and CEO of Quantinuum, said: Our application of certified quantum randomness sets a new standard for delivering robust quantum security and enabling advanced simulations across industries like finance, manufacturing and beyond.
Travis Humble, director of the Quantum Computing User Program and director of the Quantum Science Center, both at Oak Ridge National Laboratory, added: These results in quantum computing were enabled by the world-leading US Department of Energy computing facilities at Oak Ridge National Laboratory, Argonne National Laboratory and Lawrence Berkeley National Laboratory. Such pioneering efforts push the frontiers of computing and provide valuable insights into the intersection of quantum computing and high-performance computing.
Read more quantum computing stories: Revised roadmap projects quantum advantage by 2029: Quantinuum is planning to develop a machine called Apollo by 2029, which it claims will have enough logical qubits to perform complex calculations. Quantum innovation balances on commercial tightrope: While there is plenty of innovation in quantum technology, the industry needs greater collaboration to develop commercially viable systems.
0 Comments ·0 Shares ·64 Views
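The protocol described above hinges on a classical verifier checking that the samples returned for each challenge circuit are statistically consistent with that circuit's ideal output distribution. A heavily simplified sketch of that scoring idea follows, using a linear cross-entropy style score on a toy simulated circuit; it illustrates the general approach only, not the actual JPMorgan Chase/Quantinuum protocol, which also depends on fresh circuits, strict response deadlines and exascale classical verification.
```python
# Simplified illustration of certified-randomness verification via a
# linear cross-entropy style score on a toy "random circuit".
import numpy as np

rng = np.random.default_rng(7)
n_qubits = 10
dim = 2 ** n_qubits

# Stand-in for a random challenge circuit: a Haar-random unitary obtained by
# QR-decomposing a complex Gaussian matrix.
z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
unitary, _ = np.linalg.qr(z)

# Ideal output distribution for the |0...0> input state.
ideal_probs = np.abs(unitary[:, 0]) ** 2
ideal_probs /= ideal_probs.sum()

# Pretend the quantum device returned these samples. Here we cheat and sample
# from the ideal distribution; a classical spoofer could not do this quickly
# for large circuits, which is what the hardness argument relies on.
samples = rng.choice(dim, size=2000, p=ideal_probs)

# Linear cross-entropy benchmark: near 1 for faithful sampling,
# near 0 for uniform guessing.
xeb = dim * ideal_probs[samples].mean() - 1.0
print(f"XEB score (device samples): {xeb:.3f}")

uniform_guess = rng.integers(0, dim, size=2000)
print(f"XEB score (uniform guesser): {dim * ideal_probs[uniform_guess].mean() - 1.0:.3f}")

# A verifier would only accept the samples as containing fresh entropy if the
# score clears a pre-agreed threshold within the allowed response window.
PASS_THRESHOLD = 0.8  # illustrative value, not from the paper
print("accepted" if xeb > PASS_THRESHOLD else "rejected")
```
In the real experiment this verification step is what consumed the supercomputer time: the ideal probabilities for 56-qubit circuits are enormously expensive to compute classically, which is precisely why passing the check certifies genuinely quantum, and hence fresh, randomness.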
-
Advanced Software fined £3m over LockBit attackwww.computerweekly.com
News: The ICO has issued a £3m fine to software provider Advanced in the wake of security failings that led to significant disruption to NHS customers in a ransomware attack.
By Alex Scroxton, Security Editor. Published: 27 Mar 2025 0:01
The UK's Information Commissioner's Office (ICO) has today fined Advanced Computer Software Group, now known as OneAdvanced, £3.07m for cyber security failings that exacerbated the impact of a LockBit ransomware attack against the organisation.
The cyber attack, which occurred in August 2022, saw services provided by Advanced to customers, including the NHS and other healthcare providers, extensively disrupted when they lost access to its Adastra clinical patient management platform.
One of the bodies that relied on Adastra at the time was the frontline 111 service. Other parts of the health service affected included ambulance dispatch, emergency prescriptions, out-of-hours patient services and referrals.
The ICO said the attack, which began through a customer account that did not have multifactor authentication (MFA) enabled, saw the data of 79,404 people stolen. Among this data were details of how to gain access to the properties of 890 individuals who were receiving care at home.
The regulator concluded that Advanced's health and care subsidiary did not have appropriate technical and organisational measures in place to guarantee the security of its IT systems, highlighting gaps not just in MFA, but also in vulnerability scanning and patch management.
The security measures of Advanced's subsidiary fell seriously short of what we would expect from an organisation processing such a large volume of sensitive information. While Advanced had installed multifactor authentication across many of its systems, the lack of complete coverage meant hackers could gain access, putting thousands of people's sensitive personal information at risk, said information commissioner John Edwards.
People should never have to think twice about whether their medical records are in safe hands. To use services with confidence, they must be able to trust that every organisation coming into contact with their personal information, whether that's using it, sharing it or storing it on behalf of others, is meeting its legal obligations to protect it, added Edwards.
With cyber incidents increasing across all sectors, my decision today is a stark reminder that organisations risk becoming the next target without robust security measures in place. I urge all organisations to ensure that every external connection is secured with MFA today to protect the public and their personal information. There is no excuse for leaving any part of your system vulnerable, he said.
The fine, which is about half the amount initially proposed, marks a first for the ICO, as it has never before levied such a penalty on a data processor under UK data protection law.
Its significant reduction is the result of a number of factors, including representations made by Advanced on the progress it has made, and the organisation's proactive engagement throughout the incident, which included full cooperation with the National Cyber Security Centre (NCSC), the National Crime Agency (NCA) and the NHS.
The ICO and Advanced have now reached a voluntary settlement, by which Advanced acknowledges the decision to reduce the fine and will pay a final settlement without appeal. Edwards said this settlement was welcome and provided regulatory certainty without the need to incur further costs and delays associated with an appeal.
The ICO warned others that they must take more proactive steps to assess and mitigate the well-known risk factors that enable ransomware gangs like LockBit to operate their criminal enterprises with ease. These include implementing MFA by default and without exception, and doing more to assess vulnerabilities and fix them in a timely manner.
An Advanced spokesperson said: What happened over two-and-a-half years ago is wholly regrettable. With threat actors operating with increasing sophistication, it is upon all businesses to ensure their cyber posture is continually strengthened. Cyber security remains a primary investment across our business, and we have learned a great deal as an organisation since this attack.
We reported the incident to the ICO in August 2022 and are pleased to see this matter concluded. Our focus remains steadfast on supporting our customers as they navigate the rapidly evolving technology landscape, ensuring they achieve their strategic growth and operational efficiency goals.
Read more about ransomware: This key member of the Black Basta ransomware gang is wanted by the US justice system. He narrowly escaped extradition at the end of June 2024, with the help of highly placed contacts in Moscow. Several factors, including the impact of law enforcement operations disrupting cyber criminal gangs and better preparedness among users, may be behind a significant drop in the total value of ransomware payments. The criminal ransomware fraternity was hard at work over the festive period, with attack volumes rising and a new threat actor emerging on the scene.
0 Comments ·0 Shares ·74 Views
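The ICO's core recommendation, MFA on every external connection, is easy to reason about in code. Below is a minimal sketch of time-based one-time password (TOTP) verification along the lines of RFC 6238, the mechanism most authenticator apps implement. It is illustrative only: a real deployment should use a maintained library, rate-limit attempts and keep the shared secret in a secrets manager.
```python
# Minimal TOTP (RFC 6238-style) verification sketch, illustrating the MFA
# check that should guard every external login. Illustrative only.
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, at_time=None, step=30, digits=6):
    """Derive the one-time code for the 30-second time step containing at_time."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at_time is None else at_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % (10 ** digits)).zfill(digits)


def verify_code(secret_b32, submitted, window=1):
    """Accept codes from the current step and +/- `window` steps of clock drift."""
    now = time.time()
    return any(
        hmac.compare_digest(totp(secret_b32, now + offset * 30), submitted)
        for offset in range(-window, window + 1)
    )


if __name__ == "__main__":
    demo_secret = "JBSWY3DPEHPK3PXP"  # example base32 secret, not a real credential
    code = totp(demo_secret)
    print(code, verify_code(demo_secret, code))  # expected: a 6-digit code, True
```
The design point the regulator is making is coverage rather than sophistication: the check itself is simple, so the failure mode is almost always an account or connection that was never enrolled, exactly the gap exploited in this attack.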
-
How to respond to digital regulation in 2025www.computerweekly.com
Given digital technology is so central to how we live, work, transact and communicate, we have recently seen the finalisation of an unprecedented amount of digital regulation in both the UK and the EU. In such a fast-moving environment, this raises the question: how should companies stay ahead of the curve? Although the exact response will obviously be company-specific, a proactive and strategic approach is key.
As an overriding theme, 2025 is set to be a pivotal year in terms of the regulatory implementation of many new digital rules. This includes online safety and competition obligations, where we will see both the relative maturing of the EU regimes and the corresponding UK regimes (e.g. the Online Safety Act) coming into effect.
Taking the Online Safety Act as an example, this requires that online services build in safety by design and move from a reactive to a proactive response to online harms. Online services should be preparing now, given Ofcom's timeline for completion of risk assessments in 2025. Ofcom has estimated that approximately 100,000 online services are within scope.
From a broader consumer protection perspective, this year will usher in the new powers of the UK Competition and Markets Authority (CMA) to directly enforce a range of consumer law. The CMA is expected to prioritise activity to ensure that consumers are treated fairly when purchasing online. This should be high on the board and executive agenda of consumer-facing companies that have a significant online presence, in particular given the new financial penalties that the CMA has at its disposal, including up to 10% of worldwide turnover.
It almost goes without saying that there will also be a continuing regulatory focus on AI, which we see as coalescing around the key themes of risk, growth, competitiveness, protection of human rights and freedoms, and accountability. Taking just one example, a significant amount of implementation activity under the EU's AI Act will take place. Firms with activities in the EU should already be assessing which of their current and planned AI systems and models fall within the scope of this regulation and conducting a gap analysis against key requirements.
Regulators are also focused on interventions to unlock economic growth and promote innovation. One example is in relation to cloud computing, which is so integral to the economy today. New EU regulatory requirements under the EU Data Act will go live during the year, designed to address concerns about a lack of cloud service provider competition, customer lock-in and limited interoperability, all of which are believed to act as barriers to multi-cloud adoption. In the UK, similar concerns in respect of the largest cloud service providers have been highlighted by the CMA inquiry team, with a recommendation that they be investigated under the UK's new digital markets regime that went live in January.
Another example is in relation to data sharing, with new rules for sharing connected product data, such as data from connected cars or smart home devices, going live in the EU. The UK's Data (Use and Access) Bill is also likely to become law, which is expected to lead to the introduction of new Smart Data schemes in certain areas of the economy, such as the energy market. This aims to generate benefits such as those achieved via open banking.
One thing is clear - the role of digital regulation in today's society will continue to be central to the public, political and economic debate for the remainder of the year ahead.
Deloitte's Digital Regulatory Outlook 2025 is available here.
Suchitra Nair is partner and head of Deloitte's EMEA centre for regulatory strategy. Robert MacDougall is director in Deloitte's centre for regulatory strategy.
0 Comments ·0 Shares ·78 Views
-
Chancellor Rachel Reeves to use AI to catch wealthy tax dodgerswww.computerweekly.com
News: HMRC's use of artificial intelligence is one of many initiatives outlined in the Chancellor's Spring Statement.
By Cliff Saran, Managing Editor. Published: 26 Mar 2025 15:45
In her Spring Statement, Chancellor Rachel Reeves said government departments will reduce their administrative budgets by 15% by the end of the decade, adding that the savings on back-office functions will total at least £2.2bn per year by 2030.
Artificial intelligence (AI) has a strong presence in the Chancellor's statement, with the government confirming the creation of a £3.25bn Transformation Fund to support the fundamental reform of public services, which it said will seize the opportunities of digital technology and AI, and transform frontline delivery to release savings for taxpayers over the long term. Among the areas receiving funding is £42m for three pioneering Frontier AI Exemplars led by the Department for Science, Innovation and Technology (DSIT). The Chancellor said these Exemplars will test and deploy AI applications to make government operations more efficient and effective, and improve outcomes for citizens by reducing unnecessary bureaucracy.
The government has previously said it wants to overhaul the UK's regulatory system and fast-track planning decisions on major economic infrastructure projects. From an AI perspective, this potentially means planning permission for AI datacentres will be fast-tracked. Its AI Opportunities Action Plan sets out how AI will help drive efficiency and growth across the UK economy.
Among the areas where AI is being pushed is the Ministry of Defence (MoD), where 10% of its procurement budget from next year is set to be spent on what the Chancellor's statement refers to as novel technologies. These include dual-use technologies, uncrewed and autonomous systems, and AI-enabled capabilities.
As part of these plans to introduce novel technologies to the MoD, £400m is being ring-fenced for UK Defence Innovation (UKDI) to enable innovative companies to engage in defence procurement with the MoD. UKDI's role will be to drive significantly faster innovative procurement, and actively foster a strong UK defence technology sector.
In a bid to tackle wealthy tax avoiders and bring in £500m in unpaid taxes, HMRC will be overhauling its approach to offshore tax non-compliance by the wealthy. This will include recruiting experts in private sector wealth management and deploying AI and advanced analytics to help identify and challenge those who try to hide their wealth.
While the Chancellor's statement discusses where AI will be deployed, Reeves failed to address legacy IT, which has been raised by the Public Accounts Committee (PAC) as a major issue preventing the effective deployment of AI in the public sector. The PAC's Use of AI in government report notes that AI relies on high-quality data to learn. However, according to DSIT, government data is often of poor quality and locked away in out-of-date legacy IT systems.
Read more about AI in government: AI Action Summit: UK and US refuse to sign inclusive AI statement. Developing AI datacentres: Has the UK government got what it takes?
It's not a surprise that AI features in the Chancellor's plans for the economy. Jonathan Hardinges, chief strategy officer at data analytics consultancy GlobalData, said: Having survived many downturns in business, the survival technique is always one of agility, and the ability to not just identify the challenge, but respond rapidly without being knee-jerk. There must be a strategic intent underlying how leaders in business respond to crises.
Hardinges said AI can play a significant role in helping business leaders tackle the complex trading environment and geopolitical tension that currently exists. We also have a plethora of tools at hand that help us innovate and work more effectively than we have done before, he added.
AI is the business leader's secret weapon, but unless one understands how to use it strategically and effectively, businesses will be at the tail end of the upturn and struggle to survive in an increasingly competitive, albeit innovative, economic climate.
0 Comments ·0 Shares ·55 Views
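HMRC has not published how its analytics will work. Purely as an illustration of the general pattern referred to above, scoring cases for unusual patterns so investigators can prioritise them, here is a sketch using scikit-learn's IsolationForest on invented features; it is not HMRC's system and the data is synthetic.
```python
# Generic anomaly-scoring sketch: flag unusual cases for human review.
# This is NOT HMRC's system; features and numbers are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Invented features per case: declared income, offshore transfer volume,
# number of linked entities, ratio of asset growth to income.
typical_cases = np.column_stack([
    rng.normal(60_000, 20_000, 5_000),
    rng.normal(1_000, 500, 5_000),
    rng.poisson(1, 5_000),
    rng.normal(1.0, 0.2, 5_000),
])
unusual_cases = np.column_stack([
    rng.normal(55_000, 20_000, 20),
    rng.normal(250_000, 50_000, 20),   # heavy offshore transfers
    rng.poisson(8, 20),                # many linked entities
    rng.normal(4.0, 0.5, 20),          # asset growth far above income
])
cases = np.vstack([typical_cases, unusual_cases])

model = IsolationForest(contamination=0.005, random_state=0).fit(cases)
scores = model.decision_function(cases)  # lower score = more anomalous

# Hand the 20 lowest-scoring cases to human investigators for challenge.
flagged = np.argsort(scores)[:20]
print("flagged case indices:", flagged)
```
The key design point in any such pipeline is that the model only prioritises cases; as the article notes, the identified cases are then challenged by human experts, and the quality of the underlying data (the legacy IT problem the PAC highlights) determines whether the scores mean anything at all.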
-
4 Day Week Foundation launches tech sector pilotwww.computerweekly.com
News: UK tech firms are being invited to join a pilot programme run by the 4 Day Week Foundation, which aims to help prepare them for implementing shorter working weeks.
By Sebastian Klovig Skelton, Data & ethics editor. Published: 26 Mar 2025 13:35
The 4 Day Week Foundation is launching a pilot programme for companies in the technology sector to support their transition to a four-day working week.
Beginning in May 2025, the programme will provide six weeks of training and workshops designed to help tech firms prepare for and implement four-day weeks. Alongside the core training programme, each organisation will also benefit from research support, with academics from the University of Cambridge, University of Sussex and Newcastle University measuring the impact of their four-day week trial. Companies that have already successfully implemented four-day weeks will also be available for networking opportunities and best practice sharing.
A previous UK trial of the four-day working week, conducted from June to December 2022, which ended with most firms involved deciding to continue with shorter weeks on a permanent basis, showed that many enterprises simply do not need to spend extra money, or lose productivity, when shifting staff to shorter hours, particularly if their job is desk-based.
Speaking with Computer Weekly at the time, tech firms involved in the trial said the sector was well placed to benefit from a four-day week because of the huge variety of digital tools available, and that offering shorter weeks was helpful for the attraction and retention of talent.
Prior to that, a four-day week trial run in Iceland by Reykjavík City Council and the national government, which included more than 2,500 workers, found that productivity either remained the same or improved in the majority of workplaces involved.
Nothing better represents the future of work than the tech sector, which we know is an agile industry ripe for embracing new ways of working, such as a four-day week, said Sam Hunt, business network coordinator at the 4 Day Week Foundation.
As hundreds of British companies have already shown, a four-day, 32-hour working week with no loss of pay can be a win-win for workers and employers. The nine-to-five, five-day working week was invented 100 years ago and no longer suits the realities of modern life. We are long overdue an update.
In January 2025, the 4 Day Week Foundation announced that over 200 companies in the UK have permanently adopted a reduced-hours four-day week with no loss of pay for more than 5,000 employees, with the vast majority working 32 hours a week or less. Of these, 24 companies are from the technology, IT and software sector.
According to Sian Herrington, CEO of IT firm Noteworthy Support, one of the companies supporting the pilot programme, four-day weeks have been a game-changer for the firm, which implemented the practice at its inception in 2018.
It's not just about giving our team more personal time; it's about creating a culture that values efficiency, well-being and balance, she said. We've seen consistently high productivity, engagement and overall job satisfaction. Adopting this model has helped us attract top talent and reinforced our belief that happier teams build better businesses. We will always offer a four-day working week to our team.
In December 2024, learning technology firm Thrive took part in the UK's first medical trial of the four-day week, which saw researchers at the University of Sussex collect data on 115 Thrive employees between July and October 2024, including MRI scans, blood tests and sleep tracking, as well as weekly questionnaires covering their workplace experiences and well-being.
According to the results, there were notable improvements in a number of employee well-being metrics, particularly those related to stress levels, sleep quality and detachment from work, indicating a significant improvement in work-life balance. Researchers concluded that, overall, the shorter hours led to happier, more productive staff.
In November 2023, the think tank Autonomy, a supporter of the 4 Day Week Foundation, published a research paper that argued automating jobs with large language models (LLMs) could lead to significant reductions in working time without a loss of pay or productivity.
Autonomy noted that although people have long been predicting and expecting far shorter working weeks due to technological advances, historical increases in productivity over recent decades have not translated into increased wealth or leisure time for most people, largely as a result of economic inequality.
However, it also said that realising the benefits of artificial intelligence (AI)-driven productivity gains in this way will require concerted political action, as these gains are not always shared evenly between employers and employees, and depend on geography, demographics, the economic cycle and other intrinsic job market factors, such as workers' access to collective bargaining.
This is a paper that identifies an opportunity and not a destiny. The actual diffusion and adoption of technology is always uneven, driven by a variety of factors: wage levels, government policy, levels of sector monopolisation, trade union density and so on, it said.
Needless to say, widespread adoption of these new AI technologies will require a robust industrial strategy that traverses national, federal and municipal levels, and that deploys incentives and regulations for the private sector. Most importantly, workplace technologies are social and political technologies, and therefore worker voice (those who will be working alongside and in collaboration with these tools) will be essential.
Read more about the four-day week: Afas leads the way with four-day work week in the Netherlands: Dutch labour unions want a shorter working week and software company Afas is leading the way, introducing a four-day work week from 1 January 2025. Four-day working week set to stay at Atom bank: Challenger bank Atom has formalised a four-day working week policy after a successful trial. Highgate IT Solutions trials four-day working week: Channel player Highgate IT Solutions is on the brink of a pilot to investigate if changes to the structure of work could benefit employees.
0 Comments ·0 Shares ·55 Views
-
More than 200 Lloyds bank bosses to receive artificial intelligence trainingwww.computerweekly.com
Lloyds Banking Group is training 200 of its senior leaders to ensure the organisation can get the most out of artificial intelligence (AI) technology. The bank is working with training provider Cambridge Spark on the programme, which will embed AI skills in the leadership ranks. Participants will follow an 80-hour programme, known as Leading with AI, delivered by Cambridge Spark alongside experts from Cambridge University.
Ron van Kemenade, chief operating officer at Lloyds Banking Group, said: AI is a game-changer for financial services, and we're investing to enhance our services with cutting-edge technology. The programme with Cambridge Spark will empower our business leaders to further innovate with AI and drive commercial excellence using this transformative technology.
Our approach to AI is based on integrating it deeply throughout every aspect of our business rather than limiting it to a centralised technical team. We're building on our existing expertise to develop the most AI-capable leadership team in banking.
Lloyds Bank has also made investments in AI training beyond senior leaders, with a Data & AI Academy, GenAI Masterclasses and a Data & AI Summer School available to all its employees. It has worked with Cambridge Spark before on a graduate bootcamp focused on practical industry skills for emerging data scientists and data engineers.
For the senior leaders, the Leading with AI programme will focus on areas including identifying transformational opportunities for AI and spearheading its implementation.
Raoul-Gabriel Urma, CEO at Cambridge Spark, said that embedding AI know-how at the top of organisations is vital: Advancing AI capabilities represents both the greatest challenge and opportunity for today's businesses. Enhancing these capabilities in senior leadership creates a powerful multiplier effect that drives innovation throughout the organisation.
The Bank of England and the FCA have been tracking how financial services firms in the UK are using AI and machine learning. Their recent survey, which covered 120 firms, found that three-quarters are already using some form of AI in their operations. This included all the large UK and international banks, insurers and asset managers that responded, and represented a 53% increase on the same survey in 2022.
There will be challenges for senior bank leaders, who must understand the risks that AI poses as well as its benefits, as they will be expected to work within regulatory regimes.
During an international financial conference in October, Sarah Breeden, deputy governor of financial stability at the Bank of England, said that regulation must stay ahead of AI take-up. She said this will help us to understand more deeply not only AI's potential benefits, but also the different approaches firms are taking to managing those risks, which could amount to financial stability risks.
The regulator will then try to spread best practices and decide when regulatory guidelines and guardrails are needed. The power and use of AI is growing fast, and we mustn't be complacent, said Breeden. We know from past experience with technological innovation in other sectors of the economy that it's hard to retrospectively address risks once usage reaches systemic scale.
Read more about GenAI: Many organisations are testing out uses for generative AI, but how are they getting on? We speak to five early adopters to find out the lessons learned so far. Employees are using GenAI primarily for spelling and grammar checking, while business chiefs are using it for data analysis. Deloitte survey shows business and IT leaders are optimistic about GenAI, but academic researchers warn of an AI training time bomb.
0 Comments ·0 Shares ·53 Views
-
Military AI caught in tension between speed and controlwww.computerweekly.com
Military planners and industry figures say artificial intelligence (AI) can unlock back-office efficiency for the UK's armed forces and help commanders make faster, better-informed decisions, but intractable problems baked into the technology could further reduce military accountability.
Speaking on a panel about the ethics of using autonomous technologies in warfare at the Alan Turing Institute-hosted AI UK event in mid-March, industry figures and a retired senior British Army officer argued that proliferating AI throughout UK defence will deter future conflict, free up resources, improve various decision-making processes (including military planning and target selection) and stop the country from irreversibly falling behind its adversaries.
While these speakers did highlight the importance of ensuring meaningful human oversight of military AI, and the need for global regulation to limit the proliferation of uncontrollable AI systems in this context, Elke Schwarz, a professor of political theory at Queen Mary University of London and author of Death machines: The ethics of violent technologies, argued there is a clear tension between autonomy and control that is baked into the technology. She added this intractable problem with AI means there is a real risk that humans are taken further out of the military decision-making loop, in turn reducing accountability and lowering the threshold for resorting to violence.
Major general Rupert Jones, for example, argued that greater use of AI can help UK defence better navigate the muddy context of modern warfare, which is characterised by less well-defined enemies and proxy conflicts. Warfare's got more complicated. Victory and success are harder to define, he said, adding the highest potential use of AI is in how it can help commanders make the best possible decisions in the least time.
To those who are not familiar with defence, it really is a race: you're racing your adversary to make better, quicker decisions than they can. If they make faster decisions than you, even if they're not perfect decisions, they will probably be able to gain the momentum over you.
On top of the technology's potential to enhance decision-making, Jones said the hugely expensive nature of running defence organisations means AI can also be used to boost back-office efficiency, which in turn would unlock more funds for use on front-line capabilities. AI gives you huge efficiency, takes humans out of the loop, frees up money, and one thing we need in UK defence right now is to free up some money so we can modernise the front end, he said.
However, he noted that the potential of the technology to enhance decision-making and unlock back-office efficiencies would rest on the ability of UK defence to improve its underlying data practices so that the vast amounts of information it holds can be effectively exploited by AI. Jones added that UK defence organisations should begin deploying in the back office first to build up their confidence in using the technology, before moving on to more complex use cases like autonomous weapons and other AI-powered front-line systems: Build an AI baseline you can grow from.
While Schwarz agreed that AI will be most useful to the military for back-office tasks, she took the view this is because the technology is simply not good enough for lethal use cases, and that the use of AI in decision-making will muddy the waters further.
With decision-making, for example, you would need to have enormously robust, reliable and always up-to-date data to replace the capabilities and cognitive capacities of a human decision-maker, she said, adding the dynamics inherent in the technology create a clear tension between speed and control.
On one hand, we say, well, we need to have meaningful human control at all points of using these systems, but ultimately the raison d'être for these systems is to take the human further out of the loop, so there's always tension, said Schwarz. The reason the human is taken further out of the loop is because the logic of the system doesn't cohere that well with the cognitive logic of how we, as humans, process data.
Schwarz added that on top of the obvious tension between cognition speed and meaningful human control, there is also the problem of automation bias, whereby humans are more likely to trust computer outputs because there is a misplaced sense that the results are inherently objective.
We are more likely to trust the machine decision that we have less time to overrule, where we cannot create a full mental picture in time to make a human decision. As we are further embedded into digital systems, those are the kinds of tensions that I don't see going away anytime soon. They're intractable problems, she said. That takes us to ethics and the question of, what do we do with ethical decisions when the human is taken out?
While Schwarz urged extreme caution, Henry Gates, associate director at AI defence startup Helsing, said there is a pressing need to move fast with the development of military AI so that the UK does not fall behind nefarious actors and is able to have a greater say over how autonomous military systems are regulated.
If we are just a country that doesn't have any of these weapons, people aren't really going to listen to us, he said, adding that moving at pace with military AI can also help build an alternative deterrence. In the same way we have nuclear weapons as a deterrence to nuclear war, AI potentially provides a route towards conventional deterrence that reduces armed conflict.
Schwarz, however, warned against putting all our eggs in the AI basket to deter war, arguing there needs to be greater investment in human capabilities for dialogue, trust and diplomacy. She also warned that instead of acting as a deterrent, AI's socio-technical nature (whereby the technical components of a given system are informed by social processes, and vice versa) means it can negatively shape humans' perspectives of one another, leading to dehumanisation.
Ultimately, it has always been the case [with] technologies that the more we come to rely on them, the more they shape our perspectives about us, and about others as well, she said, adding this is certainly the case with AI as, unlike other tools of war, like tanks or guns that are used as physical prosthetics, the technology acts as a cognitive prosthetic. What is the logic of all of that? Well, an AI system sees other humans as objects, necessarily edges and traces, so implicit then is an objectification, which is problematic if we want to establish relationships.
On the issue of meaningful human control, Gates added there are three things to consider: the extent to which decision-making is delegated to AI, performance monitoring to ensure models do not drift from their purpose (see the drift-monitoring sketch after this article), and keeping humans in full control of how AI systems are being developed.
However, Keith Dear, managing director of Fujitsu's Centre for Cognitive and Advanced Technologies, argued that the capabilities of AI have come so far in such a short space of time that it will soon be able to outperform humans on how to apply the laws of war to its decisions.
For a target to be justified under the law of armed conflict, it has to be positively identified, has to be necessary, has to be proportionate, it has to be humane, so no uncontrolled effects, and it has to be lawful. All of those things are tests that you could apply to an AI in the same way that we apply them to a soldier, sailor or an airman serving on the front line, he said. When you delegate authority, it has to outperform us on those things, and if it does outperform us in those roles where you can baseline and benchmark that, it becomes unethical not to delegate authority to the machine, which has a lower false negative in making those decisions than us.
Highlighting how the speed of modern stock trading means it is largely left to computers, Dear added AI will create a similar situation in warfare in that, because it will have eclipsed the speed of human cognition, decision-making can and should be left to these autonomous systems. It's an AI watching the AI. You may have humans before the loop, but the idea that, as warfare speeds up and we get to AGI [artificial general intelligence], there'll be someone in the loop is perverse. I think it's a choice to lose, he said.
Commenting on the idea that AI will reduce human suffering in conflict and create a future where wars are fought between armies of drones, Gates said it was unlikely, noting that while it may change the character of war, it does not change the underlying logic, which is how one group can impose its will on another. Jones agreed, noting that whether or not an AI is sat in the middle, the idea is to hurt the people on the other side. You are still trying to influence populations, political decision-makers, militaries, he said.
For Dear, there will be no role for humans on the battlefield. When your machines finish fighting and one side has won, it'll be no different to having a human army that won on the battlefield; the point then is that [either way] you have no choice but to surrender or face a war of extermination, he said.
Schwarz, however, highlighted the reality that many of today's AI systems are simply not very good yet, and warned against making wildly optimistic claims about the revolutionary impacts of the technology in every aspect of life, including warfare. It is not a panacea for absolutely everything, she said.
Read more about military technology: Google drops pledge not to develop AI weapons: Google has dropped an ethical pledge to not develop artificial intelligence systems that can be used in weapon or surveillance systems. Government insists it is acting responsibly on military AI: The government has responded to calls from a Lords committee that it must proceed with caution when it comes to autonomous weapons and military artificial intelligence, arguing that caution is already embedded throughout its approach. UK Defence Committee urges MoD to embrace AI: Defence Committee outlines changes it thinks the Ministry of Defence should make to realise the battlefield advantages of artificial intelligence.
0 Comments ·0 Shares ·56 Views
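Of Gates' three considerations, drift monitoring is the one that reduces most readily to routine engineering. Below is a minimal sketch of one common approach, the population stability index (PSI), which compares a model's live score distribution against its validation baseline. The thresholds are the commonly quoted rules of thumb and are illustrative only; this is not a description of any specific defence system.
```python
# Minimal model-drift check using the population stability index (PSI):
# compare the distribution of scores the model produced at validation time
# with the distribution it produces in live operation.
import numpy as np

def population_stability_index(baseline, live, bins=10):
    # Bin edges come from the baseline distribution.
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    # Clamp live values into the baseline range so every point falls in a bin.
    live = np.clip(live, edges[0], edges[-1])
    base_frac = np.histogram(baseline, edges)[0] / len(baseline)
    live_frac = np.histogram(live, edges)[0] / len(live)
    # Guard against empty bins before taking logs.
    base_frac = np.clip(base_frac, 1e-6, None)
    live_frac = np.clip(live_frac, 1e-6, None)
    return float(np.sum((live_frac - base_frac) * np.log(live_frac / base_frac)))

rng = np.random.default_rng(1)
baseline_scores = rng.beta(2, 5, 10_000)   # scores seen at validation time
live_scores = rng.beta(2.6, 4.2, 10_000)   # live scores after conditions change

psi = population_stability_index(baseline_scores, live_scores)
print(f"PSI = {psi:.3f}")
if psi < 0.10:
    print("no significant drift")
elif psi < 0.25:
    print("moderate drift: investigate before continuing to rely on the model")
else:
    print("major drift: pull the model back for human review and retraining")
```
A check like this only tells an operator that the model's behaviour has changed; deciding what to do about it is exactly the meaningful-human-control question the panellists disagreed over.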
More Stories