Computer Weekly
Computer Weekly is the leading technology magazine and website for IT professionals in the UK, Europe and Asia-Pacific.
Recent Updates
  • WWW.COMPUTERWEEKLY.COM
    Post Office creates CTO role to support extensive and complex plans
    Why Keir Starmer's plan to rewire Whitehall needs an IT rethink
    In my personal experience, there are certain institutional barriers to the productive and successful delivery of major projects in government. Indeed, the mechanisms put in place to reduce the risk of delivery failure and wasted money may in many cases be the very things that significantly increase the risk of that failure.

    At the heart of many of the challenges facing major government IT programmes is a fundamental disconnect between the bottom-up Agile approaches encouraged by the Government Digital Service (GDS) and followed by most IT programmes, and the top-down nature of the project approval, funding and oversight mechanisms. That top-down approach frequently demands an agreed up-front design, a fully defined set of outputs and benefits at the start of the project, and a business case setting out in great detail the budget required for delivery. These are all fundamentally based on Waterfall-type project planning.

    As an ex-Treasury official myself, I fully understand the need to ration spending and allocate it to where it is most useful. However, the way this is currently configured does not align with Agile project delivery. At best, these are slightly spurious formalities that projects must go through before they can start the Agile approach to delivery. At worst, they undermine the delivery approach needed and distract the project team from the iterative, fast-paced and flexible way of working that successful delivery requires. This needs to change if the current government's vision of emulating a startup's "test and learn" mantra is to be realised.

    But this approach will also falter if another tendency of government IT is allowed to prevail. Many departments focus on delivering all, or certainly most, projects almost exclusively in-house, using bespoke code to build the necessary solutions. This is often done because of the complexity, or at least the perceived complexity, of government processes and how much they differ from those in private sector organisations.

    However, this focus on building systems with bespoke code is time-consuming, expensive and hard to manage, and still all too often fails to deliver. It also often ends up with a disconnect between the frequently huge IT team and the business staff who will ultimately own and use the system, with massive amounts of design documentation being passed back and forth between them.

    To deliver Keir Starmer's vision of rewiring Whitehall, government needs an approach that applies low-code software development intelligently and in the right areas. This can revolutionise the way government designs and builds IT, by significantly reducing the amount of custom code that must be written and by transforming the way business people are involved in the process.

    The new government is right in choosing small, discrete projects. A more iterative, less big-bang approach to government transformation should be adopted: start small, pick one or two key processes in any given area to begin with, and adopt an approach such as Agile low-code development that reduces reliance on scarce and expensive technical skills while compelling business and IT teams to work together in an integrated way.

    This lets you reach the stage where outcomes can be assessed much sooner, providing the basis on which to move on to the next mini-project. Ultimately, you tick off a lot of stages and achieve sweeping but sustainable transformation, with the problems of more traditional approaches minimised.

    Alex Case is a former senior civil servant at Downing Street and now a government industry principal at Pegasystems, which develops low-code application software.
    Government review of denied datacentre builds sees Iver project get green light
    The government has granted the developers of a proposed datacentre in Iver, Buckinghamshire, permission to press ahead with the project after the local council blocked the plans on Green Belt protection grounds.

    By Caroline Donnelly, Senior Editor, UK
    Published: 20 Dec 2024 13:27

    A government review of a local council's decision to block a US-based company from building a hyperscale datacentre in Iver, Buckinghamshire, has concluded the project should proceed.

    Buckinghamshire Council refused permission in November 2022 for US investment company Affinius Capital to proceed with its plans to redevelop an industrial estate in Court Lane, Iver, and build a 65,000m2 datacentre on the site instead. The reason the council gave for the refusal was that the project would be an inappropriate use of Green Belt land, which is protected land intended to prevent the onset of urban sprawl.

    Shortly after coming to power in July 2024, the Labour government pledged to review the council's decision to block the project, in support of its strategy to stimulate the UK's economic growth by accelerating the delivery of large-scale infrastructure projects.

    The developer had appealed against the council's decision, and a public local inquiry was held over four days in June 2024, a month before the government's intervention.

    Following a review of the council's decision and the local inquiry, the government has now granted Affinius Capital permission to proceed with the project, with a letter dated 6 December 2024 outlining the reasons why. The letter states that the decision to overturn Buckinghamshire Council's refusal was made by the minister of state for housing and planning, Matthew Pennycook, on behalf of the secretary of state, Angela Rayner.

    "Weighing in favour of the proposal are the need for new datacentres, reduction in HGV movements, heritage benefits, reuse of previously developed land, and investment and job creation, which each carry significant weight," the letter stated. "Weighing against the proposal are harm to Green Belt, which carries substantial weight; harm to [a nearby] listed building, which carries great weight; and landscape harm and visual harm, which carries moderate weight."

    The letter goes on to state that, in Rayner's view, there are "very special circumstances" to justify this development in the Green Belt, adding: "The secretary of state therefore concludes that the appeal should be allowed and planning permission granted."

    The letter also states that the secretary of state's decision can be challenged in the High Court, provided an application to do so is received within six weeks of the date of the letter.

    Computer Weekly contacted Affinius Capital for comment on this story, but no response was received by the time of publication.

    The Affinius Capital project was one of two datacentre developments the government placed under review in July 2024. The other is being overseen by Oxford-based developer Greystoke Land, after its bid to build a £1bn datacentre in Abbots Langley, Hertfordshire, was denied in January 2024. That decision is being appealed. At the time of writing, Computer Weekly understands a decision at government level on whether that build will go ahead remains pending.

    Read more about datacentre developments:
      • Apple is set to hear at the end of this month whether its much-delayed Irish datacentre build can go ahead. Computer Weekly examines the ins and outs of this complex case.
      • Reports citing the rapid rise of West London as a major datacentre hub as the cause of a potential ban on new housing developments in the area have not gone down well with industry watchers.
    A job seeker's guide to using AI, and what it means for employers
    Artificial intelligence (AI) is a powerful tool to help job seekers find roles and prepare their applications, and ever more people are using it. Multiple published surveys have suggested this figure could be as high as 50% of applicants. But while AI is undoubtedly a great support tool, it can create issues if individuals use it to present a misleading impression of themselves and their capabilities. So how can it best be used, and what are the dos and don'ts for job seekers to think about?

    At the same time, the growing use of AI presents new challenges for employers. In some cases, it is dramatically increasing the number of applications employers must work through. Figures from the Institute of Student Employers show a 59% rise in the average number of applications received for graduate jobs (140 per position), with recruiters in higher-paid and growth sectors, such as digital and IT, receiving as many as 205 applications per vacancy; at Harvey Nash, we are seeing as many as 500 in some instances. The Institute says AI is the driver of these increases. Moreover, can employers trust that applications actually represent candidates faithfully and honestly? In this article, I'll highlight some advice points for them to consider too.

    There are multiple ways in which AI tools can help job seekers land that dream role. Some of the best-known tools include ChatGPT, Microsoft Copilot, Gemini and Bard, with many more specialised tools available for job searches and application support.
      • Understanding descriptions. Generative AI tools can instantly summarise complex job descriptions, helping candidates quickly understand core responsibilities and requirements and tailor their applications effectively.
      • Highlighting relevant experience. By extracting key information from job descriptions, candidates can emphasise relevant skills and experience in their CVs and cover letters.
      • AI-driven CV refinement. Job seekers can use generative AI to enhance their CVs. Tools can suggest improvements, optimise formatting and ensure that critical details stand out.
      • Keyword optimisation. AI can identify relevant keywords for specific roles, improving a CV's chance of passing automated screening tools.
      • Mock interview simulators. AI-powered simulators can help candidates better prepare for interviews. By posing common interview questions and providing feedback, they build a candidate's confidence and enhance their overall interview performance.
      • AI-powered job search. Many tools can match candidates with suitable roles based on their skills and experience. This streamlines the job search and helps candidates identify the roles they are best suited for.

    Using AI tools in this way brings a number of benefits to job seekers, most notably:
      • Efficiency. Generative AI accelerates tasks such as summarising job descriptions, refining CVs and preparing for interviews.
      • Productivity boost. AI can act as a work buddy, helping candidates better manage and prepare when applying for multiple vacancies.
      • Improved quality. AI can help candidates communicate their strengths and present themselves more effectively, increasing their chances of being shortlisted or interviewed.
      • Advanced options. Many AI tools are freely available, but paid-for versions offer even greater functionality and a greater ability to learn from previously produced content to reflect an individual's tone of voice or style of language.

    While these are all compelling benefits, the use of AI does present potential issues. AI tools can make everyone's applications, and the way they present information, look the same. There is a danger of losing individuality as applications become more vanilla and standardised. Here are some advice points accordingly.

    Do:
      • Use your own words and language as much as possible to keep it authentic and bring out your own character.
      • If using AI to create your CV, stand back from it and ask yourself whether its structure brings out your unique qualities and experience effectively. Avoid generic phrasing that feels stilted or impersonal; otherwise there is a danger of a "sea of sameness".
      • Answer interview questions and tasks on your own. You may want to use AI to refine them afterwards, but always start with your own answers. It's your own knowledge and ability that you're being assessed on, and you might get caught out later on!
      • Use AI as a support tool, not to do the whole job for you. It can make the process quicker and more efficient, but it shouldn't become a substitute for putting in the appropriate level of effort yourself.

    Don't:
      • Lie or exaggerate to give a false impression; otherwise there is a danger of AI becoming like catfishing for job applications. Checks later in the process will almost certainly expose any untruths.
      • Use AI to send off reams of untargeted applications on the off-chance you might be successful. This will ultimately waste your own time as well as the employer's.
      • Use Americanisms and American spellings (if you're in the UK), which many generative AI tools default to. Adapt what AI produces so that it's suitable for the market you are in.
      • Pass off AI-generated answers or content as your own. You need to build relationships with recruitment agencies and prospective employers, and you will lose their trust if they realise you have been leaning excessively on AI.

    What does this mean for employers and recruiters?

    The use of AI by candidates and job seekers is something employers have become increasingly aware of. There is no problem in principle with a candidate using AI; indeed, it shows initiative, and with many organisations embedding AI into their own processes and systems, it would often be seen as a positive. Nevertheless, it is having some impacts that employers need to manage.

    Firstly, as I have noted, AI is ramping up the number of applications employers receive, almost becoming a barrage in some instances. This creates a workload issue, with teams having to sift through many more applications, cover letters and CVs to produce their shortlists of candidates.

    Secondly, and more seriously, AI is making it harder for employers to know how capable a candidate really is, given that applicants may use AI to smarten their CVs, word their covering letters, answer questions on application forms, and assist with remote or take-home tests and technical exercises such as coding challenges.

    There are several ways employers can manage the situation, in particular:
      • Review your assessment techniques. Look across the questions and tests you set candidates, and consider whether you should introduce more open-ended questions that are harder for AI tools to answer authentically. Use real-world scenarios and situational questions that require human experience to respond to. Also think about using more on-the-spot tests that candidates take in your offices or assessment centre rather than remotely.
      • Upskill your teams. Provide training for your in-house recruitment team and hiring managers to understand how AI is changing the landscape and what to look out for. This training could include interview techniques: how to effectively probe candidates on information they have given or skills and experience they claim to have.
      • Consider the recruitment agency option. Depending on the number of vacancies your business has and the number of applications you receive, a good recruitment agency could be a significant support. Experienced recruiters can take the burden away from already stretched in-house teams. Recruiters should be well versed in the phenomenon of AI and have the tools to screen and assess applications, CVs and other materials. They should also speak or communicate directly with candidates of potential interest (face-to-face, on the phone or by video call, and/or via email) before putting them forward for interview, making sure they are who they say they are and have the skills and capabilities to match.

    It is fair to say that AI presents the biggest challenges for enterprises running large-scale recruitment activities such as graduate schemes or other high-volume intakes, which are more prone to candidates trying to game the system with AI's support. But it presents issues for all employers to be aware of.

    For all these challenges, there are nevertheless several benefits AI can bring employers too. AI tools can help prepare candidate information packs (and agency briefs) more easily and quickly. They can score various types of tests automatically. And AI can support the diversity and inclusion agenda, scanning draft job adverts and role descriptions to identify whether they are optimally worded, including considering the needs of specific groups such as people with disabilities or those who are neurodiverse.

    Used well, AI can significantly help both sides, job seekers and employers alike. One thing is certain: it is here to stay, and can be expected to grow dramatically as tools become more widely available and functionality continues to mature. This just underlines the importance for both individuals and employers of understanding the dynamics at play and observing the emerging etiquette, in order to create benefits for everyone while minimising the threat of downsides.

    Emma Gardiner is regional director UK North at Harvey Nash.
    Interview: Wendy Redshaw, chief digital information officer, NatWest Retail Bank
    Wendy Redshaw, chief digital information officer (CDIO) at NatWest Retail Bank, has had a distinguished career leading technology-led change in some of the world's biggest financial services organisations. Now, she's using that experience to drive even more innovation.

    After four years as CIO for collaborative technology solutions at Deutsche Bank, Redshaw says she was eager to work for a UK finance house. In late 2018, she found the perfect home at NatWest as head of technology and digital distribution for the personal bank.

    "The opportunity was interesting because NatWest was ready for digital transformation but wasn't naturally sitting in a leadership position at that time," she says. "The role allowed me to land and think about what to do. I found an organisation that was fundamentally focused on its customers and perhaps had less digital experience in-house."

    After working with her team to deliver technological improvements across the personal bank, offline and online, Redshaw moved into the CDIO position in February 2020.

    "It wasn't just because I wanted a longer acronym than most technologists," she jokes. "We created the role so we could sew together business and technology because, as with many organisations, technology had historically been something that happened over there, and the business did their thing, and then they would give the technologists something to work on. We wanted better integration."

    Redshaw says the creation of her CDIO role in 2020 was a public statement that NatWest wanted a partnership approach to technology and business: "This is a digital bank in the making, and hopefully, with the results that we've seen, we've achieved our aims."

    The technological transformation in banking services that Redshaw oversees at NatWest today differs greatly from the finance industry she joined as a software engineer in 1987.

    "We didn't call it digital then," she says. "I remember the focus was on, 'How do we use technology to make things quicker, simpler and more secure for our customers?'" She points to work on a security module for the London Stock Exchange and the beginnings of the settlement systems CHAPS and Euroclear.

    "There was a lot of change where technology was being brought in, but it was more for the underpinning services than for the consumer-facing areas," she says, before fast-forwarding to the present day. "Over that time, we've seen that digital is now in the hands of our retail customers."

    Redshaw says the shift in technological focus also helped prompt her switch to the retail side of banking. After a career driving behind-the-scenes IT change at major firms such as Lloyds TSB, Barclays Capital and Royal Bank of Scotland, her current role at NatWest is focused on delivering innovative customer services.

    "That's where the exciting stuff is happening. Yes, of course, we use AI across several areas of the organisation; something like 17% of our models are AI-based now, such as for controlling fraud, financial crime and so on," she says. "However, in terms of affecting human beings, digital services are at our customers' fingertips. If you think about my driver for going into the CDIO role, the customer is where I thought I'd have the most impact."

    As CDIO, Redshaw is directly accountable to the group CIO and the retail banking CEO. Responsible for digital operations leadership, she manages 4,500 people across four locations globally and leads the delivery of retail banking technology for Royal Bank of Scotland, NatWest and Ulster Bank North.

    Redshaw's team is digitalising services to make life easier for the group's customers. Their work is supported by a planned investment of £3.5bn from 2023 to 2025, with more than 70% of spending targeted at data and technology. NatWest has 10.9 million digitally active retail and business banking customers, and 3.5 million use online banking platforms. The hard work continues apace: in 2024, Redshaw led the launch of a retail banking app on Apple's Vision Pro headset.

    One of her proudest achievements is the introduction of generative AI (GenAI) into the bank's conversational assistant, Cora. She says the bank made an early move into chatbots: Cora was introduced in 2017. The technology could answer basic questions, but Redshaw wanted it to do more.

    "When I joined in 2018, I realised it was quite a good channel to do something with," she says. "I had some grand ambitions for her, things like digital avatars having a voice, and all these engaging ways of doing things. I said, 'Look, I see this particular technology being something we could get moving on.'"

    Redshaw saw that, while machine learning technology was progressing at pace, it wasn't quite ready for the giant leap in digital experiences she envisioned. However, the public release of generative AI models in late 2022 helped turn theory into practical reality. Working with experts from IBM's client engineering team to develop the initial proof of concept, NatWest launched its next-generation assistant, Cora+, in June 2024.

    Cora+ is a multichannel platform that securely accesses data from multiple sources, including products, services and banking information. The virtual assistant technology is powered by IBM's Watsonx Assistant.

    "It was the perfect example of an interest in technology, an interest in people, and an interest in delivering business value," she says. "I feel very excited about how we've taken something that just answered questions and moved into generative AI at scale for millions of customers. And it's only the first step. I've got big ambitions for what I want to do with that technology."

    Cora+ uses ChatGPT 3.5 alongside an unnamed GPT large language model (LLM). The second model is trained to judge the output of the first. While the GPT models play an important role in NatWest's digital strategy, the organisation is keen to keep an open approach to AI and innovation.

    Redshaw says the group wants to avoid being locked into a specific LLM. She wants the capability to swap between large and small language models (SLMs). Organisations can use SLMs to derive outputs from constrained amounts of data with less computing power, which matters for a big business like NatWest that wants to meet sustainability targets.

    "As a result, it was a case of, 'OK IBM, we like working with you, but we want to be able to switch the language models in and out depending on the business requirement,'" she says. "And they were like, 'Absolutely.' So, that's great. We have the same mindset around using the best of everything to get value for our customers safely."

    In addition to the work on Cora+, Redshaw and her colleagues are analysing how AI can boost customer experiences in other areas. NatWest has worked with IBM to develop a digital legal assistant powered by GenAI. The tool streamlines contract management and enhances accessibility, especially for neurodivergent users, and supports colleagues with compliance checks, producing 20% efficiency gains.

    More generally, Redshaw is proud that her team completes thousands of releases annually. The department's focus on micro-projects is as important as delivering large-scale initiatives, and helps NatWest hit tight transformation deadlines. Across all projects, IBM acts as a key technology partner, with Redshaw suggesting the long-term working relationship with the tech giant feels like interacting with members of the internal team.

    John Duigenan, distinguished engineer and general manager of the global financial services industry at IBM, says the shift to constant innovation, experimentation and learning is typical of the work his company sees in its most pioneering clients. "We got to work with a trusted partner, and we got to learn together," he says of IBM's relationship with NatWest. "It's great we co-create approaches to using technology and collaborate on innovation. Our teams blend incredibly well, and we deliver together in new ways. We have an approach that says, 'We know why this work will matter for all of us because we can measure the impact.'"

    Redshaw reflects on the achievements of the past few years. While the benefits of the digital transformation she has enacted at NatWest are clear, there is always an opportunity to do more. She says the rapid pace of transformation makes it difficult to predict with any certainty what will happen next: "What will the success metrics be in three years? We won't be judged on the same metrics, because digital banking is changing quickly." However, she expects to see developments in some key areas.

    "In the AI space, I expect to see more voice," she says. "At the moment, Cora listens to our telephony and sends a text, a deep link, or something else that's required. In the future, I think it'll probably answer the phone and deal with questions."

    Redshaw also expects progress in text-based answering. Her bank's research suggests people in financial difficulty often prefer a guilt-free conversation with a bot to one with a human. "I would expect something in that financial health and support space that uses natural language," she says.

    There is even potential for advances in unexpected areas. Redshaw says she is keen to add Cora to ATMs, something she was previously told was impossible. "I've now spoken to some innovation engineers, and they've said they think it might be possible," she says. "So, I suspect we will see something like a digital point of presence."

    Finally, Redshaw expects the bank to continue honing its approach to mobile. "People now have their bank in their pocket," she says. "I imagine we will give more richness and engagement through these devices. Even though our mobile strategy is great, I think it will lean towards more engagement and personalisation during the next 24 months."

    Read more about digital transformation in financial services:
      • Inside Prudential's AI strategy: Prudential is leveraging AI across its operations, from enhancing customer interactions and streamlining internal processes to empowering its workforce through upskilling initiatives.
      • Setting the high bar in digital transformation: Roel Louwhoff, Standard Chartered's chief transformation, technology and operations officer, outlines what it takes for the company to become a client-focused, data-driven bank.
      • Inside Macquarie Bank's data transformation journey: Australia's Macquarie Bank has moved all its data and analytics to the cloud and is applying machine learning to detect fraud and improve customer experience.
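    The generate-then-judge arrangement described above, where a second model vets the output of the first and either model can be swapped out behind a common interface, can be sketched in miniature. Everything below is an illustrative stand-in, not NatWest's or IBM's actual implementation:

```python
# Minimal sketch of a "generate, then judge" assistant with swappable models.
# All classes are hypothetical stand-ins, not a real banking implementation.
from typing import Protocol


class LanguageModel(Protocol):
    """Shared interface, so large and small models are interchangeable."""
    def complete(self, prompt: str) -> str: ...


class StubGeneratorModel:
    """Stand-in for the primary LLM that drafts an answer."""
    def complete(self, prompt: str) -> str:
        return f"[draft answer to: {prompt}]"


class StubJudgeModel:
    """Stand-in for the second model that vets the first model's output."""
    def complete(self, prompt: str) -> str:
        # A real judge model would return a calibrated quality score;
        # this stub just checks that the text looks like a draft answer.
        return "pass" if "draft answer" in prompt else "fail"


class Assistant:
    """Routes a question through a generator, then gates it with a judge."""

    def __init__(self, generator: LanguageModel, judge: LanguageModel) -> None:
        self.generator = generator
        self.judge = judge

    def ask(self, question: str) -> str:
        draft = self.generator.complete(question)
        verdict = self.judge.complete(f"Rate this response: {draft}")
        if verdict != "pass":
            return "Sorry, I can't answer that reliably."
        return draft


assistant = Assistant(generator=StubGeneratorModel(), judge=StubJudgeModel())
print(assistant.ask("How do I freeze my card?"))
```

    The point of the shared `Protocol` interface is precisely the swap described in the interview: replacing the generator with a small language model (for cost or sustainability reasons) is a one-line change that leaves the assistant and the judge untouched.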
    The Data Bill: It's time to cyber up
    In the latest deliberations on the Data Use and Access Bill in the House of Lords, I set out two amendments to offer a well overdue update to the Computer Misuse Act (CMA) of 1990. In preparing for the committee stage of the bill, I remain incredibly grateful to everyone involved with the CyberUp campaign; their analysis and commentary are always perfectly on point.

    I hardly need to rehearse the backdrop to the CMA; many people will be well aware of the act and its shortcomings. Curiously, in the intervening thirty-four and a half years, despite seismic changes in our society and technologies (crucially, including the rise of cyber security threats), the act remains unamended. Having said that, it bears repeating that the act was originally drafted to protect telephone exchanges in 1990, when only 0.5% of the population had access to the internet.

    The CMA was the UK's first computer crime law and came about following an attack on Prestel in the mid-1980s. Anyone under the age of 40 is probably wondering what Prestel was (a forerunner of internet-based online services, launched by the Post Office in 1979), which only serves to make the point.

    My amendments to the new Data Bill seek to achieve a very clear and materially significant change: to enable cyber security professionals to do what we have asked of them without the legislation tying at least one hand behind their back.

    Thirty-four years on, the CMA still governs how we tackle cyber criminals. As currently written, the act inadvertently criminalises legitimate cyber security research, including a large proportion of the vulnerability research and threat intelligence activities that are critical in protecting the UK from increasingly sophisticated cyber attacks. Fundamentally, it restricts cyber security researchers from conducting essential work to protect the UK, including critical national infrastructure.
While improving data access is a positive move, it is equally crucial to modernise cyber security laws to protect not just the data but also the systems that underpin it.

The wording of my amendments in full is:

Data use: definition of unauthorised access to computer programs or data
In section 17 of the Computer Misuse Act 1990, at the end of subsection (5) insert "(c) they do not reasonably believe that the person entitled to control access of the kind in question to the program or data would have consented to that access if they had known about the access and the circumstances of it, including the reasons for seeking it, and (d) they are not empowered by an enactment, by a rule of law, or by order of a court or tribunal to access of the kind in question to the program or data."

Data use: defences to charges under the Computer Misuse Act 1990
(1) The Computer Misuse Act 1990 is amended as follows.
(2) In section 1, after subsection (3) insert "(4) It is a defence to a charge under subsection (1) to prove that (a) the person's actions were necessary for the detection or prevention of crime, or (b) the person's actions were justified as being in the public interest."
(3) In section 3, after subsection (6) insert "(7) It is a defence to a charge under subsection (1) in relation to an act carried out for the intention in subsection (2)(b) or (c) to prove that (a) the person's actions were necessary for the detection or prevention of crime, or (b) the person's actions were justified as being in the public interest." 
As I said in the debate, don't take my word for it: the National Cyber Security Centre acknowledged the widening gap between the risks facing the UK and its ability to mitigate them in its 2024 annual review, clearly stating that updating this out-of-date legislation is a crucial step in closing this gap.

Introducing a statutory defence would provide legal clarity and protection for ethical cyber security professionals undertaking legitimate vulnerability research and threat intelligence activities. Such a defence would align the UK with best practices internationally, ensuring that we keep pace with nations like the US and EU, which are moving to safeguard ethical cyber security work.

To put some numbers to this, there have been nine million instances of cyber crime against UK businesses and charities since May 2021, according to the Department for Science, Innovation and Technology's 2024 cyber breaches survey, published in April 2024. Half of businesses and 32% of charities suffered a cyber breach or attack last year, and there is an estimated £2.4bn in increased revenue potential for the sector should the act be updated.

Analysis based on CyberUp's recent industry report suggests that 60% of respondents said the CMA is a barrier to their work in threat intelligence and vulnerability research, and 80% believed the UK was at a competitive disadvantage due to the CMA.

Concluding my remarks, I asked whether the minister would be able to provide an update on the work to reform the Computer Misuse Act. 
I also asked her whether she believed that my amendments as drafted would provide the legal protection that we seek and, if so, why the government would not bring them into force via the means of the Data Bill.

The minister's answers to both questions were largely the same - we must wait, the amendments are premature, there was not consensus among those who responded to last year's consultation on the matter - so the path forward must continue with no timeline or sense of when this most pressing of issues will be resolved.

If the government needs some public support to increase its pace on this project, how about the fact that two-thirds of UK adults are inclined to support a change in the law to allow cyber security professionals to carry out research to prevent cyber attacks?

There is also support for such a statutory change from the excellent report of the then chief scientific adviser, Patrick Vallance, earlier this year, which concluded that amending the CMA to include a statutory public interest defence would provide stronger legal protections for cyber security researchers and professionals.

Other nations have already led in this area, not least France and the Netherlands. Belgium, Germany and Malta are currently amending their legal frameworks to this end. As I stated in the debate, it's time to pass these amendments; it's time to afford our cyber security professionals the safety they need to do the self-same thing for us, all of us. 
As has been the case for far too long - it's time to CyberUp.

Timeline: Computer Misuse Act reform
January 2020: A group of campaigners says the Computer Misuse Act 1990 risks criminalising cyber security professionals and needs reforming.
June 2020: The CyberUp coalition writes to Boris Johnson to urge him to reform the UK's 30-year-old cyber crime laws.
November 2020: CyberUp, a group of campaigners who want to reform the Computer Misuse Act, finds 80% of security professionals are concerned that they may be prosecuted just for doing their jobs.
May 2021: Home secretary Priti Patel announces plans to explore reforming the Computer Misuse Act as calls mount for the 31-year-old law to be updated to reflect the changed online world.
June 2022: A cross-party group in the House of Lords proposes an amendment to the Product Security and Telecommunications Infrastructure Bill that would address concerns about security researchers or ethical hackers being prosecuted in the course of their work.
August 2022: A study produced by the CyberUp Campaign reveals broad alignment among security professionals on questions around the Computer Misuse Act, which it hopes will give confidence to policymakers as they explore its reform.
September 2022: The CyberUp coalition, a campaign to reform the Computer Misuse Act, calls on Liz Truss to push ahead with needed changes to protect cyber professionals from potential prosecution.
January 2023: Cyber accreditation association Crest International lends its support to the CyberUp Campaign for reform of the Computer Misuse Act 1990.
February 2023: Westminster opens a new consultation on proposed reforms to the Computer Misuse Act 1990, but campaigners who want the law changed to protect cyber professionals are left disappointed.
March 2023: The deadline for submissions to the government's consultation on reform of the Computer Misuse Act is fast approaching, and cyber professionals need to make their voices heard, say Bugcrowd's ethical hackers.
November 2023: A group of activists who want to reform the UK's computer misuse laws to protect bona fide cyber professionals from prosecution are left frustrated by a lack of legislative progress.
July 2024: In the Cyber Security and Resilience Bill introduced in the King's Speech, the UK's new government pledges to give regulators more teeth to ensure compliance with security best practice and to mandate incident reporting.
July 2024: The CyberUp Campaign for reform of the 1990 Computer Misuse Act launches an industry survey inviting cyber experts to share their views on how the outdated law hinders legitimate work.
December 2024: An amendment to the proposed Data (Use and Access) Bill that would right a 35-year-old wrong and protect security professionals from criminalisation is debated at Westminster.
  • WWW.COMPUTERWEEKLY.COM
    Top 10 data and ethics stories of 2024
    In 2024, Computer Weekly's data and ethics coverage continued to focus on the various ethical issues associated with the development and deployment of data-driven systems, particularly artificial intelligence (AI). This included reports on the copyright issues associated with generative AI (GenAI) tools, the environmental impacts of AI, the invasive tracking tools in place across the internet, and the ways in which autonomous weapons undermine human moral agency. Other stories focused on the wider social implications of data-driven technologies, including the ways they are used to inflict violence on migrants, and how our use of technology prefigures certain political or social outcomes.

1. AI likely to worsen economic inequality, says IMF
In an analysis published on 14 January 2024, the IMF examined the potential impact of AI on the global labour market, noting that while it has the potential to jumpstart productivity, boost global growth and raise incomes around the world, it could just as easily replace jobs and deepen inequality, and will likely worsen overall inequality if policymakers do not proactively work to prevent the technology from stoking social tensions. The IMF said that, unlike labour income inequality, which can decrease in certain scenarios where AI's displacing effect lowers everyone's incomes, capital income and wealth inequality always increase with greater AI adoption, both nationally and globally.

"The main reason for the increase in capital income and wealth inequality is that AI leads to labour displacement and an increase in the demand for AI capital, increasing capital returns and asset holdings' value," it said. "Since in the model, as in the data, high-income workers hold a large share of assets, they benefit more from the rise in capital returns. As a result, in all scenarios, independent of the impact on labour income, the total income of top earners increases because of capital income gains."

2. GenAI tools could not exist if firms are made to pay copyright
In January, GenAI company Anthropic claimed to a US court that using copyrighted content in large language model (LLM) training data counts as fair use, and that today's general-purpose AI tools simply could not exist if AI companies had to pay licences for the material. Anthropic made the claim after a host of music publishers - including Concord, Universal Music Group and ABKCO - initiated legal action against the Amazon- and Google-backed firm in October 2023, demanding potentially millions in damages for the allegedly systematic and widespread infringement of their copyrighted song lyrics.

However, in a submission to the US Copyright Office on 30 October (which was completely separate from the case), Anthropic said that the training of its AI model Claude qualifies as a quintessentially lawful use of materials, arguing that, to the extent copyrighted works are used in training data, it is for analysis (of statistical relationships between words and concepts) that is unrelated to any expressive purpose of the work. On the potential of a licensing regime for LLMs' ingestion of copyrighted content, Anthropic argued that always requiring licences would be inappropriate, as it would lock up access to the vast majority of works and benefit only the most highly resourced entities that are able to pay their way into compliance.

In a 40-page document submitted to the court on 16 January 2024 (responding specifically to a preliminary injunction request filed by the music publishers), Anthropic took the same argument further, claiming it would not be possible to amass sufficient content to train an LLM like Claude in arm's-length licensing transactions, at any price. It added that Anthropic is not alone in using data broadly assembled from the publicly available internet, and that in practice there is no other way to amass a training corpus with the scale and diversity necessary to train a complex LLM with a broad understanding of human language and the world in general. Anthropic further claimed that the scale of the datasets required to train LLMs is simply too large for an effective licensing regime to operate: "One could not enter licensing transactions with enough rights owners to cover the billions of texts necessary to yield the trillions of tokens that general-purpose LLMs require for proper training. If licences were required to train LLMs on copyrighted content, today's general-purpose AI tools simply could not exist."

3. Data sharing for immigration raids ferments hostility to migrants
Computer Weekly spoke to members of the Migrants' Rights Network (MRN) and Anti-Raids Network (ARN) about how data sharing between public and private bodies for the purposes of carrying out immigration raids helps to prop up the UK's hostile environment by instilling an atmosphere of fear and deterring migrants from accessing public services. Published in the wake of the new Labour government announcing a major surge in immigration enforcement and returns activity, including increased detentions and deportations, a report by the MRN details how UK Immigration Enforcement uses data from the public, police, government departments, local authorities and others to facilitate raids.

Julia Tinsley-Kent, head of policy and communications at the MRN and one of the report's authors, said the data sharing in place - coupled with government rhetoric about strong enforcement - essentially leads to people self-policing because they're so scared of all the ways that you can get tripped up within the hostile environment. She added this is particularly insidious in the context of data sharing from institutions that are supposedly there to help people, such as education or healthcare bodies. As part of the hostile environment policies, the MRN, the ARN and others have long argued that the function of raids goes much deeper than mere social exclusion, and also works to disrupt the lives of migrants, their families, businesses and communities, as well as to impose a form of terror that produces heightened fear, insecurity and isolation.

4. Autonomous weapons reduce moral agency and devalue human life
At the very end of April, military technology experts gathered in Vienna for a conference on the development and use of autonomous weapons systems (AWS), where they warned about the detrimental psychological effects of AI-powered weapons. Specific concerns raised by experts throughout the conference included the potential for dehumanisation when people on the receiving end of lethal force are reduced to data points and numbers on a screen; the risk of discrimination during target selection due to biases in the programming or criteria used; as well as the emotional and psychological detachment of operators from the human consequences of their actions. Speakers also touched on whether there can ever be meaningful human control over AWS, due to the combination of automation bias and how such weapons increase the velocity of warfare beyond human cognition.

5. 
AI Seoul Summit review
The second global AI summit in Seoul, South Korea, saw dozens of governments and companies double down on their commitments to safely and inclusively develop the technology, but questions remained about who exactly is being included and which risks are given priority. The attendees and experts Computer Weekly spoke with said that while the summit ended with some concrete outcomes that can be taken forward before the AI Action Summit due to take place in France in early 2025, there are still a number of areas where further movement is urgently needed.

In particular, they stressed the need for mandatory AI safety commitments from companies; socio-technical evaluations of systems that take into account how they interact with people and institutions in real-world situations; and wider participation from the public, workers and others affected by AI-powered systems. However, they also said it is early days yet and highlighted the importance of the AI Safety Summit events in creating open dialogue between countries and setting the foundation for catalysing future action.

Over the course of the two-day AI Seoul Summit, a number of agreements and pledges were signed by the governments and companies in attendance. For governments, this includes the European Union (EU) and a group of 10 countries signing the Seoul Declaration, which builds on the Bletchley Declaration signed six months ago by 28 governments and the EU at the UK's inaugural AI Safety Summit. It also includes the Seoul Statement of Intent Toward International Cooperation on AI Safety Science, which will see publicly backed research institutes come together to ensure complementarity and interoperability between their technical work and general approaches to AI safety.

The Seoul Declaration in particular affirmed the importance of active multi-stakeholder collaboration in this area and committed the governments involved to actively include a wide range of stakeholders in AI-related discussions. A larger group of more than two dozen governments also committed to developing shared risk thresholds for frontier AI models to limit their harmful impacts in the Seoul Ministerial Statement, which highlighted the need for effective safeguards and interoperable AI safety testing regimes between countries.

The agreements and pledges made by companies include 16 global AI firms signing the Frontier AI Safety Commitments, which is a specific voluntary set of measures for how they will safely develop the technology, and 14 firms signing the Seoul AI Business Pledge, which is a similar set of commitments made by a mixture of South Korean and international tech firms to approach AI development responsibly. One of the key voluntary commitments made by the AI companies was not to develop or deploy AI systems if the risks cannot be sufficiently mitigated. However, in the wake of the summit, a group of current and former workers from OpenAI, Anthropic and DeepMind - the first two of which signed the safety commitments in Seoul - said these firms cannot be trusted to voluntarily share information about their systems' capabilities and risks with governments or civil society.

6. Invasive tracking endemic on sensitive support websites
Dozens of university, charity and policing websites designed to help people get support for serious issues such as sexual abuse, addiction or mental health are inadvertently collecting and sharing site visitors' sensitive data with advertisers. A variety of tracking tools embedded on these sites - including Meta Pixel and Google Analytics - mean that when a person visits them seeking help, their sensitive data is collected and shared with companies like Google and Meta, which may become aware that a person is looking to use support services before those services can even offer help.

According to privacy experts attempting to raise awareness of the issue, the use of such tracking tools means people's information is being shared inadvertently with these advertisers as soon as they enter the sites, in many cases because analytics tags begin collecting personal data before users have interacted with the cookie banner. Depending on the configuration of the analytics in place, the data collected could include information about the site visitor's age, location, browser, device, operating system and behaviours online. While even more data is shared with advertisers if users consent to cookies, experts told Computer Weekly the sites do not provide an adequate explanation of how their information will be stored and used by programmatic advertisers. They further warned the issue is endemic due to a widespread lack of awareness about how tracking technologies like cookies work, as well as the potential harms associated with allowing advertisers inadvertent access to such sensitive information.

7. AI interview: Thomas Dekeyser, researcher and film director
Computer Weekly spoke to author and documentary director Thomas Dekeyser about Clodo, a clandestine group of French IT workers who spent the early 1980s sabotaging technological infrastructure, which was used as the jumping-off point for a wider conversation about the politics of techno-refusal. Dekeyser says a major motivation for writing his upcoming book on the subject is that people refusing technology - whether that be the Luddites, Clodo or any other radical formation - are all too often reduced to the figure of the primitivist, the romantic, or the person who wants to go back in time, and it is seen as a kind of anti-modernist position to take.

Noting that "technophobe" and "Luddite" have long been used as pejorative insults for those who oppose the use and control of technology by narrow capitalist interests, Dekeyser outlined the diverse range of historical subjects and their heterogeneous motivations for refusal: "I want to push against these terms and what they imply." For Dekeyser, the history of technology is necessarily the history of its refusal. From the Ancient Greek inventor Archimedes - who Dekeyser says can be described as the first machine breaker due to his tendency to destroy his own inventions - to the early mercantilist states of Europe backing their guild members' acts of sabotage against new labour devices, the socio-technical nature of technology means it has always been a terrain of political struggle.

8. 
Amazon Mechanical Turk workers suspended without explanation
Hundreds of workers on Amazon's Mechanical Turk (MTurk) platform were left unable to work after mass account suspensions caused by a suspected glitch in the e-commerce giant's payments system. Beginning on 16 May 2024, a number of US-based Mechanical Turk workers began receiving account suspension notices from Amazon, locking them out of their accounts and preventing them from completing more work on the crowdsourcing platform.

Owned and operated by Amazon, Mechanical Turk allows businesses, or requesters, to outsource various processes to a distributed workforce, who then complete tasks virtually from wherever they are based in the world, including data annotation, surveys, content moderation and AI training. According to those Computer Weekly spoke with, the suspensions were purportedly tied to issues with the workers' Amazon Payments accounts, an online payments processing service that allows them to both receive wages and make purchases from Amazon. The issue affected hundreds of workers. MTurk workers from advocacy organisation Turkopticon outlined how such situations are an ongoing issue that workers have to deal with, and detailed Amazon's poor track record on the issue.

9. Interview: Petra Molnar, author of The Walls Have Eyes
Refugee lawyer and author Petra Molnar spoke to Computer Weekly about the extreme violence people on the move face at borders across the world, and how increasingly hostile anti-immigrant politics is being enabled and reinforced by a lucrative panopticon of surveillance technologies. She noted how, because of the vast array of surveillance technologies now deployed against people on the move, entire border-crossing regions have been transformed into literal graveyards, while people are resorting to burning off their fingertips to avoid invasive biometric surveillance; hiding in dangerous terrain to evade pushbacks or being placed in refugee camps with dire living conditions; and living homeless because algorithms shielded from public scrutiny are refusing them immigration status in the countries they've sought safety in.

Molnar described how lethal border situations are enabled by a mixture of increasingly hostile anti-immigrant politics and sophisticated surveillance technologies, which combine to create a deadly feedback loop for those simply seeking a better life. She also discussed the inherently racist and discriminatory nature of borders, and how the technologies deployed in border spaces are extremely difficult, if not impossible, to divorce from the underlying logic of exclusion that defines them.

10. AI's environmental cost could outweigh sustainability benefits
The potential of AI to help companies measure and optimise their sustainability efforts could be outweighed by the huge environmental impacts of the technology itself. On the positive side, speakers at the AI Summit London outlined, for example, how the data analysis capabilities of AI can assist companies with decarbonisation and other environmental initiatives by capturing, connecting and mapping currently disparate data sets; automatically pinpointing harmful emissions at specific sites in supply chains; and predicting and managing the demand and supply of energy in specific areas. They also said it could help companies better manage their Scope 3 emissions (indirect greenhouse gas emissions that occur outside a company's operations but are still a result of its activities) by linking up data sources and making them more legible.

However, despite the potential sustainability benefits of AI, speakers were clear that the technology itself is having huge environmental impacts around the world, and that AI itself will come to be a major part of many organisations' Scope 3 emissions. One speaker noted that if the rate of AI usage continues on its current trajectory without any form of intervention, then half of the world's total energy supply will be used on AI by 2040; another pointed out that, at a time when billions of people are struggling with access to water, AI-providing companies are using huge amounts of water to cool their datacentres. They added that AI in this context could help build circularity into operations, and that it was also key for people in the tech sector to internalise thinking about the socio-economic and environmental impacts of AI, so that it is considered from a much earlier stage in a system's lifecycle.

Read more about data and ethics
UN chief blasts AI companies for reckless pursuit of profit: The United Nations general secretary has blasted technology companies and governments for pursuing their own narrow interests in artificial intelligence without any consideration of the common good, as part of a wider call to reform global governance.
Barings Law plans to sue Microsoft and Google over AI training data: Microsoft and Google are using people's personal data without proper consent to train artificial intelligence models, alleges Barings Law, as it prepares to launch a legal challenge against the tech giants.
UK Bolt drivers win legal claim to be classed as workers: Employment Tribunal ruling says Bolt must classify its drivers as workers rather than self-employed, putting drivers in line to receive thousands of pounds in compensation from the ride-hailing and delivery app.
  • WWW.COMPUTERWEEKLY.COM
    Latest attempt to override UK's outdated hacking law stalls
    Two amendments to the Data (Access and Use) Bill that would have established a statutory legal defence for security professionals and ethical hackers to protect them from prosecution under the 1990 Computer Misuse Act (CMA) have failed to make it beyond a House of Lords committee hearing after being withdrawn.The 34-year-old CMA broadly defines the offence of unauthorised access to a computer that is frequently relied upon in the UK when prosecuting cyber criminals, but given it became law when Margaret Thatcher was prime minister, it has not been updated to reflect the emergence, and practices, of the legitimate cyber security profession.Campaigners say this is putting the UK at a competitive disadvantage because security pros fear they may be prosecuted simply for doing their jobs for example, by accessing a system during the course of an incident investigation while their employers lose out to companies located in more permissive jurisdictions.Introduced by Lord Chris Holmes and Lord Tim Clement-Jones, the changes would have introduced two amendments into the Data Bill to amend the CMA such that security professionals could prove their actions were necessary for the detection or prevention of crime or justified as being in the public interest.Speaking in support of the amendment on 18 December 2024, Holmes spoke about how the CMA was introduced to defend telephony exchanges in an era when 0.5% of the population was online, and if that was the acts sole purpose, that alone would indicate it needs updating given the profound advances in technology made in the past three-and-a-half decades.The Computer Misuse Act 1990 is not only out of date but inadvertently criminalising the cyber security professionals we charge with the job of keeping us all safe. They oftentimes work, understandably, under the radar, behind not just closed but locked doors, doing such important work. 
Yet, for want of these amendments, they are doing that work, all too often, with at least one hand tied behind their back, said Holmes. The Computer Misuse Act 1990 is not only out of date but inadvertently criminalising the cyber security professionals we charge with the job of keeping us all safe Lord Chris HolmesLet us take just two examples: vulnerability research and threat intelligence assessment and analysis. Both could find that cyber security professional falling foul of the provisions of the CMA 1990. Do not take my word for it: look to the 2024 annual report of the National Cyber Security Centre, which rightly and understandably highlights the increasing gap between the threats we face and its ability, and the ability of the cyber security professionals community, to meet those threats.These amendments, in essence, perform one simple but critical task: to afford a legal defence for legitimate cyber security activities, he said. That is all, but it would have such a profound impact for those whom we have asked to keep us safe and for the safety they can thus deliver to every citizen in our society.Its not time, its well over time that these amendments become part of our law. If not now, then when? If not these amendments, what amendment? And if not these amendments, what will the government say to all those people who will continue to be put in harms way for want of these protective provisions? added Holmes.During the hearing in Westminster, other parliamentarians, including the amendments co-sponsor Lord Clement-Jones and Lord James Arbuthnot, better known for his campaigning work in the Post Office Horizon scandal, spoke in favour of reform, but to no avail.Lord Timothy Kirkhope said: This just demonstrates, yet again, that unless we pull ourselves together, with better smart legislation that moves faster, we will never ever catch up with developments in technology and AI [artificial intelligence]. 
This has been demonstrated dramatically by these amendments. I express concerns that the government move at a pace that government always moves at, but in this particular field it is not going to work.Responding to the meeting, under-secretary of state at the Department for Science, Innovation and Technology (DSIT) Baroness Margaret Jones said the government agreed the UK needed a revised legislative framework to enable the authorities to tackle the harms posed by cyber criminals, and that it was committed to ensuring the CMA remains up to date and is effective in this regard.However, said Jones, reform is a complex and ongoing issue that is being considered as part of a Home Office review of the CMA itself.We are considering improved defences by engaging extensively with the cyber security industry, law enforcement agencies, prosecutors and system owners. However, engagement to date has not produced a consensus on the issue, even within the industry, and that is holding us back at this moment but we are absolutely determined to move forward with this and to reach a consensus on the way forward, she said.The specific amendments are premature, because we need a stronger consensus on the way forward, notwithstanding all the good reasons given for why it is important that we have updated legislation. With these concerns and reasons in mind, I hope that the noble Lord [Holmes] will feel able to withdraw his amendment, said Jones.Katharina Sommer, group head of government affairs at cyber firm NCC Group, said she was thrilled to see such passionate calls for reform, and that the session had rightly highlighted the outdated nature of the CMA and how it holds back cyber security professionals.We need a statutory defence, like that proposed by Lord Holmes welcome amendment, to allow this vital work to proceed unimpeded, at a time where the cyber threat is rising unabatedly. 
"Reforming the CMA would unlock huge opportunities, strengthen our defences, and help the UK compete on the world stage," she said.

"It is heartening to see the minister recognise the need to provide legal protections for legitimate cyber security activities, and to hear about her determination to reach consensus on the way forward, particularly as this follows her colleague the security minister's recent commitment to reviewing the CMA," said Sommer.

"We sincerely hope that all those involved in keeping the UK safe in cyberspace are prepared to work together and find compromise, rather than risk deadlock. We look forward to working with the government and all partners to ensure the UK's cyber laws reflect 21st century threats."

Timeline: Computer Misuse Act reform

January 2020: A group of campaigners says the Computer Misuse Act 1990 risks criminalising cyber security professionals and needs reforming.

June 2020: The CyberUp coalition writes to Boris Johnson to urge him to reform the UK's 30-year-old cyber crime laws.

November 2020: CyberUp, a group of campaigners who want to reform the Computer Misuse Act, finds 80% of security professionals are concerned that they may be prosecuted just for doing their jobs.

May 2021: Home secretary Priti Patel announces plans to explore reforming the Computer Misuse Act as calls mount for the 31-year-old law to be updated to reflect the changed online world.

June 2022: A cross-party group in the House of Lords proposes an amendment to the Product Security and Telecommunications Infrastructure Bill that would address concerns about security researchers or ethical hackers being prosecuted in the course of their work.

August 2022: A study produced by the CyberUp Campaign reveals broad alignment among security professionals on questions around the Computer Misuse Act, which it hopes will give confidence to policymakers as they explore its reform.

September 2022: The CyberUp coalition, a campaign to reform the Computer Misuse Act, calls on Liz Truss to push ahead with needed changes to protect cyber professionals from potential prosecution.

January 2023: Cyber accreditation association Crest International lends its support to the CyberUp Campaign for reform of the Computer Misuse Act 1990.

February 2023: Westminster opens a new consultation on proposed reforms to the Computer Misuse Act 1990, but campaigners who want the law changed to protect cyber professionals are left disappointed.

March 2023: The deadline for submissions to the government's consultation on reform of the Computer Misuse Act is fast approaching, and cyber professionals need to make their voices heard, say Bugcrowd's ethical hackers.

November 2023: A group of activists who want to reform the UK's computer misuse laws to protect bona fide cyber professionals from prosecution are left frustrated by a lack of legislative progress.

July 2024: In the Cyber Security and Resilience Bill introduced in the King's Speech, the UK's new government pledges to give regulators more teeth to ensure compliance with security best practice and to mandate incident reporting.

July 2024: The CyberUp Campaign for reform of the 1990 Computer Misuse Act launches an industry survey inviting cyber experts to share their views on how the outdated law hinders legitimate work.

December 2024: An amendment to the proposed Data (Access and Use) Bill that would right a 35-year-old wrong and protect security professionals from criminalisation is debated at Westminster.

Andrew Jones, strategy director at The Cyber Scheme, a supporter of the CyberUp Campaign for legal reform, said: "Whilst we are slightly disappointed by the government's decision not to seize this opportunity to bring the Computer Misuse Act into the 21st century, we are encouraged by their recent comments suggesting a review of the act is being considered.
"Until then, the CMA will remain an outdated piece of legislation, preventing our cyber security professionals from defending organisations effectively and leaving us lagging behind peer nations, as the US and EU move to safeguard ethical cyber security work as a cornerstone of national resilience.

"With the CEO of the National Cyber Security Centre recently acknowledging that hostile activity in UK cyberspace has increased in frequency, sophistication and intensity, it is vital that the UK takes measures to upgrade its cyber resilience."

He added: "The statutory defence we propose, drafted in consultation with industry and legal experts, would protect legitimate cyber security professionals, strengthen UK cyber defences, and reinforce the UK's place as a cyber security leader. We are fully prepared to work with the government to help implement this necessary change as soon as it is ready to act."
  • WWW.COMPUTERWEEKLY.COM
    Challenging the cloud giants: Is a new era of competition on the horizon?
Opinion
With the dominant hold that Amazon and Microsoft have on the global cloud market under scrutiny by regulators across the world, could 2025 usher in a new era of cloud competition?

By Nicky Stewart
Published: 19 Dec 2024

The UK's Competition and Markets Authority (CMA) sent shockwaves through the tech industry in October 2023 when it announced its investigation into potential anti-competitive practices in the UK cloud infrastructure services market.

The CMA is not ploughing a lonely furrow: regulators across the world, from Spain and Denmark to South Africa and (if reports are to be believed) the United States, are examining various aspects of cloud computing and its impact on competition.

This scrutiny is long overdue, and it marks a significant step forward. For too long, regulators have looked the other way as the Western world's cloud market quietly consolidated around just two cloud providers.

While these tech giants have undoubtedly played their part in a global digital industrial revolution, their dominance is often accepted as an inevitable and unchangeable reality, even if it may have been achieved by anti-competitive practices.

This implicit acceptance of the status quo is a false narrative, because there are alternatives. Challenger cloud providers stand ready to compete, asking for nothing more than a level playing field.

For inquiries like the CMA's to succeed, it is crucial that decision-makers do not allow the dominant cloud providers to monopolise the conversation, and that they give equal weight to the voices of those challengers.

At the beginning of next year, we will learn the CMA's provisional opinion on the four theories of harm under investigation. These range from concerns about exploitative pricing practices to barriers that restrict customers from switching providers. During the summer, the CMA proposed numerous remedies to combat these.
While we can't second-guess the exact conclusions, one thing is clear: challenger cloud providers hold strong and united views, based on decades of cumulative experience. These challengers offer a vital dose of reality to what can often become dry, legalistic debates.

While the industry may be guilty of using jargon like "data egress fees" and "anti-competitive licensing practices", these terms have real-world consequences. Ask a challenger provider to explain what these practices mean for their business, and you'll hear stories of dominant players charging exorbitant fees to customers who try to leave their platforms, or dramatically increasing the cost of widely used software when it's run on a competitor's cloud. These practices have profound implications for competition.

If the CMA can create a framework that enables competition, the benefits will ripple through the market. Challenger cloud providers, with their agility and innovation, will drive down prices, expand consumer choice and spur further technological advances. They will also help to address critical concerns like cloud concentration risk and digital resilience, which become ever more pressing as our dependence on cloud services grows.

The stakes couldn't be higher. This isn't just about today's challengers and consumers; it's about future-proofing the entire cloud ecosystem. Emerging markets such as AI and quantum computing, both heavily reliant on cloud infrastructure, must not fall victim to a winner-takes-all scenario. Such an outcome would stifle innovation and concentrate power in ways that could threaten global digital resilience and even national security.

The CMA, alongside its international counterparts, has a unique and urgent opportunity to reset the dial.
This is a moment to usher in a new era of openness, competition and fairness in the cloud market. Challenger cloud providers will be watching closely to see how the CMA's provisional decision translates into meaningful solutions that benefit not only the industry, but also consumers, the wider economy and the future of digital innovation.

While the last twelve months may have fired the starting gun on investigating the cloud market, the next twelve could be when we see real change begin.

In The Current Issue:
What do the home secretary's policing reforms mean for the future of the Police Digital Service?
What are the security risks of bring your own AI?
  • WWW.COMPUTERWEEKLY.COM
    LockBit ransomware gang teases February 2025 return
Despite being taken down and humiliated by the National Crime Agency (NCA)-coordinated Operation Cronos in February 2024, an unknown individual (or individuals) associated with, or claiming to represent, the LockBit ransomware gang has broken cover to announce the impending release of a new locker malware, LockBit 4.0.

In screengrabs taken from the dark web that have been widely circulated on social media in the past day, the supposed cyber criminal invited interested parties to sign up and "start your pentester billionaire journey in 5 minutes with us", promising them access to supercars and women. At the time of writing, none of the links in the post direct anywhere, while a countdown timer points to a launch date of 3 February 2025.

Robert Fitzsimons, lead threat intelligence engineer at Searchlight Cyber, said it was hard to say at this stage what LockBit 4.0 entailed: whether the gang was launching a new leak site, its old one having been seized, or whether it has made changes to its ransomware.

"It is worth noting that LockBit has already been through many iterations; its current branding is LockBit 3.0.
"It's therefore not surprising that LockBit is updating once again, and given the brand damage inflicted by the law enforcement action Operation Cronos earlier this year, there is clearly a motivation for LockBit to shake things up and re-establish its credentials, keeping in mind that the LockBit 3.0 site was hijacked and defaced by law enforcement," said Fitzsimons.

"There has been a decrease in LockBit's victim output since Operation Cronos, but this post shows that it is still trying to attract affiliates and continue its operations."

The gang's sudden announcement comes just days after it emerged that the United States government is seeking the extradition from Israel of an alleged LockBit operative, named as Rostislav Panev, to face trial for wire fraud and cyber crime.

Panev was arrested in Haifa in Israel in August, according to Israeli news site Ynet, which was first to report the extradition request. News of his arrest had been restricted until now to avoid tipping off other LockBit associates who may be located outside Russia and giving them a chance to escape to the relative safety afforded them there.

Panev is accused of working as a software developer for LockBit, and may have created the mechanism by which the gang was able to print ransom notes on printers connected to compromised systems.
Panev's lawyer told Ynet that he was a computer technician and was never aware of nor involved in any fraud, extortion or money laundering. Computer Weekly understands an extradition hearing in the case is scheduled for January 2025.

Since Operation Cronos unfolded in early 2024, the NCA and other agencies that participated in the takedown have been drip-feeding more information about the infamous cyber criminal operation.

In May, the NCA unmasked its leader, LockBitSupp, naming him as Russian national Dmitry Khoroshev and targeting him with asset freezes and travel bans, concurrent with an indictment in the US that saw him charged with a total of 26 counts of fraud, damage to protected computers and extortion. Khoroshev remains at large despite a multimillion-dollar reward, and LockBitSupp has denied that this is their true identity.

Later in the year, the NCA named and shamed a high-profile LockBit affiliate, Aleksandr Ryzhenkov, aka Beverley, who was also a key player in the Evil Corp operation and served as a henchman to its leader, Maksim Yakubets.

Despite the apparent success of Operation Cronos, recent history has shown that even when law enforcement operations are effective at disrupting their activities, cyber criminals are remarkably resilient and often able to stand up their operations again with relative ease.

Although it is not currently possible to ascertain what the person behind LockBit's announcement is actually planning, defenders should be alert to the possibility of attack in the coming weeks, and take appropriate anti-ransomware measures wherever possible.

Read more about LockBit

The US authorities say they now have more than 7,000 LockBit decryption keys in their possession, and are urging victims of the prolific ransomware gang to come forward.

Coalition is the latest company to confirm LockBit activity against vulnerable ScreenConnect instances, but the insurer found significant differences between previous LockBit attacks.

Reaction to the takedown of the LockBit ransomware gang is enthusiastic, but tempered with the knowledge that cyber criminals are often remarkably resilient.
  • WWW.COMPUTERWEEKLY.COM
    Look to the future: How the threat landscape may evolve next
Opinion
From Covid-19 to war in Ukraine, SolarWinds Sunburst, Kaseya, Log4j, MOVEit and more, the past five years brought cyber to mainstream attention, but what comes next? The Computer Weekly Security Think Tank looks ahead to the second half of the 2020s.

By Elliott Wilkes, ACDS
Published: 18 Dec 2024

It's been quite the half-decade. In fact, it's hard to know where to start when reflecting on it. The Covid-19 pandemic saw a (forced) mass shift towards hybrid working models, leaving security teams with a new and complex attack surface to secure quickly. Charges made against the CISOs of SolarWinds and Uber set a precedent of legal responsibility for CISOs when it comes to cyber attacks and reporting. Elsewhere, new regulations are being written into law across the world to protect organisations and consumers everywhere, from NIS2 to the Cyber Resilience Act.

Similarly, artificial intelligence (AI) has revolutionised cyber security, for good and bad. In some ways, AI has become a helpful ally for security teams when it comes to fighting threats, especially as teams face a barrage of new and novel threats daily. On the other hand, the uptick in attacks is likely due to the increased use of AI by cyber criminals to speed up and automate attacks. These notable events are just scratching the (attack) surface!

The cyber industry has always been fast-paced, and security teams are no strangers to change. However, the last five years have challenged the industry significantly, with an unprecedented volume and sophistication of new threats, talent retention issues and rising burnout. As always, these challenges have exemplified the resilience of the industry. We learn from one another and, as a community, we have become more open to speaking of our collective challenges and helping one another.
As we head into the unknown once again, it's critical that we continue to foster a sense of openness and community.

I find predictions difficult. This feels like using sticks to find hidden wells of water. I have no crystal ball that will reveal the spring of vulnerabilities that will be released upon us in the next five years. But I have seen some trends over the past few years that have proven hardy and are representative of significant problems that aren't going away any time soon. These are the best spots I can look to for what lies ahead.

We might see the quantum computing event horizon in the next five years, in which case all bets are off. I don't think that day will be like the vaunted Y2K that was foretold; it will be more problematic over a longer period of time. It will still be a good amount of time before quantum computing is easily accessible to criminal groups in such a way that makes it an everyday threat. Governments protecting secrets, though, are in a different boat.

I will also make the very spicy prediction that AI, at least in its current form using LLMs or things of a similar stripe, is going to sputter and fall flat. We haven't seen massive increases in uptake by significant parts of the economy for any of the leading companies, despite them shovelling money into the AI furnace by the billions. There are also reports that the current flavour of AI LLMs has reached its limit, with diminishing returns as there are no longer any major corpuses of human-created data and content to consume and use for training. There, I said it. We are nearing peak AI. Cue sad trombone.

And now for something completely different

On a much more serious note, I think the major events relating to cyber security over the next five years will be driven largely by geopolitical crises, starting with China. Between now and 2030, we will see increased aggression by China, with some form of conflict, both hot and cold, brought on by the possible annexation of Taiwan.
China has, for some time, been using police actions (and civilian fishing vessels) to encroach on the territorial sovereignty of regional nations, including the Philippines and Taiwan. I worry that what happened in Hong Kong will be attempted in a similar way, and that these methods of attacking territorial water boundaries will continue, using this playbook in Taiwan, with a diminished role for some traditional Western powers. If this comes to pass, and unfortunately it seems that's the direction things are heading, it will be a cataclysmic global event with truly massive implications. Western-based silicon manufacturers will become part of the national security apparatus as critical national infrastructure, in a way they have escaped thus far but are increasingly moving towards.

More critical national infrastructure will fail in larger ways, due to espionage, conflict or both, as we have seen with the actions of Volt Typhoon and Salt Typhoon, Chinese state-sponsored actors digging into infrastructure such as ISPs, telcos and energy companies for use in a potential future conflict and to monitor communications of strategic importance. My fear is that disruption of telcos and other everyday critical infrastructure sectors that have not gone as far in their cyber security maturity journey will force governments to assert more explicit control through regulation and direct assistance. And some of this will be long overdue, for in the year 2024, is it really defensible not to require MFA for privileged (or all) users? Or not to move away from memory-unsafe languages? Or not to keep logs on critical system events?
These things shouldn't be acceptable now, but I'm afraid it will take an even bigger catastrophe than the cyber crises we've endured in the past few years for these requirements to be stated in a sufficiently forceful way that gets some organisations to take note.

The Computer Weekly Security Think Tank looks ahead

Mike Gillespie and Ellie Hurst, Advent IM: CISOs will face growing challenges in 2025 and beyond.
Elliot Rose, PA Consulting: The most pressing challenges for CISOs and cyber security teams.
Pierre-Martin Tardif, ISACA: Six trends that will define cyber through to 2030.
Stephen McDermid, Okta: In 2025: Identities conquer, and hopefully unite.
Deepti Gopal, Gartner: CISOs: Don't rely solely on technical defences in 2025.
Paul Lewis, Nominet: Decoding the end of the decade: What CISOs should watch out for.
Rob Dartnall, SecAlliance: 2025-30: Geopolitical influence on cyber and the convergence of threat.

Russia will continue its role as global bully, but we will see more cracks emerge as it struggles, running out of updates for Windows devices and other Western technologies that are no longer available due to sanctions. Russian-based ransomware groups will move into closer alignment with the government and become proxy actors of the Kremlin, even more explicitly than they are now.

Supply chains will get hit, again, and again, and some more.
Unfortunately, this is a growing trend of the past few years, and as we saw with CrowdStrike this year (which wasn't a supply chain attack, but the disruption of its software caused a global technology event that impacted millions of people, disrupted businesses, cancelled flights, and more), these technologies have become almost irreversibly intertwined with corporate enterprise IT, to such an extent that they can cause cascade failures.

Whether the attackers are aggravated aggressor nation-states like Russia and China or neo-organised crime in the form of ransomware gangs, the next years will see disruptions of increasing frequency and magnitude. Eventually there will be a counterforce, deployed by governments, in the form of policy, law and cyber action. My hope for my friends still working in the halls of power in Washington and Whitehall is that we can mount an effective response to acts of aggression in a way that is proportionate and lasting, not overcorrecting, but likewise not wasting an opportunity to help set and enforce some norms around responsible stewardship of user data, technology and public services, as well as norms for conflict in cyberspace that are rooted in our principles and values as a society.

Elliott Wilkes is chief technology officer at Advanced Cyber Defence Systems (ACDS). A seasoned digital transformation leader and product manager, Wilkes has over a decade of experience working with both the American and British governments, most recently as a cyber security consultant to the Civil Service.
  • WWW.COMPUTERWEEKLY.COM
    Top 10 cyber security stories of 2024
News
Data breaches, data privacy and protection, and the thorny issue of open source security were all hot topics this year. Meanwhile, security companies frequently found themselves hitting the headlines, and not always for good reasons. Here are Computer Weekly's top 10 cyber security stories of 2024.

By Alex Scroxton, Security Editor
Published: 18 Dec 2024 12:00

The year 2024 threw up another diverse crop of stories in the world of cyber security, with much to pay attention to, particularly in the realm of artificial intelligence (AI), which continued to dominate the headlines. This year, we steer away from AI fear, uncertainty and doubt to focus on some of the other big issues, such as data privacy and protection, large-scale breaches, and the tricky issues surrounding the security of widely used open source components.

There was also trouble at the mill for cyber security companies themselves, which often found themselves in the headlines, frequently after the privileged access afforded by their products and services was abused to attack their customers. Ivanti, Microsoft and Okta all make our top 10 this year, and we would be remiss not to mention CrowdStrike.

Here are Computer Weekly's top 10 cyber security stories of 2024.

1. Leak of 26 billion records may prove to be 'mother of all breaches'

At the end of January 2024, a data dump comprising 26 billion records and totalling more than 25GB in size was discovered by researchers. Dubbed the largest leak in history, and the "mother of all breaches", the majority of the data related to Chinese social media platforms, but the likes of Adobe, Dropbox, LinkedIn, MyFitnessPal, Telegram and X were also included. Much of the data appeared to have been compiled from various smaller leaks, likely by a broker who intended to sell it on to others for use in identity theft, phishing attacks and account takeovers.

2. Okta doubles down on cyber in wake of high-profile breaches

In February, identity and access management (IAM) provider Okta announced plans to double its investment in security over the next 12 months and launched a Secure Identity Commitment. This came in the wake of the exploitation of its products and services during a series of cyber attacks in 2023 and earlier. The company's leadership said that, as a security leader, it recognised it needed to work a lot harder to stop ne'er-do-wells from taking advantage of the identity data its customers entrust to it.

3. Widespread Ivanti vulnerabilities make waves

Another cyber company was in the news at the start of 2024: Ivanti, a specialist in asset, identity and supply chain management. A series of vulnerabilities in its Policy Secure network access control (NAC), Ivanti Connect Secure secure socket layer virtual private network (SSL VPN), and Ivanti Neurons for zero-trust access (ZTA) products caused concern at organisations worldwide after being exploited by a threat actor. The three vulnerabilities in question enabled attackers to access privileged data and obtain elevated access rights on their victims' systems.

4. Open source alert over intentionally placed backdoor

In April, users of the open source XZ Utils data compression library narrowly avoided falling victim to a major supply chain attack, after evidence of an apparently intentionally placed backdoor in the code was revealed. The malicious code, embedded in versions 5.6.0 and 5.6.1 of the library, enabled unauthorised access to affected Linux distributions. It later emerged that the dodgy code was placed there by a malicious actor who worked hard over a long period to gain the trust of the project's developers. The security of widely used open source components was to be one of the big themes of the year.

5. Microsoft beefs up cyber initiative after hard-hitting US report

In May, Microsoft doubled down on its Secure Future Initiative (SFI), expanding the programme, which set out to address the software and vulnerability issues frequently exploited by threat actors, in the wake of a damning US government Cyber Safety Review Board (CSRB) report. Redmond said the rapid evolution of the threat landscape underscored the severity of the threats facing both its own operations and those of its customers, and admitted that, given its central role in the world's IT ecosystem, it had a critical responsibility to earn and maintain trust.

6. CrowdStrike update causes worldwide chaos

The biggest IT story of 2024 was arguably not, strictly speaking, a security incident, but appears here since it originated at a security company. On 19 July, IT pros all over the UK and beyond awoke to a fast-spreading IT outage downing key systems, originating at cyber firm CrowdStrike after it pushed a flawed rapid response update to key threat detection sensors that caused Windows computers to enter a so-called boot loop. The extensive disruption caused no major security incidents at the time, but the ramifications continue to this day, with CrowdStrike execs facing legal repercussions and even being called to account for the incident in front of politicians. As with the XZ Utils scare a couple of months previously, the CrowdStrike incident shows again the importance of paying close attention to one's code.

7. Campaigners call for evidence to reform UK cyber laws

Those who have been following the CyberUp campaign for legal reform over the past few years will know well the difficulties the group has had in convincing Britain's politicians that the time has come to reform the outdated Computer Misuse Act of 1990, which, thanks to archaic wording in regard to the offence of unauthorised access to a computer, puts security professionals in the UK at risk of prosecution simply for doing their jobs. With Keir Starmer moving into 10 Downing Street, the campaign team seized the opportunity to launch a fresh call for evidence and views during the summer, saying that about a third of UK security firms had experienced monetary losses due to the law, putting at risk £3bn of the sector's £10.5bn annual contribution to the economy.

8. NCSC celebrates eight years as Horne blows in

In eighth place on the Computer Weekly list, the National Cyber Security Centre (NCSC) celebrated its eighth birthday this year, although its new leader, Richard Horne, who took up the post in October, is only the organisation's third official CEO. Eight years may not be a particularly long time, the Brexit referendum was eight years ago, but the cyber security landscape has changed radically in that period. Looking ahead, as the interdependency between security and intelligence becomes more critical, and the risks and opportunities of new technologies and more sophisticated threats increase, the NCSC's work to get better at addressing the security of those technologies, and at using them to the UK's advantage, continues.

9. Zero-day exploits increasingly sought out by attackers

In November, the NCSC and its US equivalent, CISA, published new annual data revealing that of the 15 most exploited vulnerabilities of 2023, the majority were zero-days, compared with less than half in 2022. The trend has continued through 2024, and the NCSC warned that defenders need to dramatically up their game when it comes to vulnerability management and patching. Among the most heavily exploited CVEs were some that are now widely known, including infamous issues in Progress Software's MOVEit Transfer, Log4Shell and Citrix, many of them dating back years.

10. US TikTok ban imminent after appeal fails

At the end of 2024 came the news that TikTok is likely to be banned in the US in mere weeks, after a Washington DC appeal court rejected representations from the China-owned social media platform, which claimed its First Amendment rights were being violated. Legitimate concerns about the firm's data protection and privacy practices, and the possibility that the data TikTok holds may be exploited by the Chinese government, lie at the core of the potential ban, which would have global ramifications and impact millions of users, influencers and businesses alike. Somewhat ironically, given he once tried to ban it himself, the platform's best hope for a reprieve may now lie with president-elect Donald Trump, who will undoubtedly be an impactful force in the cyber security world in 2025.
  • WWW.COMPUTERWEEKLY.COM
    The Security Interviews: Martin Lee, Cisco Talos
The first thing worth knowing about the first ever ransomware locker is that its use was apparently motivated by revenge rather than outright criminality. The second thing worth knowing is that there was not a Russian speaker in sight.

In fact, its author, Joseph Popp, grew up in Ohio and was educated at Harvard University. He was an anthropologist, a biologist and an expert on HIV/AIDS who worked closely with the World Health Organization (WHO) in Africa, and was passed over for a job there, something that may have led to the apparent mental breakdown that resulted in the creation of the concept of ransomware.

The AIDS Trojan that Popp unleashed on the world in December 1989 was a simple piece of software by any standard. Technically, it was really a DOS (disk operating system) scrambler, which replaced the AUTOEXEC.BAT file used to execute commands when the computer system started up. It then counted the number of boot cycles the system went through until it hit 90, at which point it hid directories and encrypted the names of the files on the C: drive. Victims, or targets, then saw a message informing them that their systems were infected by a virus.

"Remember, there is NO cure for AIDS," the message chillingly read.

How were they infected? Popp posted 20,000 floppy disks to fellow attendees of a WHO AIDS conference, creating what we would now recognise as a phishing lure by labelling them "AIDS Information Introductory Diskettes". Victims were told to send $189 (about $480, or £378, adjusted to 2024) to a PO Box belonging to the PC Cyborg Corporation in Panama.
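The trigger mechanism described above, a persistent counter bumped on every start-up until it reaches 90, can be sketched in a few lines. This is an illustrative reconstruction, not Popp's original batch code: the function name and structure are invented for clarity, and the destructive payload is replaced by a harmless return value.

```python
def boots_until_trigger(n_boots, threshold=90):
    """Simulate the AIDS Trojan's trigger logic: a counter, incremented
    by the hijacked AUTOEXEC.BAT on each boot, fires the payload (hiding
    directories and scrambling C: drive filenames) on the 90th start-up.
    Returns the boot number on which the payload would fire, else None."""
    counter = 0
    for boot in range(1, n_boots + 1):
        counter += 1  # one increment per boot cycle
        if counter >= threshold:
            return boot  # the real malware ran its payload here
    return None

print(boots_until_trigger(100))  # fires on boot 90
print(boots_until_trigger(50))   # never fires within 50 boots: None
```

The delayed trigger is what made the scheme effective as extortion: by the 90th boot, victims had long since stopped associating the diskette with their machine's behaviour.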
The software also included an end user licence agreement (EULA) informing users that they would be liable for the cost of leasing it.

Popp, who was arrested in the US and extradited to the UK, never stood trial after a British judge ruled him mentally unfit to do so: according to media reports at the time, he had developed a habit of wearing condoms on his nose, hair curlers in his beard, and cardboard boxes on his head. Whether or not this was a deliberate ploy rather than an expression of insanity remains unclear. Back in the States, Popp went on to open an eponymous butterfly sanctuary and tropical garden in upstate New York, and died in 2007.

Reflecting on the weird story behind the AIDS Trojan, Martin Lee, technical lead for security research at Cisco's Talos intelligence and research unit, describes the malware as the creation of "an insane criminal genius".

"It really was something completely new, a new dimension that hadn't been mentioned before," Lee tells Computer Weekly. "If we think back to the year 1989, the internet was still basically a dozen computers in universities and the military. The internet as we know it had not taken off, the World Wide Web had not taken off. Most computers were not networked at all; even hard disk drives were very much a luxury optional extra.

"All of these things that we now take for granted, distribution over a network, payment by cryptocurrency, none of this existed. It was a fairly limited attack. It is not known, but it is not believed, that anybody paid the ransom."

Moreover, the cyber security profession simply did not exist in its current form in 1989. "It was nowhere near what it is today. It was a different world," says Lee, who characterises the IT of the day as prehistoric. "The term cyber security didn't exist and the industry didn't exist. There were individuals we would recognise as practising information security, but they tended to be in the types of environments that required security clearance, like the military or governments. It would have been a tight community where everyone knew each other."

"Certainly at the time, the first ransomware did not make a big splash in the news," he adds.

That Popp was somewhat ahead of his time is clear in that the idea of ransomware didn't really rear its head again until the mid-90s, when academics and computer scientists first started playing around with the idea of combining computer virus or malware functionality with cryptography. But even then, the world spent another decade in blissful ignorance before the first attempt was made at a criminal ransomware attack of the type we would recognise in the 2020s.

Gpcode, as it was termed, first popped up in Russia in December 2004, 20 years ago, when reports started to emerge that individual people's files were being encrypted by some strange new form of cyber attack.

"Ultimately, it turned out that an individual was, if I remember correctly, harvesting information from Russian job sites and emailing jobseekers saying, 'Hey, we would like you to apply for this job'," says Lee. "The lure document purported to be a job application form, but in fact it was ransomware which encrypted the files, and the ransom was to be paid by money transfer."
"This is really the first modern criminal ransomware where the objective to make money is clear."

Gpcode was incredibly rudimentary as ransomware goes: it used a 600-bit RSA public key to encrypt its victims' files, and Lee says that demanding the ransom be paid by money transfer (Bitcoin was still a few years off) was a dangerous gamble for the cyber criminals behind Gpcode, because it left them open to being tracked by law enforcement.

Why Russia?

The modern-day world of ransomware is now intrinsically linked to Russia. Ransomware attacks by English speakers stand out for their rarity, and even when they do occur, they often have a link to a ransomware strain developed by Russian speakers. So it is interesting that this link appears to go all the way back to the early 2000s with Gpcode. According to Lee, there may be a good reason this connection developed.

"I think it is probably linked to the dissolution of the Soviet Union," says Lee. "There was a lot of hardship in Russia in the 1990s. You'd seen the complete disintegration of a way of life and many jobs that went with that. This left a lot of very skilled people, who are very innovative and very good at what they do, struggling. I think this created an environment where people who probably wouldn't normally be drawn into criminality had to do something to survive."

In essence, the chaos of post-Soviet Russia, as shown to great effect by documentarian Adam Curtis in the landmark Russia 1985-1999: TraumaZone, created a fertile breeding ground for new types of criminality that drew in erstwhile well-educated professionals, while the nascent internet attracted technically minded innovators and hackers. Had Boris Grishenko (Alan Cumming) survived the events of GoldenEye, and not been killed by James Bond, might he now be a ransomware don?

Gpcode was not a runaway success, in that it did not net millions for its creators as ransomware operations do today, but it was notable in that it meant ransomware was starting to cut through, both in the still-emerging cyber security community and among laypeople. Gpcode also helped to establish some of the popular tropes around ransomware phishing lures. Today, phantom job offers are frequently used against victim organisations, particularly when executed as part of a targeted attack, via a highly placed executive, for example.

Over the decade that followed, the story of ransomware became one of almost continuous innovation, as cyber criminals became more motivated to extort money and to avoid capture and prosecution. Anonymity during the payment process was a particularly thorny problem the criminal underground needed to overcome, says Lee.

"In 2004, Gpcode had a single software engineer slash operator conducting the attacks, and they had this problem of how are they going to get the ransom paid to them in a way that's easy for the victim, but provides anonymity for the criminal," he says.

"Initially, we have the rise of digital currencies, E-Gold and Liberty [Reserve] to name but two, which were mechanisms outside of the traditionally regulated banking industry for transferring value between individuals," says Lee.
"They were, how should we put this, abused."

The big disadvantage of these digital currencies, from the cyber criminals' perspective, was that they both had a single point of failure: law enforcement agencies and regulators could act to disrupt the flow of illicit payments traversing them, which of course is exactly what happened.

"This then coincides with the rise of cryptocurrencies, giving an alternative way for criminals to collect their ransom through crypto," says Lee. "The other big innovation addressed the weak point of early ransomware, which was that it was one developer and operator, so we did see in the mid-2000s the development of the first ransomware as a service."

"Malicious software engineers who were very good at writing code, but maybe not so good at distributing ransomware or coming up with social engineering lures, could focus on the code and then develop a partner portal so that less technically sophisticated cyber criminals could participate in attacks; they could be hired, or enter into a partnership," says Lee. "If they divide up the tasks, it makes it more efficient."

Though it may surprise some to learn that the concept of ransomware as a service, or RaaS, is well over 10 years old, it emerged at a very different time, and the ransomware ecosystem had to go through a few more evolutions to reach its present, devastating form.

Lee explains: "The next big change comes in 2016 with the gang using SamSam. Prior to that, ransomware was a mass-market attack, distributing as much ransomware as possible to as many end-users as possible, getting it onto PCs, and demanding a few hundred dollars for the victim to get what's on their endpoints back.

"The big innovation was the gang distributing SamSam chose their victims in a different way. Instead of going for sheer numbers, they would identify businesses, get inside their networks, and combine traditional hacking techniques: infiltrating the network, finding key servers that businesses relied on, and getting the ransomware on those key servers."

In encrypting the files and stopping the functionality of those key servers, says Lee, SamSam brought the entire business to a halt, and at that point the gang could ask for a much, much larger ransom.

Read more about ransomware
• We look at ransomware attacks and the importance of good backup practice, as well as immutable snapshots, air-gapping, network segmentation, AI anomaly detection and supplier warranties.
• Anomaly detection and immutable copies can be frontline tools against ransomware. We look at the role storage can play against the latest techniques employed by ransomware gangs.
• Threat intel specialists at Recorded Future have shared details of newly developed techniques they are using to disrupt Rhysida ransomware attacks before the gang even has a chance to execute them.

This is not to say that mass-market, end-user focused ransomware has gone away. It is very much still a threat, and in many ways it is more devastating for the average person to be hit with ransomware than it is for a well-insured, regulated corporation.

"I've had people reach out to me with an elderly parent whose laptop has been hit with ransomware, and it had the last photos of their deceased spouse on it: is there a way of getting it back?" says Lee. "It's heartbreaking, and nine times out of 10 the answer is no. So, this has not gone away and it's not going to."
"Businesses may have more to lose than an end-user, but that's not to say that end-users can't suffer significant pain," Lee adds. "But the big money for the bad guys is in businesses: getting inside businesses, causing high-value disruption and destroying large amounts of value, because the profits are so much higher."

This brings us neatly to the developments we have seen since 2020, when the scourge of ransomware really took off and cyber security broke out of its niche and started to make national headlines. These have all been well documented, including the rise of double extortion attacks and the emergence of an extensive underground economy of affiliates and brokers. We are even seeing what looks like collaboration between financially motivated cyber criminal gangs and politically motivated cyber espionage operators.

This year, we have seen the beginnings of a new trend in which ransomware gangs forego the ransomware locker entirely. Just last month, the Australian and American authorities released new intelligence on the work of the BianLian ransomware gang, which has shifted solely to extortion without encryption. Could it be that ransomware, in its traditional form, is starting to reach the end of the line?

Probably not, says Lee, looking ahead, although it will look different: "You know, IT brings enormous positives to our lives and enables so much, but anywhere where IT is creating value, criminals are looking for ways to piggyback and steal that value. Ransomware has proved to be a very profitable way for them to do it.

"I think that for any new ways in which we use IT in the near- and medium-term future, we can expect there will be criminals looking to make money off that, and one of the ways that they're going to do it, for certain, is going to be through ransomware."

From ransomware's birth pangs as the howl of the frustrated and aggrieved Joseph Popp, we can chart a clear line to the big-bucks ransomware hits of the 2020s, and this continuity of criminality and innovation leads Lee to a simple conclusion.

"We need to be much more aware that for anything IT touches, we need to think about cyber security, we need to think about how the bad guys might disrupt it, because for certain, they're going to be thinking too, and someone's going to try it," he says. "The history of ransomware has been one of constant innovation, and we can expect that to continue into the future."

The Security Interviews series
• Okta's regional chief security officer for EMEA sits down with Dan Raywood to talk about how Okta is pivoting to a secure-by-design champion.
• We speak to Google's Nelly Porter about the company's approach to keeping data as safe as possible on Google Cloud.
• Matt Riley, data protection and information security officer at Sharp Europe, discusses balancing cyber risks with business leaders' goals.
• Former NCSC boss Ciaran Martin talks about nation-state attacks and how the UK is in danger of misunderstanding its adversaries.
• Alex Yampolskiy conceived the idea for risk management specialist SecurityScorecard after getting stung by a SaaS supplier that was being cavalier with its customer data. He tells his story to Computer Weekly.
• In October 2023, Rebecca Taylor of SecureWorks was recognised at the annual Security Serious Unsung Heroes Awards for her work on diversity in the sector. Computer Weekly caught up with her.
  • WWW.COMPUTERWEEKLY.COM
    Conservative MP adds to calls for public inquiry over PSNI police spying
A court ruling that the Metropolitan Police and the Police Service of Northern Ireland (PSNI) unlawfully placed journalists under surveillance has led to renewed calls for a public inquiry into potential abuses of surveillance powers by police forces.

The Investigatory Powers Tribunal (IPT) found today that a former chief constable of the PSNI, George Hamilton, acted unlawfully by signing off on a directed surveillance operation to identify the suspected source of two Northern Ireland journalists.

The PSNI unlawfully targeted investigative journalists Barry McCaffrey and Trevor Birney after they produced a film documentary, No Stone Unturned, exposing police collusion with a paramilitary group that murdered six innocent Catholics watching a football match in a pub in Loughinisland, County Down, in 1994.

The PSNI admitted in a report published during the course of the legal proceedings that it had placed over 500 lawyers and 300 journalists under surveillance. Those targeted included more than a dozen journalists working for the BBC.

PSNI chief constable Jon Boutcher appointed special advocate Angus McCullough KC to review matters of concern following disclosures in June 2024 that police had used surveillance powers in an attempt to identify journalists' confidential sources. But campaigners said today that the review does not go far enough, and called for the government to set up a public inquiry into police surveillance of journalists in Northern Ireland and the rest of the UK.

In an open letter to McCullough released today, Conservative MP David Davis said it was clear that the PSNI had contributed to fostering a culture of intimidation and hostility towards journalists, as well as harbouring contempt for both safeguards and the law. Davis told McCullough he was concerned that the independent review lacked the necessary powers to uncover the true extent of the PSNI's behaviour.

"Without the authority to compel full disclosure from the PSNI and demand the release of all relevant documents, there is a significant risk that crucial evidence will remain concealed," he added.

Davis said the PSNI had a habit of withholding critical evidence, and that much of the evidence of the PSNI's surveillance practices was only considered by the Investigatory Powers Tribunal because it was disclosed by Durham Police.

"This pattern of behaviour raises doubts as to whether the PSNI will engage in good faith with this review," he said. "I am particularly worried that covert surveillance to ascertain and identify journalistic sources appears to be common practice for the PSNI."

"The joint case of Mr Birney and Mr McCaffrey reveals a sustained approach by the PSNI to target anyone who dares to investigate the Police Service's actions in the Troubles," added Davis.

Evidence heard by the Investigatory Powers Tribunal showed the PSNI systematically bypassing judicial oversight and deciding for itself whether to use targeted surveillance mechanisms. "Time and again, directed surveillance, seizing communications data and data preservation requests were executed without appropriate judicial or independent authorisation," the letter states.

Davis told McCullough there was a consistent pattern in the PSNI of misrepresenting the reasons for surveillance. "The PSNI justified its surveillance of Birney and McCaffrey by claiming to target a public official allegedly leaking sensitive documents. However, it is clear that the focus of these operations was the journalists themselves," he said.

"What this suggests is that the PSNI's true objective was to stifle reporting on police misconduct rather than to address any substantive security threats," said Davis. "This is a grave infringement on both press freedoms and the public interest. The PSNI displayed nothing but contempt for the IPT hearings, and as a direct result of their failure to disclose relevant documents, the timetable for the hearings was repeatedly disrupted."
"This attempt to obstruct the IPT's ability to access the full scope of unlawful actions is, once again, part of a wider pattern," he wrote.

Speaking after the verdict, Birney said the judgement had raised serious concerns about the lack of safeguards and oversight of police operations. "Only a public inquiry can properly investigate the full extent of unlawful and systematic police spying operations targeting journalists, lawyers and human rights defenders in the north of Ireland," he added.

McCaffrey called for further investigations after the IPT criticised former PSNI chief constable George Hamilton for approving the unlawful undercover surveillance operation against a civilian member of staff at Northern Ireland's Police Ombudsman's Office (PONI). "That a chief constable has acted unlawfully, we think, is a major embarrassment, and it's something that needs there to be a public inquiry," he said.

David Davis said the ruling was the most dramatic and far-reaching by the Investigatory Powers Tribunal to date. "We might have been critical of them in the past," he said, but the IPT had highlighted "the sheer depth of activity and, frankly, concealed dishonesty on the part of state agencies and in this case two police forces".

Daniel Holder, director of the Committee on the Administration of Justice, said this was a case where police had very clearly stepped outside the rule of law. "We're dealing with something that seems to be part of a much broader pattern and practice of seeking to conceal the involvement of police and other state agents in conflict-era violations, and that's deeply concerning," he said.
"But the fact that this has happened once doesn't mean it hasn't happened on other multiple occasions, and that's what we now need to look into."

Patrick Corrigan, Northern Ireland director for Amnesty International, described the case as a landmark for press freedom and the rights of journalists to protect their sources, which he described as the cornerstone of any free society. "The revelation by the tribunal that the PSNI spied on staff from the office of the police ombudsman, the very statutory body which investigates police wrongdoing, should worry everyone who cares about policing and police oversight in Northern Ireland," he said.

Disclosures in the tribunal hearing revealed that the PSNI conducted a "defensive operation" to monitor phone calls made by police officers and to compare them with the phone numbers of journalists who had given their numbers to the PSNI press office. The purpose of the operation was, among other objectives, to identify officers who may have leaked information to journalists.
Although the PSNI suspended the operation for the course of the tribunal hearing, it told the IPT that it plans to resume the operation today. Birney said the PSNI had effectively conducted a dragnet operation against journalists, lawyers and human rights activists.

Read more about Barry McCaffrey and Trevor Birney's case against the PSNI
• Tribunal criticises PSNI and Met Police for spying operation to identify journalists' sources.
• Detective wrongly claimed journalists' solicitor attempted to buy gun, surveillance tribunal hears.
• Ex-PSNI officer "deeply angered" by comments made by a former detective at a tribunal investigating allegations of unlawful surveillance against journalists.
• Detective reported journalists' lawyers to regulator in unlawful PSNI surveillance case.
• Lawyers and journalists seeking "payback" over police phone surveillance, claims former detective.
• We need a judge-led inquiry into police spying on journalists and lawyers.
• Former assistant chief constable Alan McQuillan claims the PSNI used a dedicated laptop to access the phone communications data of hundreds of lawyers and journalists.
• Northern Irish police used covert powers to monitor over 300 journalists.
• Police chief commissions independent review of surveillance against journalists and lawyers.
• Police accessed phone records of "trouble-making journalists".
• BBC instructs lawyers over allegations of police surveillance of a journalist.
• The Policing Board of Northern Ireland has asked the Police Service of Northern Ireland to produce a public report on its use of covert surveillance powers against journalists and lawyers after it gave "utterly vague" answers.
• PSNI chief constable Jon Boutcher has agreed to provide a report on police surveillance of journalists and lawyers to Northern Ireland's policing watchdog, but denies "industrial use" of surveillance powers.
• Report reveals Northern Ireland police put up to 18 journalists and lawyers under surveillance.
• Three police forces took part in surveillance operations between 2011 and 2018 to identify sources that leaked information to journalists Trevor Birney and Barry McCaffrey, the Investigatory Powers Tribunal hears.
• Amnesty International and the Committee on the Administration of Justice have asked Northern Ireland's policing watchdog to open an inquiry into the Police Service of Northern Ireland's use of surveillance powers against journalists.
• Britain's most secret court is to hear claims that UK authorities unlawfully targeted two journalists in a covert surveillance operation after they exposed the failure of police in Northern Ireland to investigate paramilitary killings.
• The Police Service of Northern Ireland is unable to delete terabytes of unlawfully seized data taken from journalists who exposed police failings in the investigation of the Loughinisland sectarian murders.
• The Investigatory Powers Tribunal has agreed to investigate complaints by Northern Ireland investigative journalists Trevor Birney and Barry McCaffrey that they were unlawfully placed under surveillance.
  • WWW.COMPUTERWEEKLY.COM
    Using AI to build stronger client relationships in 2025
In an era where customer expectations are higher than ever, integrating artificial intelligence (AI) into client relationships has become a game-changer. AI can significantly enhance and strengthen client relationships when adopted and managed correctly. One AI-powered tool making significant strides is Copilot for 365, but businesses must address key concerns when adopting AI to avoid getting a lump of coal from clients.

AI can transform how businesses interact with their customers. For example, it can quickly push out business cases to customers, freeing up sales managers to focus more on building relationships and engaging directly with clients. Like Santa's elves working tirelessly behind the scenes, AI tools handle administrative tasks, enabling more face-to-face interactions, fostering stronger connections and enhancing overall client satisfaction.

AI capabilities allow businesses to process and analyse vast amounts of data, from spending habits and frequent buys to the time spent looking at a specific product. These insights enable teams to provide a more tailored experience via personalised recommendations, unique suggestions and substitution offers when a product is out of stock. AI can also offer timely responses to any customer questions or requests through AI chat functions, improving the overall customer experience. One example of a company that has thoroughly harnessed customer data is Spotify: from personalised playlists to the famous Spotify Wrapped, its CEO has labelled AI "a huge unlocking technology".

While the benefits of AI are clear, there is a noticeable divide among customers when it comes to AI adoption. Some are eager to embrace these technologies, recognising the potential for enhanced service and efficiency.
Others prefer to wait and see, often due to concerns about privacy, security and the reliability of AI. This hesitation is understandable, as recording meetings and conversations might lead to clients holding back their true thoughts and concerns out of fear of being misinterpreted. So, getting buy-in from customers and ensuring transparency about the use of AI tools is essential for successful implementation. Customers need to trust and understand the benefits of AI to fully embrace it. A key factor for customers is getting data into the right place to truly harness AI capabilities. That's where trusted experts can come in to drive adoption in the future of AI.

The 12 Days of AI
• On the First Day of AI, we explore how AI is being used in marketing, the benefits and key use cases, as well as concerns and how marketers can best take advantage of the technology.
• On the Second Day of AI, we look at the importance of truly understanding what AI is to enable true organisational transformation.
• On the Third Day of AI, we explore some of the key trends to keep an eye on, and prepare for, in 2025.
• On the Fourth Day of AI, we discuss the value of adopting AI responsibly, and outline how businesses can build responsible adoption into their plans.
• On the Fifth Day of AI, we explore how AI is reshaping HR: boosting productivity, addressing concerns, and preparing organisations for the future.
• On the Sixth Day of AI, we explore how leveraging AI and cloud can enhance business performance, and share tips for successful implementation.
• On the Seventh Day of AI, we explore the double-edged sword of AI in cybersecurity and how businesses can protect themselves against the cyber grinches.
• On the Eighth Day of AI, we explore the key considerations and strategic frameworks essential for extracting maximum value from AI projects.
• On the Ninth Day of AI, we explore the critical role data plays in AI implementation and the key steps business leaders must take to prepare their data for a successful AI future.
• On the Tenth Day of AI, we explore the evolving role of AI in managed services and what to expect in 2025.
• On the Eleventh Day of AI, we explore the differences between private and public AI, the key benefits and downsides of private AI, future trends, and how to get started with a hybrid AI approach.

To mitigate the risks and maximise the benefits of AI in client relationships, it is crucial to ensure ethical and effective use. Businesses must get on board with data insights and partner with experts to navigate the complexities of AI integration. Establishing clear guidelines and maintaining transparency with customers about how their data will be used can help build trust.

Also, as AI use becomes more regulated, businesses must stay up to date with the latest rules and legislative changes. Compliance not only maintains trust with customers and employees, but also avoids the reputational damage caused by data breaches. Embracing AI responsibly ensures sustainable success and innovation. Just as Santa carefully manages his operations to deliver joy worldwide, businesses must manage AI responsibly to deliver exceptional service.

As we look towards the future, AI tools are expected to become even more beneficial for businesses. In 2025, AI could become more tailored to the specific needs of businesses, offering cost-effective solutions and enabling even greater personalisation of customer interactions. For instance, AI tools like Copilot for 365 can conduct analysis while representatives are away on leave, ensuring they can make the most significant impact upon their return. Going further, advancements in sentiment analysis can provide insights into customers' body language and engagement levels during video calls, helping representatives address any issues proactively.
AI can also prepare detailed briefs for meetings, including understanding sector trends and customer backgrounds, making interactions more productive and relevant.

AI holds tremendous potential to revolutionise client relationships. By addressing the barriers to adoption, ensuring ethical use and leveraging the benefits, businesses can significantly enhance their customer relationships and satisfaction. As we move forward, embracing these technologies will be crucial for staying competitive and meeting the ever-evolving expectations of customers. Businesses of all sizes must prepare and invest in AI to remain fierce competitors in a continuously evolving landscape.

Much like Santa's preparation for the festive season, thoughtful and strategic adoption of AI can bring about significant positive changes. So, whether you're looking to join the nice list or simply want to improve your client relationships, now is the time to embrace AI and let tools like Copilot guide your sleigh to success.

Josie Rickerd is director of enterprise accounts at ANS, a digital transformation provider and Microsoft's UK Services Partner of the Year 2024. Headquartered in Manchester, it offers public and private cloud, security, business applications, low code and data services to thousands of customers, from enterprise to SMB and public sector organisations.
  • WWW.COMPUTERWEEKLY.COM
    2025-30: Geopolitical influence on cyber and the convergence of threat
Opinion
From Covid-19 to war in Ukraine, SolarWinds Sunburst, Kaseya, Log4j, MOVEit and more, the past five years brought cyber to mainstream attention, but what comes next? The Computer Weekly Security Think Tank looks ahead to the second half of the 2020s.
By Rob Dartnall, SecAlliance
Published: 17 Dec 2024

The early 2020s saw multiple threads of the cyber threat landscape evolve, mostly as individual strands rather than symbiotically. However, as we move into the latter part of the decade, we will see the big convergence of these. Over the last five years, we have become resigned to the fact that the supply chain has become one of our biggest threats, geopolitics and conflicts have driven real-world effects in the cyber domain, technology providers still don't deliver secure products, governments fail to recognise the digital environment as the new critical national infrastructure (CNI), and artificial intelligence (AI) is creeping into offensive tools used by threat actors.

As we approach the end of the decade and the early 2030s, we will still have failed to address these individual threats, and they will come together, causing significant disruption and harm.

For the past couple of years, it has become ever more evident that technology providers are focused on pushing products to market with little care for their security. Recently, we have even seen the FBI and CISA demand providers start to properly adopt secure-by-design principles. This laissez-faire approach has already led to huge numbers of unnecessary compromises, and to entities having to invest significantly in external attack surface management (EASM) and vulnerability intelligence programmes.

For many of my clients, the majority of the breaches they suffer emanate from their supply chain.
Traditionally, the supply chain was targeted by sophisticated nation-state actors, as they fully understood and could leverage the one-to-many attack. However, as criminal actors became more aware and capable, they began to adopt this technique as well, and more recently we have also seen hacktivists, aligned to social and geopolitical issues, take this approach.

As they, and we, move away from old media to newer, social and open platforms, where both mis- and disinformation are rampant, we will also see a broader set of entities being targeted, increasing the need for organisations to monitor a wider set of platforms for negative comments and sentiment. We have already started to see centralised functions that have been outsourced being targeted by these types of actors following disinformation campaigns on these platforms.

The Computer Weekly Security Think Tank looks ahead
• Mike Gillespie and Ellie Hurst, Advent IM: CISOs will face growing challenges in 2025 and beyond.
• Elliot Rose, PA Consulting: The most pressing challenges for CISOs and cyber security teams.
• Pierre-Martin Tardif, ISACA: Six trends that will define cyber through to 2030.
• Stephen McDermid, Okta: In 2025: Identities conquer, and hopefully unite.
• Deepti Gopal, Gartner: CISOs: Don't rely solely on technical defences in 2025.
• Paul Lewis, Nominet: Decoding the end of the decade: What CISOs should watch out for.

These and future successful attacks will come down to the fact that most governments, and even regulators, do not fully understand, or even if they do, cannot properly map, these critical suppliers and central functions in the digital environment. As such, they have not rolled out the legislation or regulation required to properly protect it, or the society that relies upon it.

Finally, we get to AI; not the Hollywood type, but the narrow AI we are starting to leverage now and are likely to be using into the end of the decade, albeit becoming slightly more capable and advanced.
While AI will be great for cyber defence, it will of course be used by nefarious actors. Some of the likely uses of AI in offensive operations are already being seen, from enhanced social engineering (such as better phishing emails and the adoption of AI-supported deepfake videos and voice notes) to the development of supporting attack infrastructure and the development and deployment of malware. This is not an article on AI and all of its uses by bad actors, but one significant area of concern is the use of AI to identify vulnerabilities (and variants) and rapidly and automatically develop and deploy exploit code, reducing n-day exploitation times down to minutes or worse.

So, as we move into the end of the decade, what do I see in my magical mystical ball, also known as an intelligence assessment? It is the convergence of all of this. Software will still be like Swiss cheese, more actors will have more capabilities due to AI, supply chain compromise will be commonplace, digital CNI will not have been protected, and single-point-of-failure attacks against the supply chain will consistently take critical services offline. As more nations with developing economies acquire offensive capabilities, geopolitics becomes more fractured. The digital environment will simply be a more dangerous place to do business and, though unlikely, some nations may even be taken offline for days at a time.
  • WWW.COMPUTERWEEKLY.COM
    Ofcom publishes Illegal Harms Codes of Practice
News: Ofcom publishes Illegal Harms Codes of Practice
The codes of practice and guidance from Ofcom outline the steps online services providers can take to protect their users from illegal harms.
By Sebastian Klovig Skelton, Data & ethics editor
Published: 17 Dec 2024 13:06

Online harms regulator Ofcom has published its first code of practice for tackling illegal harms under the Online Safety Act (OSA), giving businesses three months to prepare before enforcement begins in March 2025. Published 16 December 2024, Ofcom's Illegal Harms Codes and guidance outlines the steps providers should take to address illegal harms on their services. This includes nominating a senior executive to be accountable for OSA compliance, properly funding and staffing content moderation teams, improved algorithmic testing to limit the spread of illegal content, and removing accounts that are either run by or on behalf of terrorist organisations.

Covering more than 100,000 online services, the OSA applies to search engines and firms that publish user-created content, and contains 130 priority offences, covering a variety of content types including child sexual abuse, terrorism and fraud, that firms will need to proactively tackle through their content moderation systems. With the publication of the codes, providers now have a deadline of 16 March 2025 to fulfil their legal duty to assess the risk of illegal harms taking place on their services, after which they will immediately be expected to implement the safety measures set out in the codes, or use other effective measures to protect their users. Ofcom has said it is ready to take enforcement action if providers do not act promptly to address the risks on their services. Under the OSA, failure to comply with its measures, including a failure to complete the risk assessment process within the three-month timeframe, could see firms fined up to 10% of their global revenue or £18m (whichever is greater).

For too long, sites and apps have been unregulated, unaccountable and unwilling to prioritise people's safety over profits. That changes from today, said Ofcom chief executive Melanie Dawes. The safety spotlight is now firmly on tech firms and it's time for them to act. We'll be watching the industry closely to ensure firms match up to the strict safety standards set for them under our first codes and guidance, with further requirements to follow swiftly in the first half of next year.

Technology secretary Peter Kyle, who set out his draft Statement of Strategic Priorities (SSP) to the regulator in November 2024, described the codes as a material step change in online safety that means platforms will have to proactively take down a host of illegal content. This government is determined to build a safer online world, where people can access its immense benefits and opportunities without being exposed to a lawless environment of harmful content, he said. If platforms fail to step up, the regulator has my backing to use its full powers, including issuing fines and asking the courts to block access to sites. These laws mark a fundamental reset in society's expectations of technology companies.
I expect them to deliver and will be watching closely to make sure they do.

While the SSP is set to be finalised in early 2025, the current version contains five focus areas: safety by design; transparency and accountability; agile regulation; inclusivity and resilience; and innovation in online safety technologies. Under the OSA, Ofcom will have to report back to the secretary of state on what actions it has taken against these priorities to ensure the laws are delivering safer spaces online, which will then be used to inform next steps.

Ofcom said it will hold a further consultation in spring 2025 to expand the codes, which will include looking at proposals on banning accounts that share child sexual abuse material, crisis response protocols for emergency events such as the August 2024 riots in England, and the use of hash matching to prevent the sharing of non-consensual intimate imagery and terrorist content.

Under Clause 122 of the OSA, Ofcom has the power to require messaging service providers to develop and deploy software that scans phones for illegal material. Known as client-side scanning, this method compares hash values of message content, computed on the device before encryption, against a database of hash values of known illegal content stored on the user's device. Encrypted communication providers have said Ofcom's power to require blanket surveillance in private messaging apps in this fashion would catastrophically reduce safety and privacy for everyone.

Responding to the publication of the codes, Mark Jones, partner at law firm Payne Hicks Beach, said the fact that there will have been 14 months between the OSA receiving Royal Assent in October 2023 and the codes coming into force in March 2025 shows there has been no urgency in tackling illegal harms. Let's be clear: this is, to a degree, self-regulation. Providers decide for themselves how to meet the legal duties and what is proportionate for them, he said.
Ofcom does, however, have enforcement powers, such as fines of up to £18m or 10% of global turnover, or even blocking sites in the most serious of cases. But will we see these powers being used swiftly, or at all? Critics say the codes of practice do not go far enough and that a gradualist approach is being taken to illegal harms.

Xuyang Zhu, partner at global law firm Taylor Wessing, added that while further codes of practice are set to be published, companies now have strict timelines to adhere to and can no longer delay taking action on implementing safety measures. Companies need to act now if they want to avoid failing compliance and facing potentially significant fines, she said. For many services, it will take substantial time and effort to do the risk assessment, going through the systems and data to identify risks, as well as putting in compliance measures to mitigate the identified harms. It won't be an easy task and, to ensure they can make the deadline, companies need to start now.

Ofcom previously published its draft online child safety codes for tech firms in April 2024.
Under those codes, Ofcom expects any internet service that children can access (including social media networks and search engines) to carry out robust age checks, to configure its algorithms to filter the most harmful content out of children's feeds, and to implement content moderation processes that ensure swift action is taken against this content. The draft codes also include measures to ensure tech firms' compliance, including having a named senior person accountable for compliance with the children's safety duties, an annual senior-body review of all risk management activities relating to children's safety, and an employee code of conduct that sets standards for employees around protecting children.

Read more about online safety
Schools go smartphone-free to address online harms: Schools are implementing smartphone-free policies in an attempt to curb students' exposure to online harms, but teachers and parents are worried the Online Safety Act will only partially address concerns.
Ofcom's online safety preparedness efforts hobbled by government: Despite Ofcom's progress so far, UK government changes to the scope and timetable of the Online Safety Bill are hobbling the ability of the regulator to successfully prepare for the new regime.
Online Safety Bill screening measures amount to prior restraint: The Open Rights Group is calling on Parliament to reform the Online Safety Bill, on the basis that its content-screening measures would amount to prior restraint on freedom of expression.
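The hash matching described in the Ofcom story can be illustrated with a minimal sketch. This is not Ofcom's or any provider's implementation: the digest database and the content are made up, and real matching systems typically use perceptual hashes that tolerate small changes to media, rather than the exact cryptographic hash used here for simplicity.

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of a piece of content."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical on-device database of digests of known prohibited content.
known_hashes = {sha256_digest(b"example prohibited payload")}

def matches_known_content(data: bytes) -> bool:
    """True if the content's digest appears in the database."""
    return sha256_digest(data) in known_hashes

print(matches_known_content(b"example prohibited payload"))   # exact copy: True
print(matches_known_content(b"example prohibited payload!"))  # one byte changed: False
```

Note that an exact digest changes completely if even one byte of the content changes, which is one reason critics question the robustness, as well as the privacy, of client-side scanning schemes.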
  • WWW.COMPUTERWEEKLY.COM
Driving government efficiency through better software and IT management
Moreover, ITAM's emphasis on transparency and accountability resonates strongly with the public sector's commitment to stewardship. By adopting practices that eliminate waste while safeguarding IT services, government departments can set a powerful example of how efficiency and effectiveness can go hand in hand. This should be ingrained in the culture, from a single request for a piece of hardware to negotiating a multi-year contract.

The role of collaboration
Success will also depend on collaboration. Departments should work together to share best practices, pool resources and negotiate better deals with suppliers. Engaging with forums like the ITAM Forum can provide invaluable insights and support, helping departments maximise the impact of their ITAM initiatives.

The UK government's efficiency drive offers an opportunity to rethink how IT resources are managed and deployed. By embracing ITAM and complementary strategies, departments can deliver significant savings while maintaining and enhancing public service quality. This compelling vision for the future requires leadership, commitment and a clear focus on value. As departments rise to meet this challenge, the ITAM community stands ready to support them every step of the way.

Read more ITAM stories
8 top enterprise asset management software products: Neglecting enterprise asset management can lead to higher equipment costs and delayed operations. Learn more about EAM software and which is best for your company.
8 top inventory management software products: Inventory management software can help companies manage their supply chain in a turbulent world, but using the right product is crucial. Learn more about which software to choose.
  • WWW.COMPUTERWEEKLY.COM
    Top 10 business applications stories of 2024
News: Top 10 business applications stories of 2024
Business applications are now AI-enabled, but it is not easy to deploy these products successfully to improve business metrics.
By Cliff Saran, Managing Editor
Published: 16 Dec 2024 17:50

There is no denying the impact artificial intelligence (AI), in particular generative AI (GenAI), is having on business applications. In many ways, the intensity of industry excitement over AI suggests the technology is at the peak of the Gartner hype cycle, which means businesses are about to realise how complex doing AI well really is. The challenge for many is finding use cases that can deliver a fast return on investment and scale beyond a pilot roll-out cost-effectively. This is particularly relevant when deploying applications that use large language models (LLMs), as costs can quickly escalate. From what Computer Weekly is seeing, there appears to be a shift in business away from public LLMs to small language models and AI models that can be tuned and run on-premise.

A key message coming from industry experts with regard to training AI on internal data sources is that the data needs to be of high quality. Enterprise data is often spread across multiple applications, and there is generally no single version of the truth. Trying to consolidate data into one store can be a challenge, as applications are owned by different areas of the business and a single data architecture may not seem like a high priority.

AI is often linked to enterprise resource planning (ERP) providers' strategies to migrate customers onto their cloud services. This can be a challenge for many organisations, which tend to run heavily customised versions of an ERP system. Migrating these customisations is extremely costly, even when the project goal is not to customise the software.
For instance, when Birmingham City Council began a migration from an SAP ERP system it had been running for years, the project to move to Oracle cost the council millions more than had been budgeted. The migration also needed manual steps, which prevented the council from completing its annual financial report, leaving a gaping hole in its budget.

One of the issues IT leaders face during a multi-year ERP upgrade project is that the software acquired at the start of the project may not be what is needed a year or so later. IT leaders have always needed to deal with shelfware. Sometimes, software providers offer tempting discounts on product bundles. However, while the purchase price may be tempting, ongoing software maintenance can quickly erode these savings.

Third-party support providers can offer a way to reduce these ongoing costs and help IT departments keep older enterprise software platforms running for longer. In effect, the third-party software support provider has access to any of the patches and support documentation a customer has downloaded before they switch over. After switching, subsequent patches and support are no longer available from the software provider.
However, given that many of the products that move to third-party support have been run for several years, entirely new issues that require code changes are unlikely.

Here are Computer Weekly's top 10 business applications articles this year:
1. Control of SaaS costs: The ease with which someone with a corporate credit card can buy a software-as-a-service (SaaS) product can lead to runaway costs and data leakage risks.
2. VMware cost increases after Broadcom ends discounts: UK non-profit London Grid for Learning and Belgian university KU Leuven are just two of the academic organisations facing huge licensing cost increases after Broadcom scraps its VMware academic discount scheme.
3. Failed Birmingham Oracle implementation: Birmingham City Council's Oracle system, the biggest of its kind in Europe, went live in April 2022, resulting in a catastrophic IT failure.
4. GenAI in action: Many organisations are testing out uses for generative AI, but how are they getting on? We speak to five early adopters to find out the lessons learned so far.
5. Long-term SAP Rise strategy: The independent UK and Ireland SAP User Group chair describes the upgrade as a marathon, not a sprint, but says SAP Rise is also inevitable.
6. AI readiness: What is it, and is your business ready? Artificial intelligence readiness encompasses all the elements, processes and steps needed to prepare an organisation to implement AI systems.
7. AI assurance platform for enterprises: The platform is designed to drive demand for the UK's artificial intelligence assurance sector and help build greater trust in the technology by helping businesses identify and mitigate a range of AI-related risks.
8. AI platform for business transformation: We report on how a deal between ServiceNow and Rimini Street may offer IT leaders an alternative route to enterprise AI.
9. Customer experience: The flag carrier of the Philippines discusses how its digital transformation journey has improved customer experience and streamlined internal processes.
10. Open source is not a trust issue, it's an innovation issue: Businesses are seeing greater value from open source software, such as greater levels of productivity and reduced operational costs.
  • WWW.COMPUTERWEEKLY.COM
    Private vs public AI: Which should your business use in 2025?
Imagine a Christmas where your business predicts market trends before they happen, streamlines operations effortlessly, and secures sensitive data with elf-like precision. This isn't a far-off dream: it's the reality of artificial intelligence (AI) today. But AI is not a one-size-fits-all solution. There are different types of AI to consider, and steps to take to lay the groundwork for successful AI adoption. In fact, the value of the AI industry is projected to increase more than 13-fold over the next six years, due to the ever-increasing advancements in this space. Two of these variations are private and public AI, each with its own set of capabilities and drawbacks. As a business leader, you face a festive decision: should you harness the power of private AI, or leverage the vast resources of public AI?

Public AI operates on hyperscale cloud-based platforms and tends to be accessible to multiple users and businesses. These platforms leverage vast amounts of data from various sources, providing powerful, general-purpose AI capabilities. However, this accessibility comes with trade-offs in terms of security and data privacy. Private AI, on the other hand, is tailored and confined to a specific organisation. It offers bespoke solutions, retrained to meet the unique needs of a business while ensuring data remains secure within the organisation's cloud or private infrastructure. This approach mitigates the risks associated with public AI, such as unauthorised data sharing and security breaches.

Security: One of the key advantages of private AI is enhanced security. By operating with a dedicated model and within a private environment, businesses can protect sensitive information and ensure compliance with data privacy regulations. This is particularly crucial for sectors handling confidential data, such as healthcare, fintech and government agencies.

Performance: Private AI can deliver more tailored performance, customised to specific business requirements. With dedicated hardware, businesses can optimise AI workloads for speed and efficiency, leading to more accurate and timely insights.

Control and customisation: Private AI offers greater control over the AI environment. Businesses can customise their AI models to align with their strategic goals and operational needs. This level of control is invaluable for developing bespoke solutions that drive competitive advantage, and it also provides a wider choice of customised models that can be deployed.

These benefits might look tempting to business leaders, but it's also important to consider the downsides.

Costs: Implementing and maintaining private AI infrastructure can be expensive. The costs associated with dedicated hardware, specialised talent and ongoing maintenance can be a significant barrier for smaller organisations.

Complexity: Managing private AI requires a deep understanding of both AI technologies and the specific business context. This complexity can make it challenging to deploy and scale AI solutions effectively without the right technology partner.

Scalability: While private AI offers tailored solutions, it may lack the scalability of public AI platforms. Businesses need to carefully plan their AI strategy to ensure they can scale their AI initiatives as needed without compromising performance or security.

In 2024, we have seen significant advancements in AI infrastructure, making software more accessible and flexible, though hardware costs remain high. The trend towards making private AI more consumable for smaller players is expected to continue into 2025. Large organisations will continue to lead in adopting private AI, but we anticipate a shift towards more experimental and flexible AI environments, enabling businesses to develop and refine their AI capabilities internally. The introduction of regulatory frameworks like the General AI Bill will also shape the future of AI deployment.
Businesses must ensure their AI models are trained on unbiased data and adhere to ethical standards, avoiding issues like AI hallucinations and misinformation.

The 12 Days of AI
On the First Day of AI, we explore how AI is being used in marketing, the benefits and key use cases, as well as concerns and how marketers can best take advantage of the technology.
On the Second Day of AI, we look at the importance of truly understanding what AI is to enable true organisational transformation.
On the Third Day of AI, we explore some of the key trends to keep an eye on, and prepare for, in 2025.
On the Fourth Day of AI, we discuss the value of adopting AI responsibly, and outline how businesses can build responsible adoption into their plans.
On the Fifth Day of AI, we explore how AI is reshaping HR: boosting productivity, addressing concerns, and preparing organisations for the future.
On the Sixth Day of AI, we explore how leveraging AI and cloud can enhance business performance, and share tips for successful implementation.
On the Seventh Day of AI, we explore the double-edged sword of AI in cybersecurity and how businesses can protect themselves against the cyber grinches.
On the Eighth Day of AI, we explore the key considerations and strategic frameworks essential for extracting maximum value from AI projects.
On the Ninth Day of AI, we explore the critical role data plays in AI implementation and the key steps business leaders must take to prepare their data for a successful AI future.
On the Tenth Day of AI, we explore the evolving role of AI in managed services and what to expect in 2025.

Adopting a hybrid AI approach, which combines the strengths of both private and public AI, is an increasingly attractive proposition for business leaders. Using both ways of implementing AI can be a more accessible route to certain private AI capabilities, while keeping costs and time investments to a minimum by supplementing with public AI.
But adopting a hybrid approach to AI is not just a technology choice; it is a strategic business decision, and business leaders need to consider the following steps:

Evaluate your AI needs: Assess the specific requirements of your business and determine where AI can add the most value. Identify the types of data you need to protect and the AI capabilities you require.
Find the right partner: Collaborate with partners who understand the AI stack and can provide the necessary expertise and support. Look for partners with a proven track record in AI implementation and security.
Focus on security and ethics: Ensure your AI solutions adhere to stringent security protocols and ethical guidelines. Implement secondary AI layers for fact-checking and to prevent AI-generated misinformation and hallucinations.
Plan for scalability: Develop a roadmap for scaling your AI initiatives. Consider how you will manage and grow your AI infrastructure as your business needs evolve.

By carefully considering these factors, businesses can effectively leverage AI technologies, harnessing the power of both private and public AI to drive innovation, enhance performance, and maintain a competitive edge. A hybrid approach to AI is not merely a Christmas toy; it is a strategic imperative for businesses aiming to thrive in the AI-driven future.

Chris Folkerd is director of core infrastructure at ANS, a digital transformation provider and Microsoft's UK Services Partner of the Year 2024. Headquartered in Manchester, it offers public and private cloud, security, business applications, low code, and data services to thousands of customers, from enterprises to SMBs and public sector organisations.
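The hybrid routing idea discussed above can be sketched in a few lines. Everything here is illustrative: the endpoint names and the sensitivity patterns are hypothetical stand-ins for a real data-classification policy, not part of any product or service mentioned in the article.

```python
import re

# Hypothetical patterns standing in for a real data-classification policy.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{16}\b"),              # card-number-like digit runs
    re.compile(r"\b[A-Z]{2}\d{6}[A-Z]\b"),  # NI-number-like identifiers
]

def is_sensitive(prompt: str) -> bool:
    """Flag prompts that appear to contain regulated data."""
    return any(p.search(prompt) for p in SENSITIVE_PATTERNS)

def route(prompt: str) -> str:
    """Keep sensitive prompts on the private model; send the rest to public AI."""
    return "private-model" if is_sensitive(prompt) else "public-api"

print(route("Summarise spend on card 1234567812345678"))  # private-model
print(route("Draft a festive marketing email"))           # public-api
```

In a real deployment the classification step would itself be a governed service rather than a handful of regular expressions, but the design principle is the same: the routing decision, not the model, is where data protection policy is enforced.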
  • WWW.COMPUTERWEEKLY.COM
    IT Sustainability Think Tank: The economic benefits of going green
We have witnessed an undeniable backlash against sustainability this year. Despite the news cycle being awash with coverage of the dire consequences of climate change, environmental advocates have struggled to make the case for a sustainable transition in 2024. In some parts of the world, politicians have weaponised action on the environment, peddling the misconception that sustainability is expensive, burdensome, and a threat to affordability and prosperity. Amid a cost-of-living crisis and rising global energy prices, this has resonated, weakening support for a swift end to fossil fuels.

The EU's resolve on climate policy is under pressure, too. Last month, European Commission president Ursula von der Leyen announced that the EU would simplify three core pieces of environmental legislation, reversing changes that were made only in the last Parliament. In this complex environment, it is not surprising that some businesses have become more hesitant to embrace sustainable alternatives, and have even revised their environmental, social and governance (ESG) targets, wary of the complexities of sustainability reporting, the perceived cost of implementing green solutions, and the impact on competitiveness.

However, the evidence tells a different story. A 2014 Harvard University study comparing 90 "High Sustainability" companies with 90 "Low Sustainability" companies over 18 years (1993-2010) found that the high-sustainability companies significantly outperformed their counterparts across key economic indicators, including proxies for economic growth and financial leverage. More recently, EY's 2023 Sustainable Value Study highlighted that delivering on sustainability initiatives has significant financial benefits, with 52% of those surveyed experiencing financial value exceeding their expectations.
Additionally, 63% of respondents witnessed product and brand value improvements that were better than expected. In the tech sector, Capgemini research shows that organisations scaling sustainable IT use cases have, for instance, achieved an average cost reduction of 12%. All of this unfolds at a moment when the urgency of action has never been clearer, with scientists warning we've now breached the all-important 1.5-degree limit on global warming, and with 2024 set to be the hottest year on record.

As we approach another complex year's end, we must ask: why isn't the message hitting home? To reinstate momentum behind action on climate change, it's clear a new approach is needed. Yes, policymakers must carefully manage the impact of green initiatives on businesses and communities. Still, the key to rebuilding support might be redefining how we talk about sustainability, its impact and its value. It's time to shift the focus from the perceived costs and complexities of sustainability to the immense opportunities and tangible advantages it presents, not just for the planet but also for businesses and the economy. Let's reframe the climate discussion and tell a more persuasive story about the measurable wins we can achieve: jobs, new partnerships, business growth, resilience and innovation. Sustainability and profitability aren't at odds; they are powerful partners.

Often at the forefront of change, the business community will be crucial to reinventing this narrative. In the past, organisations have been reactive to climate policy, pushed to respond by regulatory pressure, market demands, or reputational risk. But that is changing. I see this in my own interactions with customers and partners: many organisations no longer believe that implementing sustainable solutions just means doing the right thing.
They recognise the value of these initiatives as tools to achieve risk mitigation, commercial advantage, customer retention, productivity and efficiency gains, and improvements to the bottom line. Analysts are seeing this too: Forrester has forecast that organisations will transition from a reactive approach to sustainability commitments to a proactive one, as operational efficiencies and financial benefits eclipse ESG regulation as the key drivers for change. As Ursula von der Leyen rightly said in 2019, it's a strategy for growth that gives more back than it takes away. Unlocking its full potential, though, requires businesses to rethink their mindset, because true progress is rarely linear.

The circular economy, which reduces consumption by keeping resources and assets in use for as long as possible, is a perfect example of this in action. Take technology: as the digital economy grows, so do the number of devices we use and the volume of natural resources required to manufacture them. By extending the lifespan of tech assets through circular management practices, repair, refurbishment, and resale on secondary markets, organisations can directly lower the impact of their tech consumption.

However, while the environmental case easily stacks up, the economic story needs telling, too. At its heart, the circular economy is a financial model based on optimising resources, improving efficiency, and minimising waste, all key components of profitability. When a business implements circularity in its tech operations, it essentially says: we are committed to deriving the greatest value from all our digital assets while reducing waste.
Now, how could the CFO argue with that? I've seen that lightbulb moment time and time again, because when organisations understand that implementing circularity can drive down costs, improve efficiency and recoup value on tech investment, it's much easier to make the case for change. All business leaders need then are practical solutions they can implement at speed, and trusted partners to make it happen.

The next 12 months will undoubtedly bring more global challenges and roadblocks to progress on climate change. It will be up to savvy tech providers to keep championing the cause and clearly highlighting the full spectrum of business benefits sustainable business models can deliver: operational, financial, reputational, and beyond. For me, next year is about making sure everyone is on board, from our people to our partners and customers, so they understand just how powerful a circular economy for technology can be in helping organisations remain competitive with the latest technology while managing legacy tech in a way that recoups its value and minimises its environmental impact. We can only foster a shared understanding of its transformative potential by engaging in open and transparent dialogue about the challenges and opportunities sustainability can create. If organisations are armed with the information, evidence and tools to make the case for sustainable investment, positive change will certainly be on the horizon.

Read more from the IT Sustainability Think Tank
As the global transition towards developing low-carbon economies continues apace, Gartner shares its take on the actions enterprises must take now to navigate an increasingly volatile energy landscape.
The hype around AI is increasingly being matched with discussions about how the technology's adoption will affect the environment, so what can IT leaders do to ensure they keep the companies they work for at the forefront of innovation, without compromising the environment, or their firm's own corporate sustainability agenda?
  • WWW.COMPUTERWEEKLY.COM
    Computer Misuse Act reform gains traction in Parliament
News: Computer Misuse Act reform gains traction in Parliament
An amendment to the proposed Data (Access and Use) Bill that would right a 35-year-old wrong and protect security professionals from criminalisation is to be debated at Westminster.
By Alex Scroxton, Security Editor
Published: 13 Dec 2024 13:49

Cross-party parliamentarians will next week debate proposals that aim to fix a glaring flaw in the Computer Misuse Act 1990 (CMA), as momentum gathers behind the need to reform the nearly 35-year-old law. An amendment to the proposed Data (Access and Use) Bill, led by Conservative peer Lord Holmes and Liberal Democrat peer Lord Clement-Jones, that would override outdated aspects of the CMA that inadvertently criminalise good-faith, legitimate security activities, will now be debated in committee on Wednesday 18 December.

Created largely in response to a famous incident in which professional hackers and technology journalists broke into British Telecom's Prestel system in the mid-80s, the CMA received Royal Assent in June 1990, barely two months after Tim Berners-Lee and CERN made the world wide web publicly available for the first time. Although it has been frequently amended over the years to reflect the changing world of technology, the CMA still vaguely defines the offence of unauthorised access to a computer, which opponents have long argued inadvertently criminalises cyber security threat researchers and incident responders, and forces ethical hackers to work with one hand tied behind their back out of fear of prosecution. According to the CyberUp campaign, which has been pushing for reform for years, the CMA could be costing the UK economy up to £3.5bn.

The UK's outdated cyber laws are preventing our cyber security professionals from defending organisations effectively, Rob Dartnall, SecAlliance CEO, Crest UK chair and CyberUp representative, told Computer Weekly. In no other sector do security professionals face the risk of breaking the law simply for doing their jobs. Campaign research shows that nearly two-thirds of cyber professionals say the CMA hinders their ability to safeguard the UK, an untenable situation as cyber threats grow.

Holmes and Clement-Jones' amendment proposes a statutory defence for researchers who can demonstrate either a reasonable belief that the IT system owner would have consented to their work, or that the activity was strictly necessary for the detection of cyber crime. This would give British cyber professionals similar protections to those already in force in other European countries such as Belgium, Germany, France, Malta and the Netherlands, all of which have either recently updated their legal frameworks to address professional hacking, or already had more appropriate legal regimes.

Dartnall said the change was vital to fostering a safe environment for researchers and allowing them to play a more effective role in safeguarding digital systems and data in the UK, a need urgently highlighted by the National Cyber Security Centre (NCSC) in its recent Annual Review. We are delighted to see an amendment tabled that could bring the Computer Misuse Act into the 21st century by introducing a statutory defence. Updating this Act would represent a landmark moment for UK cyber security legislation, which is outdated when compared to the cyber threat landscape we face, he said. The last two years have seen unprecedented levels of critical vulnerabilities, ransomware breaches and third-party system breaches, all of which have had a massive effect on people's data privacy and the UK's economy. By introducing a statutory defence, the UK could protect legitimate cyber security professionals, strengthen its cyber defences, and reinforce its place as a cyber security leader. It is time we updated the law to fit the digital age, added Dartnall.
"With support from across parliament, we believe this amendment could be a catalyst for a change that would better protect the country," he added.

Timeline: Computer Misuse Act reform

January 2020: A group of campaigners says the Computer Misuse Act 1990 risks criminalising cyber security professionals and needs reforming.
June 2020: The CyberUp coalition writes to Boris Johnson to urge him to reform the UK's 30-year-old cyber crime laws.
November 2020: CyberUp, a group of campaigners who want to reform the Computer Misuse Act, finds 80% of security professionals are concerned that they may be prosecuted just for doing their jobs.
May 2021: Home secretary Priti Patel announces plans to explore reforming the Computer Misuse Act as calls mount for the 31-year-old law to be updated to reflect the changed online world.
June 2022: A cross-party group in the House of Lords proposes an amendment to the Product Security and Telecommunications Infrastructure Bill that would address concerns about security researchers or ethical hackers being prosecuted in the course of their work.
August 2022: A study produced by the CyberUp Campaign reveals broad alignment among security professionals on questions around the Computer Misuse Act, which it hopes will give confidence to policymakers as they explore its reform.
September 2022: The CyberUp coalition calls on Liz Truss to push ahead with needed changes to protect cyber professionals from potential prosecution.
January 2023: Cyber accreditation association Crest International lends its support to the CyberUp Campaign for reform of the Computer Misuse Act 1990.
February 2023: Westminster opens a new consultation on proposed reforms to the Computer Misuse Act 1990, but campaigners who want the law changed to protect cyber professionals are left disappointed.
March 2023: With the deadline for submissions to the government's consultation on reform of the Computer Misuse Act fast approaching, Bugcrowd's ethical hackers say cyber professionals need to make their voices heard.
November 2023: A group of activists who want to reform the UK's computer misuse laws to protect bona fide cyber professionals from prosecution are left disappointed by a lack of legislative progress.
July 2024: In the Cyber Security and Resilience Bill introduced in the King's Speech, the UK's new government pledges to give regulators more teeth to ensure compliance with security best practice, and to mandate incident reporting.
July 2024: The CyberUp Campaign for reform of the 1990 Computer Misuse Act launches an industry survey inviting cyber experts to share their views on how the outdated law hinders legitimate work.
  • WWW.COMPUTERWEEKLY.COM
    Decking the halls with AI: the evolution of managed services
As we celebrate the festive season and look ahead to 2025, it's clear that artificial intelligence (AI) and automation are transforming the way businesses operate. These technologies are not only driving efficiency and saving time, but also enabling a shift towards more strategic, value-driven work.

For managed services teams in particular, AI and automation have traditionally been used to streamline processes, eliminate human error and enhance customer value. But the role of AI is expanding rapidly in this space, snowballing to elevate standards of service delivery and give teams a competitive edge.

For years, AI has primarily been employed to automate repetitive tasks, freeing up human resources for more complex and strategic activities. Automation has been invaluable in reducing the time spent on routine processes, minimising errors and ensuring consistency. Managed services teams have leveraged AI to manage success metrics and deliver services efficiently, even when under pressure to meet rising customer expectations. These traditional applications of AI have played a critical role in delivering quick responses and maintaining high levels of customer satisfaction.

As we wrap up warm for the festive season, it's worth reflecting on how AI has ensured smoother operations and happier customers throughout the year. The efficiency gains and reduction of errors achieved through automation have become vital components in the ongoing drive for service excellence.

Today, managed services teams are using AI beyond simple automation. AI is increasingly being integrated into sophisticated applications, from predictive analytics to customer interaction and proactive maintenance. Advanced AI tools now analyse vast datasets, providing businesses with insights that allow them to anticipate issues before they arise and optimise their operations accordingly.

Looking towards 2025, the future of AI in managed services is filled with potential. AI will continue to enhance customer engagement through personalised services, enabling businesses to offer tailored experiences that were previously impossible. Additionally, AI will play a central role in improving decision-making processes by providing deeper insights into operational data. This will give businesses the power to make more informed, data-driven decisions and further elevate the customer experience.

The evolution of AI promises a new era of intelligent service delivery. Predictive analytics and proactive maintenance will become industry standards. As AI becomes increasingly integrated into everyday operations, the way managed services are delivered will be transformed, with service providers offering more anticipatory, data-driven solutions.

Despite the clear benefits, there are still barriers to overcome to enable the widespread adoption of AI in managed services. A key challenge is bringing everyone on board. While younger employees may be enthusiastic about AI, others may be sceptical or resistant, fearing that AI might threaten their roles or make their jobs obsolete. Addressing employee concerns with unity and collaboration, much like the spirit of the holiday season, is essential. Emphasising how AI can enhance, rather than replace, employee roles will be critical for a smooth transition to AI-driven workflows. Training and upskilling initiatives will also help highlight how AI can make jobs more engaging by taking over mundane tasks.
When employees see the tangible benefits of AI in their day-to-day work, they will be more likely to embrace these changes.

The 12 Days of AI

On the First Day of AI, we explore how AI is being used in marketing, the benefits and key use cases, as well as concerns and how marketers can best take advantage of the technology.
On the Second Day of AI, we look at the importance of truly understanding what AI is to enable true organisational transformation.
On the Third Day of AI, we explore some of the key trends to keep an eye on, and prepare for, in 2025.
On the Fourth Day of AI, we discuss the value of adopting AI responsibly, and outline how businesses can build responsible adoption into their plans.
On the Fifth Day of AI, we explore how AI is reshaping HR: boosting productivity, addressing concerns, and preparing organisations for the future.
On the Sixth Day of AI, we explore how leveraging AI and cloud can enhance business performance, and share tips for successful implementation.
On the Seventh Day of AI, we explore the double-edged sword of AI in cyber security, and how businesses can protect themselves against the cyber grinches.
On the Eighth Day of AI, we explore the key considerations and strategic frameworks essential for extracting maximum value from AI projects.
On the Ninth Day of AI, we explore the critical role data plays in AI implementation, and the key steps business leaders must take to prepare their data for a successful AI future.

As we look ahead to 2025, the role of AI in managed services is poised to expand even further. AI will increasingly be relied upon to handle customer alerts, analyse large volumes of data from monitoring platforms, and generate more accurate and insightful reports. This will allow customer success managers to engage more deeply with their clients, fostering stronger relationships and driving higher renewal rates.

The journey towards full AI integration in managed services may be challenging, but the rewards of increased efficiency, customer satisfaction and strategic insight make it a worthwhile endeavour. As we move into the new year, embracing AI's potential will be crucial for organisations looking to stay ahead in a rapidly evolving industry.

Ben Clarke is director of managed services support at ANS, a digital transformation provider and Microsoft's UK Services Partner of the Year 2024. Headquartered in Manchester, it offers public and private cloud, security, business applications, low code, and data services to thousands of customers, from enterprise to SMB and public sector organisations.
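The shift the piece describes, from reactive automation to predictive, proactive maintenance on monitoring data, can be illustrated with a toy anomaly check. The function name, window size, threshold and latency figures below are invented for illustration, not taken from any vendor's tooling:

```python
import statistics

def flag_anomalies(samples, window=5, k=3.0):
    """Flag readings that deviate from the trailing window's mean by more
    than k standard deviations: a toy stand-in for the kind of predictive
    monitoring described above."""
    alerts = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history) or 1e-9  # avoid divide-by-zero on flat data
        if abs(samples[i] - mean) > k * stdev:
            alerts.append(i)
    return alerts

# Steady response times, then a sudden spike a proactive system would catch.
latencies = [102, 99, 101, 100, 98, 103, 101, 350]
print(flag_anomalies(latencies))  # the spike at index 7 is flagged
```

Real managed-services platforms apply far richer models, but the principle is the same: learn the normal baseline and alert before a deviation becomes an outage.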
  • WWW.COMPUTERWEEKLY.COM
    Decoding the end of the decade: What CISOs should watch out for
It's that time of year when we in the industry attempt to be cyber soothsayers. A tall order, even more so when you're trying to look ahead to 2030.

The cyber security landscape is in a state of flux, and the past five years have kept us on our toes. As I do my best to peer into the crystal ball of the late 2020s, it's clear that the challenges facing CISOs and their teams will become even more complex. From the persistent threat of ransomware to the rise of cyber sabotage, the threat landscape is undergoing a big transformation. And the implications go beyond just the technical: the potential for personal liability for security leaders is also a looming issue that could reshape our roles.

Here are my thoughts on the exciting and chaotic opportunities that we might see emerging in the next five years.

Ransomware will persist, but a blurring of cyber and physical sabotage attacks, targeting critical infrastructure specifically, may become more prevalent. This is due to the blurred lines between state-sponsored and criminal activities.

Sabotage in cyber security means intentionally damaging or manipulating digital data or systems with the intent to disrupt operations, cause damage, or compromise security. Cyber attackers may aim to disrupt operations and compromise the integrity of computer systems and networks. This malicious activity can have severe consequences, ranging from temporary disruption to serious long-term issues, financial losses and data breaches.

Sabotage is interesting as it represents a departure from where we were five to 10 years ago in the cyber security landscape. Previously, cyber security professionals didn't have to consider sabotage as a primary threat, but that's changing. The interesting thing here is that cyber sabotage isn't new, but the impact it can have is increasing and will continue to do so.

Recent incidents suggest sabotage is becoming more of a concern, exemplified by the Nord Stream gas pipeline attacks and a recent fibre optic cable incident in the Baltic Sea. These types of physical attacks on critical infrastructure are being viewed as potential acts of sabotage. Sabotage is quite a political issue, which means cyber security professionals may need to be careful in the coming years to avoid getting drawn into sensitive geopolitical matters.

The advent of new technologies like artificial intelligence (AI) will introduce new risks and unintended consequences that organisations will need to manage, such as data ownership and privacy issues. If we start to make key decisions using AI, we need to ensure those decisions have robust and explainable safeguards around them. One exciting area is the UK's AI Safety Institute and the way it is looking at the safe usage of frontier AI models.

AI is a powerful technology that can be used both beneficially and maliciously. While it can enable efficiency gains and help defend against threats, it also has the potential for misuse. The growth of these technologies will introduce new and accidental risks and consequences that organisations will need to manage.

What if an organisation puts all its data into an AI-enabled system, and then the system fails or the company goes bankrupt? There could be issues around who owns the data and what happens to it, such as it being sold off to the highest bidder.
Take 23andMe: who owns that data now? We need to carefully consider the ethical implications of adopting AI and other emerging technologies to avoid negative outcomes like these.

The Computer Weekly Security Think Tank looks ahead

Mike Gillespie and Ellie Hurst, Advent IM: CISOs will face growing challenges in 2025 and beyond.
Elliot Rose, PA Consulting: The most pressing challenges for CISOs and cyber security teams.
Pierre-Martin Tardif, ISACA: Six trends that will define cyber through to 2030.
Stephen McDermid, Okta: In 2025: Identities conquer, and hopefully unite.
Deepti Gopal, Gartner: CISOs: Don't rely solely on technical defences in 2025.

Cyber security is something people are talking about at the dinner table. I can't decide whether it's good or sobering that my mum now talks about it. This increased awareness and attention on cyber security is leading to a situation where CISOs are held to a higher standard and face greater pressure to make the right decisions.

The decisions a CISO makes are reflective of risks, and usually we're just trying to stop someone from creating an accidental problem. If we make the wrong call, are CISOs accountable and responsible from a legal perspective?

There is an ongoing discussion around whether CISOs should have personal liability insurance, much as company directors do. This is because the decisions made by CISOs on behalf of the organisation can be seen as risk decisions, and if those decisions turn out to be wrong, the CISO could potentially be held accountable. We might start to see CISOs held legally accountable, either in a civil suit or even a criminal case, if they make a decision that leads to a security incident or breach, much like the Uber case.

While my crystal ball might be a bit hazy for the next five years, one thing is clear: CISOs and security teams will face a variety of challenges. With more regulations, a heightened threat environment and the potential for cyber sabotage, we'll need to strike a careful balance.

But it's not all doom and gloom! There's plenty to be optimistic about. The growing awareness of cyber security opens doors to attract diverse talent and foster greater industry collaboration. Plus, AI promises more efficient and better defences against threats. By tackling potential risks head-on, we can embrace the positives of these trends and be well prepared for the future.
  • WWW.COMPUTERWEEKLY.COM
    Tech workers say diversity and inclusion efforts are working
News
The status of diversity in the technology sector remains up for debate, but some tech industry workers are seeing improvements, says Tenth Revolution Group.
By Clare McDonald, Business Editor. Published: 12 Dec 2024 15:39

Tech workers claim that where organisations are working to improve diversity, equity and inclusion (DEI), those efforts are making a difference, according to a survey.

As part of its Careers and Hiring Guide 23/24, tech talent firm Tenth Revolution Group gathered data from global employees across the Amazon Web Services, Microsoft 365, Azure and Business Applications, NetSuite and Salesforce ecosystems, and found that 71% of tech workers claimed their workplace promotes DEI and that these efforts are having a positive impact.

Caroline Fox, global ED&I strategy lead for Tenth Revolution Group, said: "Most tech employers are investing resources in ED&I, and the majority of tech professionals are affirming its positive impact. And while the numbers do drop slightly, it's especially great to see that it's still a majority when we filter specifically for marginalised communities across the range of tech ecosystems. My hope is that more data like this will enable us to keep focusing our efforts on implementation and meaningful inclusion across the sector."

Diversity in the tech sector has been a focus of debate for many years, and progress towards improving diversity in the UK's technology industry has been slow. Figures from BCS, for example, found that in the four years to 2022, the percentage of women in tech roles grew by only 4%, and the number of people from black, Asian and minority ethnic (BAME) backgrounds grew by just 2% in the same timeframe.

Read more about diversity in tech

Community-focused not-for-profit Tech Cornwall has developed a skills programme to help people from diverse backgrounds move into tech careers.
Research by organisations Women in Tech North and Tech Returners finds that women believe developing alternative routes into tech jobs will help close the industry's diversity gap.

The slow growth has in some cases been attributed to company attitudes towards DEI practices, such as focusing on only one group, or addressing hiring practices without taking retention into consideration.

Just under 70% of those involved in Tenth Revolution's research said their employer is investing in DEI initiatives, and while 71% of employees as a whole feel these are working, this number fluctuates depending on the focus of the efforts. Elsewhere in the industry, attitudes towards DEI are mixed, with some saying firms are doing well to increase the diversity of tech teams, while others claim budget cuts and a lack of leadership buy-in are preventing these initiatives from making a difference.

Tenth Revolution found 69% of tech professionals believe their workplace is an equal opportunities employer for tech workers who are differently abled. The outlook is much the same for tech workers who are people of colour, with 68% saying their employer is supportive of people of colour in the workplace. The number drops even further when it comes to supporting women in tech, with 62% saying their employer promotes equality.

Inclusion and internal culture can play a huge role in retaining diverse talent, and this is especially true of women. Data from Tech Talent Charter found a lack of flexible working practices stands in the way of women choosing to stay in a tech role, or in the tech sector as a whole, with 40% of women in tech saying their future career choices will depend on their care responsibilities.

There has been an unfortunate trend of DEI practices being scaled back over the past year, with many industry organisations stating that diversity and inclusion practices should be intentional and well planned rather than for show. There is widespread opinion that with the rise of artificial intelligence (AI) technology and a growing skills gap, the push for diversity in the sector is now more important than ever.
  • WWW.COMPUTERWEEKLY.COM
CISOs: Don't rely solely on technical defences in 2025
Threats have become more sophisticated, unpredictable and harder to pin down. Attackers don't just exploit technical weaknesses; they target human behaviour, organisational blind spots, and even regulatory loopholes. From spear phishing and deepfake fraud to misinformation generated by artificial intelligence (AI), cyber criminals are using emerging technologies to launch attacks with precision and ease. This means the old playbook of relying solely on technical defences isn't enough anymore.

Organisations need a shift in mindset: prioritising secure human behaviours, leveraging technologies like GenAI, and addressing business risks as much as external threats. The scope of cyber security is no longer purely technical; it is also human-centric. CISOs also need to consider the following trends in their security strategies for the near future.

In 2024, one of the more subtle yet critical challenges that emerged was the rise of malinformation: deliberate misinformation aimed at manipulating and destabilising. Battling misinformation and reputational threats is becoming a top-line issue for all. By 2028, organisations will spend over $500 billion annually addressing malinformation, with impacts felt across marketing and cyber security budgets alike. Deepfake fraud, social engineering and AI-driven scams are driving the need for enterprise-wide programmes led by CISOs. Companies must prioritise investments in resilience measures such as chaos engineering to prepare for these challenges.

Zero-trust has become a cyber security cornerstone, but its application has limits. By 2026, 75% of organisations will exclude legacy systems and operational environments from zero-trust strategies due to their unique constraints. Adapting zero-trust principles to non-IT systems, like production lines or older platforms, will be critical for organisations looking to expand their defences while maintaining operational efficiency.

Cyber security leaders are facing increased accountability. By 2027, two-thirds of Global 100 companies will extend directors' and officers' insurance to their cyber security leaders, reflecting heightened scrutiny of their roles. Clarifying the CISO role and aligning it with regulatory expectations is vital to manage these risks effectively.

Insider threats remain a significant challenge, particularly in an era of remote and hybrid work. By 2027, 70% of organisations will combine data loss prevention and insider risk management with identity and access systems. This integrated approach will help businesses better identify and mitigate potential threats while simplifying their security frameworks.

The Computer Weekly Security Think Tank looks ahead

Mike Gillespie and Ellie Hurst, Advent IM: CISOs will face growing challenges in 2025 and beyond.
Elliot Rose, PA Consulting: The most pressing challenges for CISOs and cyber security teams.
Pierre-Martin Tardif, ISACA: Six trends that will define cyber through to 2030.
Stephen McDermid, Okta: In 2025: Identities conquer, and hopefully unite.

GenAI is set to make a practical but measured impact on cyber security operations. By 2028, AI-driven solutions will allow 50% of entry-level cyber security roles to be filled without requiring specialised education, helping organisations bridge talent shortages. In addition, organisations integrating GenAI into employee training programmes and security workflows could see up to a 40% reduction in employee-driven incidents by 2026. While GenAI offers promising tools for improving efficiency and education, it should be viewed as a complement to, not a replacement for, broader security strategies.

As low-code and no-code tools grow in popularity, application security is moving closer to the teams building the software. By 2027, 30% of organisations will empower non-technical professionals to manage aspects of app security, supported by new roles like application security product managers. Providing these teams with the right resources and training will be essential to maintaining robust security practices in a more decentralised environment.

2024 underscored the growing personal and legal stakes for cyber security leaders. As the threat landscape evolves, the lessons of 2024 underline the critical need for organisations to be agile, innovative and human-focused in their strategies. While the potential of GenAI is undeniable, its success will hinge on careful governance and targeted use. At the same time, the growing impact of threats like malinformation and personal liability underscores the need for new tools, strategies and insurance protections.

Ultimately, cyber security in 2025 will require security and risk management leaders to act decisively and collaboratively. Those who embrace this complexity and prioritise building secure behaviours within their teams will be the ones who stay ahead and succeed in 2025.

Deepti Gopal is director analyst at Gartner.
  • WWW.COMPUTERWEEKLY.COM
    Police not ruling any person or crime out of Post Office scandal investigation
  • WWW.COMPUTERWEEKLY.COM
    Russia focuses cyber attacks on Ukraine rather than West despite rising tension
Russia is focusing its cyber attacks against Ukraine, rather than stepping up attacks against the West in response to decisions by the US and the UK to allow Ukraine to use long-range missiles on Russian territory.

In an interview with Computer Weekly, Paul Chichester, director of operations at the National Cyber Security Centre (NCSC), part of the Government Communications Headquarters (GCHQ), said that Russia had not used cyber attacks to respond tactically to increasing military support for Ukraine. Russian cyber operations have been at a high level since the start of the Ukraine conflict, but Russia's primary purpose remains to support military operations on the Ukraine battlefield, he said.

Former NCSC CEO and founder Ciaran Martin, now a director of security skills and training body the SANS Institute, said initial predictions that the Ukraine war would lead to a concerted cyber campaign against the West had not materialised.

"Going into the war, there were two big predictions," he told Computer Weekly. "One was that Russia would use heavy cyber effects against Ukraine. They have tried that, but the impact can be debated. But the other assumption was they would try much more aggressive cyber blips, if you like, against Western allies of Ukraine. But no serious scholar of cyber security thinks they've done [that]. It's observably untrue."

Salt Typhoon

The NCSC said it is keeping a watching brief on attacks by Chinese hacking operation Salt Typhoon, which has hit US telecoms networks, including AT&T, Verizon and Lumen Technologies, placing the personal information of millions of people at risk. The attack, which has reportedly been under way for at least two years, has given Chinese hackers access to unencrypted messages and voice calls, and has enabled them to target the personal information of senior political figures in the US.

Chichester said the British intelligence services were trying to assess the impact of the threat on the UK. "We're still learning what that threat is," he said. "It appears to be very focused on the US at the moment, but that doesn't mean we're complacent. We will continue to look at the UK angles to that and respond to them as and when they occur."

The UK's Product Security and Telecommunications Infrastructure Act 2022, which came into force this year, placed legal duties on manufacturers of electronic and home devices to protect consumers and businesses from cyber attacks. Chichester said that the act, together with telecoms security regulations being phased in over the next couple of years, aims to design out vulnerabilities that could be exploited by attacks like Salt Typhoon.

"I think that the UK has been considering these kinds of vulnerabilities for some significant time, and has brought forward legislation and regulations with [telecoms regulator] Ofcom and others to absolutely try and increase resilience against those kinds of attacks," he said. "We all know that defenders make mistakes, and that's all an attacker sometimes needs."
But genuinely a lot of the things that are being required of operators in the UK are things that I know the US are looking at, and other countries are as well.Read more about Salt TyphoonChinese hacking of US telecom networks raises questions about the exploitation by hostile hacking groups of government backdoors to provide lawful access to telecoms services.Following the widespread Salt Typhoon hacks of US telecoms operators including AT&T and Verizon, CISA and partner agencies have launched refreshed security guidance for network engineers and defenders alike.Martin said that UK telecoms companies and the NCSC were aware of weaknesses and vulnerabilities in the telecoms network, and it was a question of how quickly they can be rectified before they can be exploited by threat actors.I think there are certain advantages that allow the UK to try to manage Salt Typhoon-style operations which arent available to allied countries, he said.Chichester said that much of the tradecraft used by cyber security attackers in Salt Typhoon and other attacks had been anticipated by government and industry ahead of time.Although its not possible to know every attack plan, simple strategies such as telcos separating operational and management infrastructure will reduce the risks.Just putting certain requirements and security around the administration of those networks cuts off a lot of vectors, he said. You might not know how the adversary is going to do it, but if you architect it in a certain way, then thats what gives you resilience.The UK government is working with telcos collaboratively to develop security regulations and technologies to block a variety of potential attacks, said Chichester.This has led to a back and forth between the NCSC and telcos, to see what might work, and what security measures are possible.One long-running debate is whether governments are right to attribute hacking attacks to the nation state responsible. 
Former NCSC CEO Martin said that where the identity of a nation-state hacker was known, it should be disclosed unless there were good reasons not to do so.Chichester said that identifying an attacker publicly can make it easier to get the message across to companies that they need to take action.At the end of the day, if you want to communicate to people, weve got to make it about people, either the adversary or the victim, he said. Youve got to tell a story. I think [naming an attacker] is a really powerful communications tool that we would like to use where we can. And so I think it helps defenders.It helps you kind of think and visualise, because, you know, as an organisation, OK, do I care about Russia, China or Iran? added Chichester.The cyber security director said the NCSC and the UK government publicly attributed cyber attacks for a variety of reasons, including to build coalitions and increase the political cost of cyber attacks.I dont think anybody genuinely thinks that attributions or public indictments or sanctions will ever prevent a state from doing this, but that is not what its about, he said.But when an attribution is accompanied by a court indictment naming individuals responsible for a hacking operation, that can be a powerful tool, said Martin. That does give you credibility, he added. It really does.
  • WWW.COMPUTERWEEKLY.COM
    iOS vuln leaves user data dangerously exposed
A bypass flaw in the FileProvider Transparency, Consent and Control (TCC) subsystem within Apple's iOS operating system could leave users' data dangerously exposed, according to researchers at Jamf Threat Labs.

Assigned CVE-2024-44131, the issue was successfully patched by Apple in September 2024, and Jamf, whose researchers are credited with its discovery, is formally disclosing it today. It also affects macOS devices, although Jamf's researchers have focused on the mobile ecosystem, since these estates are more often neglected during updates.

CVE-2024-44131 is of particular interest to threat actors because, if successfully exploited, it can enable them to access sensitive information held on the target device, including contacts, location data and photos.

TCC is a critical security framework, the Jamf team explained, which prompts users to grant or deny requests from specific applications to access their data, and CVE-2024-44131 enables a threat actor to sidestep it completely if they can convince their victim to download a malicious app.

"This discovery highlights a broader security concern as attackers focus on data and intellectual property that can be accessed from multiple locations, allowing them to focus on compromising the weakest of the connected systems," said the team. "Services like iCloud, which allow data to sync across devices of many form factors, enable attackers to attempt exploits across a variety of entry points as they look to accelerate their access to valuable intellectual property and data."

Open to abuse

This is not the first time Apple's TCC subsystem has been shown to be at risk of compromise. Earlier in 2024, Cisco Talos researchers detailed eight vulnerabilities in Microsoft applications, including Excel, PowerPoint and Teams, that enable a threat actor to exploit TCC by abusing the applications' enhanced privileges to slip a malicious code library into an application's running space. The researcher who discovered those issues said that because Apple's operating systems trust applications to self-police their permissions, a failure in this responsibility effectively breaks down the entire permission model.

At the core of the newly disclosed problem sits the interaction between Apple's Files.app and the FileProvider system process when managing file operations.

In the exploit demonstrated, when an unwitting user moves or copies files or directories with Files.app within a directory that a malicious app running in the background can access, the attacker gains the ability to manipulate a symbolic link, or symlink: a file that exists solely to specify a path to a target file.

File operation APIs will usually check for symlinks, but these checks typically only cover the final portion of the path before the operation begins. If a symlink appears earlier in the path, which is the case in this exploit chain, the operation bypasses the checks.

In this way, the attacker can use the malicious app to abuse the elevated privileges granted to FileProvider to move or copy data into a directory they control without being spotted. They can then hide these directories, or upload them to a server they control. Crucially, said the Jamf team, this entire operation occurs without triggering any TCC prompts.

The most effective defence against this flaw is to apply the patches from Apple, which have been available for a couple of months.
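The path-checking weakness described above can be illustrated with a short sketch. This is illustrative Python, not Apple's code, and all file names are hypothetical: a check that only inspects the final path component misses a symlink substituted earlier in the path, while a check that walks every component below a trusted base catches it.

```python
import os
import tempfile

def final_component_is_symlink(path: str) -> bool:
    # Naive check: os.path.islink() only reports on the last component
    # and silently follows any symlinks earlier in the path.
    return os.path.islink(path)

def any_component_is_symlink(base: str, relpath: str) -> bool:
    # Robust check: test each component of the relative path in turn.
    current = base
    for part in relpath.split(os.sep):
        current = os.path.join(current, part)
        if os.path.islink(current):
            return True
    return False

# Set up a directory tree where an intermediate component is a symlink,
# mimicking an attacker swapping a directory for a link they control.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "real", "docs"))
open(os.path.join(root, "real", "docs", "file.txt"), "w").close()
os.symlink(os.path.join(root, "real"), os.path.join(root, "link"))

attacked = os.path.join(root, "link", "docs", "file.txt")
naive = final_component_is_symlink(attacked)   # False: the check is bypassed
robust = any_component_is_symlink(root, os.path.join("link", "docs", "file.txt"))  # True
print(naive, robust)
```

The robust variant is slower, since it stats every prefix, which is one reason real file-operation APIs are tempted to shortcut it, as described in the exploit chain above.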
Security teams may also wish to implement additional monitoring of application behaviour and endpoint protection.

Jamf's strategy vice-president Michael Covington warned that because the updates also included support for Apple Intelligence, a series of artificial intelligence (AI) features for iOS devices, wariness around this feature might have led some organisations to hold off applying the updates containing the necessary patch, leaving the attack vector open to exploitation.

"This discovery is a wake-up call for organisations to build comprehensive security strategies that address all endpoints," said the team. "Mobile devices, as much as desktops, are critical parts of any security framework. Extending security practices to include mobile endpoints is essential in an era where mobile attacks are increasingly sophisticated."

Read more about Apple security

It can be difficult for Apple admins to adapt to every new OS release and the respective compliance changes. That's where the macOS Security Compliance Project comes into play.

There are lots of universal security controls that can apply to any type of desktop, but IT teams need to look at the specific features native to desktops such as macOS.

Macs are known for their security, but that doesn't mean they're safe from viruses and other threats. IT teams can look into third-party antivirus tools to bolster macOS security.
  • WWW.COMPUTERWEEKLY.COM
    Dangerous CLFS and LDAP flaws stand out on Patch Tuesday
Microsoft has issued fixes for 71 new Common Vulnerabilities and Exposures (CVEs) to mark the final Patch Tuesday of 2024, with a solitary zero-day that enables privilege elevation through the Windows Common Log File System (CLFS) driver stealing the limelight.

Assigned the designation CVE-2024-49138 and credited to CrowdStrike's Advanced Research Team, the flaw stems from a heap-based buffer overflow in which improper bounds checking lets an attacker overwrite memory in the heap. It is considered relatively trivial to exploit by an attacker, who can then execute arbitrary code and gain system-level privileges that could be used to mount deeper and more impactful attacks, such as ransomware. Microsoft said it had observed CVE-2024-49138 being exploited in the wild.

"The CLFS driver is a core Windows component used by applications to write transaction logs," explained Mike Walters, president and co-founder of patch management specialist Action1. "This vulnerability enables unauthorised privilege elevation by manipulating the driver's memory management, culminating in system-level access, the highest privilege in Windows. Attackers gaining system privileges can perform actions such as disabling security protections, exfiltrating sensitive data, or installing persistent backdoors," he said.

Walters explained that any Windows system dating back to 2008 that uses the standard CLFS component is vulnerable to this flaw, making it a potential headache across enterprise environments if not addressed quickly.

"The vulnerability is confirmed to be exploited in the wild, and some information about the vulnerability has been publicly disclosed, but that disclosure may not include code samples," said Ivanti vice-president of security products Chris Goettl. "The CVE is rated Important by Microsoft and has a CVSSv3.1 score of 7.8. Risk-based prioritisation would rate this vulnerability as Critical, which makes the Windows OS update this month your top priority."

In a year that saw Microsoft push over 1,000 bug fixes across 12 months, the second-highest volume ever after 2020, as Dustin Childs of the Zero Day Initiative observed, December 2024 will stand out for a notably high volume of Critical vulnerabilities: 16 in total and all, without exception, leading to remote code execution (RCE).

Nine of these vulnerabilities affect Windows Remote Desktop Services, while three are to be found in the Windows Lightweight Directory Access Protocol (LDAP), two in Windows Message Queuing (MSMQ), and one apiece in Windows Local Security Authority Subsystem Service (LSASS) and Windows Hyper-V.

Of these, it is CVE-2024-49112 in Windows LDAP that probably warrants the closest attention, carrying an extreme CVSS score of 9.8 and affecting all versions of Windows since Windows 7 and Server 2008 R2. Left unaddressed, it allows an unauthenticated attacker to achieve RCE on the underlying server. LDAP is commonly seen on servers acting as Domain Controllers in a Windows network, and the feature needs to be exposed to other servers, and clients, in an environment in order for the domain to function.

Immersive Labs principal security engineer Rob Reeves explained: "Microsoft has indicated that the attack complexity is low and authentication is not required. Furthermore, they advise that exposure of this service either via the internet or to untrusted networks should be stopped immediately.

"An attacker can make a series of crafted calls to the LDAP service and gain access within the context of that service, which will be running with System privileges," said Reeves. "Because of the Domain Controller status of the machine account, it is assessed this will instantly allow the attacker to get access to all credential hashes within the domain. It is also assessed that an attacker will only need to gain low-privileged access to a Windows host within a domain, or a foothold within the network, in order to exploit this service, gaining complete control over the domain."

Reeves told Computer Weekly that threat actors, particularly ransomware gangs, will be keenly trying to develop exploits for this flaw in the coming days, because taking complete control of a Domain Controller in an Active Directory environment can get them access to every Windows machine on that domain.

"Environments which make use of Windows networks using Domain Controllers should patch this vulnerability as a matter of urgency and ensure that Domain Controllers are actively monitored for signs of exploitation," he warned.

Read more about Patch Tuesday

November 2024: High-profile vulns in NTLM, Windows Task Scheduler, Active Directory Certificate Services and Microsoft Exchange Server should be prioritised from November's Patch Tuesday update.

October 2024: Stand-out vulnerabilities in Microsoft's latest Patch Tuesday drop include problems in Microsoft Management Console and the Windows MSHTML Platform.

September 2024: Four critical remote code execution bugs in Windows and three critical elevation of privilege vulnerabilities will keep admins busy.

August 2024: Microsoft patches six actively exploited zero-days among over 100 issues during its regular monthly update.

July 2024: Microsoft has fixed almost 140 vulnerabilities in its latest monthly update, with a Hyper-V zero-day singled out for urgent attention.

June 2024: An RCE vulnerability in a Microsoft messaging feature and a third-party flaw in a DNS authentication protocol are the most pressing issues to address in Microsoft's latest Patch Tuesday update.

May 2024: A critical SharePoint vulnerability warrants attention this month, but it is another flaw that seems to be linked to the infamous Qakbot malware that is drawing attention.

April 2024: Support for the Windows Server 2008 OS ended in 2020, but four years on there's a live exploit of a security flaw that impacts all Windows users.

March 2024: Two critical vulnerabilities in Windows Hyper-V stand out on an otherwise unremarkable Patch Tuesday.

February 2024: Two security feature bypasses impacting Microsoft SmartScreen are on the February Patch Tuesday docket, among more than 70 issues.

January 2024: Microsoft starts 2024 with another slimline Patch Tuesday drop, but there are some critical vulnerabilities to be alert to, including a number of man-in-the-middle attack vectors.

Finally, one little-regarded bug stands out this month: a flaw in Microsoft Muzic, tracked as CVE-2024-49063.

"The Microsoft Muzic AI project is an interesting one," observed Ivanti's Goettl. "CVE-2024-49063 is a remote code execution vulnerability in Microsoft Muzic. To resolve this CVE, developers would need to take the latest build from GitHub to update their implementation." The vulnerability stems from deserialisation of untrusted data, leading to remote code execution if an attacker can create a malicious payload to execute.

For those unfamiliar with the project, Microsoft Muzic is an ongoing research project looking at understanding and generating music using artificial intelligence (AI). Some of the project's features include automatic lyric transcription, song writing and lyric generation, accompaniment generation, and singing voice synthesis.
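Deserialisation of untrusted data is dangerous because, in some formats, the payload can describe code to run rather than just data to load. This sketch, which is illustrative and not Muzic's actual code, shows the generic pattern using Python's standard-library pickle module, and why a data-only format such as JSON is the safer default for untrusted input.

```python
import json
import pickle

# Illustrative only: a pickle payload can instruct the deserialiser to
# call an arbitrary function. A real attacker would substitute something
# far worse than this harmless arithmetic expression.
class Malicious:
    def __reduce__(self):
        # Tells pickle: "on load, call eval() with this string"
        return (eval, ("2 + 2",))

payload = pickle.dumps(Malicious())
result = pickle.loads(payload)   # attacker's code executes during deserialisation
print(result)                    # 4 -- the embedded expression ran

# Safer: JSON can only ever yield plain data, never executable objects.
track = json.loads('{"title": "demo", "bpm": 120}')
print(track["bpm"])              # 120
```

This is why the advice above is to pull the patched build rather than sanitise inputs: once a deserialiser is allowed to reconstruct arbitrary objects, no amount of input filtering reliably makes hostile payloads safe.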
  • WWW.COMPUTERWEEKLY.COM
    AI and cloud: The perfect pair to scale your business in 2025
Artificial intelligence (AI) and cloud technology are set to be the driving forces behind business growth and innovation in 2025. Together, they offer transformative potential to boost efficiency, drive innovation and create scalable solutions. However, unlocking these benefits requires careful planning, governance and security measures. As the new year draws nearer, now is the time for business leaders to get started.

By adopting a strategic approach, business leaders can begin with manageable steps, scale swiftly, and confidently navigate the dynamic, AI-powered future ahead.

AI requires immense computational power, making cloud platforms such as AWS, Azure and Google Cloud ideal for scaling AI projects. These platforms provide on-demand access to advanced resources without the need for costly hardware, enabling businesses to experiment and scale solutions as needed.

But the relationship between AI and the cloud extends beyond infrastructure. Cloud technology enables your business to experiment with AI applications, test new ideas, and expand them as needed. This flexibility allows AI to move from concept to scalable solution, growing alongside your business and its evolving needs.

The combination of AI and the cloud is reshaping industries across the board. From retail to healthcare, organisations are already seeing the benefits of AI-powered cloud solutions. For example, Spotify uses AI in the cloud to personalise music recommendations for millions of users, processing vast amounts of data to create tailored experiences that enhance engagement. Meanwhile, the NHS leverages AI-powered cloud tools to predict hospital admissions and optimise resources, ultimately reducing wait times and improving patient outcomes.

By integrating AI with the cloud, businesses can tackle more complex challenges, automate processes and improve customer engagement. However, to realise this potential, businesses need to ensure they have the right foundations in place. To fully leverage AI and the cloud, it's essential to set your business up for success with the right governance, security and strategic approach. Let's break this down further.

Get governance right

AI thrives on data, but without proper governance, you risk data breaches, non-compliance, or even misuse of AI outputs. Implementing clear policies and leveraging cloud tools for access control and encryption ensures that your AI remains safe and compliant.

For example, Microsoft's Copilot integrates with cloud-based services like Office 365 to enhance productivity. However, without adequate data governance, there's a risk of exposing sensitive company information. Establish clear policies on data access, usage and security controls to ensure your AI operates safely and in compliance with regulations such as GDPR. Aligning governance strategies to cloud platforms is key, as cloud environments offer tools for managing permissions, access control and encryption seamlessly.

Prioritise security

As AI processes increasingly sensitive data, robust security measures in the cloud are essential. Cloud platforms are built to support evolving security needs, offering continuous monitoring and dynamic threat protection to ensure that your AI systems stay secure as they scale.

Stay agile

Both AI and cloud technologies are evolving rapidly, so businesses must remain adaptable to keep up with new trends and tools. Cloud platforms enable your business to experiment with AI solutions, test applications, and adjust strategies quickly, without the high costs and long lead times of traditional infrastructure upgrades. This flexibility allows companies to refine their AI systems in real time, staying responsive to shifting market conditions and consumer demands. The agility provided by cloud platforms ensures that your business can innovate at pace, maintaining your competitive edge in a fast-moving digital landscape.

Take smart, measured steps

The scalability of cloud platforms also makes it easier to adopt AI gradually, scaling up at your own pace. AI isn't just for Christmas. So, instead of diving into large-scale AI projects from the start, you can begin with small, manageable initiatives and expand when you're confident in the value they're creating. Cloud technology allows you to experiment without significant upfront investment, minimising risk while building a strong foundation for growth. By taking a step-by-step approach, you can ensure that your AI capabilities grow sustainably, scaling only when you're ready and avoiding the pressures of premature expansion.

With governance, security and flexibility in place, AI and cloud can now deliver on their full potential. So, what's next?

The 12 Days of AI

On the First Day of AI, we explore how AI is being used in marketing, the benefits and key use cases, as well as concerns and how marketers can best take advantage of the technology.

On the Second Day of AI, we look at the importance of truly understanding what AI is to enable true organisational transformation.

On the Third Day of AI, we explore some of the key trends to keep an eye on, and prepare for, in 2025.

On the Fourth Day of AI, we discuss the value of adopting AI responsibly, and outline how businesses can build responsible adoption into their plans.

On the Fifth Day of AI, we explore how AI is reshaping HR: boosting productivity, addressing concerns, and preparing organisations for the future.

AI and cloud technology are a natural fit, and if you leverage both, your business will be well-positioned for future success. The cloud not only provides the scalable infrastructure that AI needs, but also empowers AI to deliver actionable insights that drive business growth, from automation to predictive analytics.

As we enter 2025, now is the perfect time to unwrap the full potential of AI and cloud technology. With the right strategy, this dynamic duo can help your business transform its operations, innovate efficiently, and prepare for whatever challenges lie ahead.

Matt Gallagher is technical product manager at ANS, a digital transformation provider and Microsoft's UK Services Partner of the Year 2024. Headquartered in Manchester, it offers public and private cloud, security, business applications, low code, and data services to thousands of customers, from enterprise to SMB and public sector organisations.
  • WWW.COMPUTERWEEKLY.COM
    In 2025: Identities conquer, and hopefully unite
We're halfway through another decade, and as we enter a new year we are seeing new resolutions, new possibilities and, regrettably, new threats.

Cyber security is now a headline-grabbing and board-level conversation. You only have to look at recent incidents affecting the NHS, the British Library and, of course, CrowdStrike to see that cyber security and identity-based attacks affect consumers, employees and businesses on a global level.

This is only set to continue as CISOs and security teams are faced with bigger and more sophisticated challenges in the coming months and years, and as we close out the decade. But what is front of mind for CISOs and their teams? And how are they tackling these issues? Here are two trends that should be in the crosshairs of businesses for 2025.

Along with cyber security, a key theme of 2025 will be the rise of AI. According to Gartner, AI agents will be the most important technology trend in 2025, with analysts predicting that 15% of daily work decisions will be made autonomously by AI agents by 2028. While the productivity gains will be immense, the cyber security industry needs to have an urgent conversation about information access control for the coming explosion of autonomous AI agents. If we don't, we'll see a rising tide of both accidental and hostile cyber breaches and data leakage next year.

By the end of 2025 and into the latter half of the decade, we'll be living in a world with billions of autonomous AI agents acting on our behalf. There are important questions that the cyber security industry needs to answer, such as: what are these bots doing? What information do they have access to? And how do we set and control the conditions and parameters around what information they can share, with whom, and under what circumstances?

Right now, all these questions are up in the air. These bots don't even have the benefit of basic cyber security awareness training. They don't have that human sixth sense that tells us something might be wrong. They can't think for themselves. All it takes is one rogue prompt for an AI agent to mistakenly share sensitive personal or financial information with another agent, and things could quickly spiral out of control.

It's not all doom and gloom, though, and going into 2025 we need to have a renewed optimism that things can improve. For CISOs and security teams to be able to tackle the increasing threat landscape, we need a mindset shift across the cyber security industry, with far more collaboration between industry players. We face an unprecedented threat environment, and this is before the potential risks that AI agents bring to the table.

In the coming years, we need to agree and implement more standards, best practices and frameworks around cloud applications and how they communicate with each other, so that they are secure by default. A single cyber security vendor can't do that alone.

At Okta, we've started on this with the Interoperability Profile for Secure Identity in the Enterprise (IPSIE), to help standardise secure identity management, in partnership with the OpenID Foundation. I'd like to see more organisations sign up to this standard, and other standards be introduced, to help businesses, and ultimately end users, improve their security posture.

The Computer Weekly Security Think Tank looks ahead

Mike Gillespie and Ellie Hurst, Advent IM: CISOs will face growing challenges in 2025 and beyond.

Elliot Rose, PA Consulting: The most pressing challenges for CISOs and cyber security teams.

Pierre-Martin Tardif, ISACA: Six trends that will define cyber through to 2030.

The world of cyber security and identity-based attacks is a complex and ongoing struggle that is spurring constant innovation and adaptation on both sides. For companies looking to protect their users and data, it will take continued evolution in technologies, policies and business processes to put up an effective defence. This requires businesses to collaborate to improve their security posture, educate consumers and the workforce, and continue to adapt as quickly as threat actors do. Only then will we be able to create a world where data is secure by default and consumers are able to trust businesses with their most valuable asset: their identity.

Stephen McDermid is EMEA CSO at Okta.
  • WWW.COMPUTERWEEKLY.COM
    Trains delayed due to nationwide fault with comms system
A fault instigated by the installation of new hardware at a major communications hub hindered the ability of train drivers and signallers to automatically log in to the onboard radio communications system.

By Sebastian Klovig Skelton, data and ethics editor. Published: 06 Dec 2024 17:30

A nationwide fault with the onboard radio communications system used by train drivers and signal operators has caused major disruption across the UK's rail network, according to National Rail.

The fault affected the Global System for Mobile Communications – Railway (GSM-R). The unspecified hardware was installed as part of an upgrade to the system, which was rolled out to increase the safety of the UK's train network and reduce the costs associated with running a patchwork of legacy systems.

National Rail said the fault, which specifically disrupted GSM-R's automated login process, meant trains were unable to register onto their route for the start of service and deregister at the end of their service. This prompted staff to connect manually via a wild card code, akin to a Wi-Fi password, that allowed them to once again communicate with the national network.

However, use of the manual process delayed many trains by up to 15 minutes, while others were subject to cancellations or alterations as a result. The BBC reported that the well-rehearsed implementation of the manual backup system meant no safety-critical issues occurred while the fix was under way.

The problem affected at least 10 rail lines across the country, including Great Northern, Northern, ScotRail, Southeastern, Southern, South Western Railway, Thameslink, Gatwick Express, Heathrow Express and the Elizabeth line.

Although the problem was fixed within around three hours of being reported, National Rail said there may be some residual disruption while timetables are restored. For those seeking compensation, it added that customers should keep hold of train tickets and make a note of their journey, as both would support any claim.

Read more about rail network comms

PKP Polskie Linie Kolejowe on track for 5G-based railway comms: Polish railway operator teams with comms tech provider for trials of a 5G-based railway communications system to replace a legacy system and prepare for technological migration.

National roll-out of UK 5G standalone across road and rail could unlock £3bn for economy: Modelling from a leading UK operator suggests roll-out of nationwide 5G standalone will transform road and rail travel across the country, saving billions on fuel and boosting productivity through remote working on trains.

Telent completes communications systems upgrades at HS1 railway stations: Telent designs and installs data network and management system at four international stations.
  • WWW.COMPUTERWEEKLY.COM
    How AI can help you attract, engage and retain the best talent in 2025
    As we move into 2025, the landscape of human resources (HR) is heading for a significant transformation. Artificial intelligence (AI) is set to revolutionise workforce collaboration, efficiency, and talent management.For HR leaders, harnessing the power of AI will be essential to attract, engage, and retain top talent in an increasingly competitive market.AutomationAI is reshaping and revamping HR by automating routine and mundane tasks such as interview scheduling, data entry, and CV screenings. This automation allows HR teams to focus on strategic initiatives that add real value to employees, such as developing diverse cultures, offering tailored development programmes, and increasing engagement.AnalyticsAI-powered analytics can identify workforce trends, predict employee turnover, and suggest to retain top talent. These insights enable HR leaders to make data-driven decisions to support a high-performance culture, ultimately improving employee engagement and organisational performance.Just look at Unilever, which uses AI to streamline its recruitment process. By using AI-driven assessments and video interview analytics, Unilever has significantly reduced time-to-hire while enhancing the candidate experience. Additionally, AI can streamline performance management by providing continuous feedback and personalised development plans. This shift towards real-time performance management fosters a culture of continuous improvement, where the team receives timely feedback and support to achieve their goals, leading to higher engagement levels and better retention rates.RecruitmentAs the demand on sourcing talent with scarce skills continues in 2025, attracting top talent needs innovative strategies. AI can play a pivotal role in enhancing the candidate experience. Imagine AI-driven chatbots engaging with candidates in real-time, answering their questions and providing personalised information about the company and the role. 
This immediate engagement can significantly improve the candidate experience, making the organisation more attractive.AI can also help create a more inclusive hiring processes by eliminating unconscious biases from recruitment. AI algorithms can analyse job descriptions to ensure they are free from biased language and assess candidates based on objective criteria. This is an incredibly important step to support organisations in attracting and growing a more diverse and inclusive workforce, which is crucial for driving innovation and business success.RetentionRetaining your team is equally important as attracting it. AI can help HR leaders identify early signs of peoples disengagement or dissatisfaction. For instance, AI-powered sentiment analysis can monitor employee communications and flag any negative sentiments, allowing HR and managers to intervene proactively. By addressing issues before they escalate, organisations can improve the satisfaction, happiness and ultimately retention of the team. AI can also facilitate personalised employee development. By analysing skills, performance data, and career aspirations, AI can recommend tailored development programmes and career paths for each individual. This personalised approach to development can help people feel valued and supported.24% of all workers are worried that AI will soon make their job obsolete. HR leaders have a crucial role in addressing these concerns and ensuring their teams are ready for AI integration. Providing training and the right tools to integrate AI smoothly is essential. By fostering a culture of continuous improvement and responsible AI use, HR can drive greater efficiency and empower the entire workforce.AI is more likely to enhance roles rather than replace them, and HR leaders should embrace AI ethically and transparently. This involves being clear about how AI is used, ensuring data privacy, and maintaining a human touch in all interactions. 
By doing so, HR can build trust and create a positive environment where AI is seen as a tool for empowerment rather than a threat.

As we approach 2025 and beyond, the integration of AI in HR will continue to evolve. Future trends may include more sophisticated AI-driven talent management systems, enhanced predictive analytics for workforce planning, and even more personalised employee experiences powered by AI. HR leaders who stay ahead of these trends and continually innovate will be well placed to lead their organisations into the future.

Looking ahead to the new year, AI will play a pivotal role in enhancing HR functions, making them more efficient, strategic and employee-centric. By leveraging AI to attract, engage and retain top talent, organisations can stay competitive in a rapidly evolving job market. HR leaders who embrace AI responsibly and proactively will be well placed to drive their organisations forward, creating workplaces that are both productive and fulfilling for their teams.

Toria Walters is chief people officer at ANS, a digital transformation provider and Microsoft's UK Services Partner of the Year 2024. Headquartered in Manchester, it offers public and private cloud, security, business applications, low code, and data services to thousands of customers, from enterprise to SMB and public sector organisations.
    US TikTok ban imminent after appeal fails
An appeals court in the United States has upheld a law passed by Congress earlier in 2024 to ban China-owned video-sharing social media platform TikTok in the US on national security and data protection grounds.

The law sailed through the US legislature back in April, after being included in a wider package of aid for Israel, Taiwan and Ukraine. It gives TikTok's parent, ByteDance, notice to either sell TikTok to a US-based entity or be removed from online app stores for good, with both Apple and Google facing financial penalties if they do not comply.

The law's passage came amid a growing freeze in relations between the US and China, and a spate of accusations from Western cyber security agencies claiming widespread Chinese cyber espionage.

TikTok appealed, but the US Court of Appeals for the District of Columbia Circuit today [6 December] unanimously denied the petition.

In the court's opinion on the case of TikTok and ByteDance Ltd versus Merrick Garland [US attorney general], judge Douglas Ginsburg said the decision had significant implications for both TikTok and its users, because unless ByteDance divests the business by 19 January 2025, or the president grants a 90-day extension, the TikTok platform will effectively be unavailable in the United States. Consequently, TikTok's millions of users will need to find alternative media of communication.

Ginsburg wrote that this burden was attributable to China's hybrid commercial threat to US security, and not to the US government, which he wrote has been engaged with TikTok for some time in efforts to find alternative solutions.

Ginsburg also dismissed TikTok's arguments that a ban infringed its First Amendment rights; the First Amendment, dating back to December 1791, guarantees freedom of speech and the press in the US.

"The First Amendment exists to protect free speech in the United States. Here, the government acted solely to protect that freedom from a foreign adversary nation and to limit that adversary's ability to gather data on people in the United States," he wrote.

"The Supreme Court has an established historical record of protecting Americans' right to free speech, and we expect they will do just that on this important constitutional issue," a TikTok spokesperson said via social media site X.

"Unfortunately, the TikTok ban was conceived and pushed through based upon inaccurate, flawed and hypothetical information, resulting in outright censorship of the American people. The TikTok ban, unless stopped, will silence the voices of over 170 million Americans here in the US and around the world on 19 January 2025."

According to US news network CNBC, TikTok plans to seek an injunction to have the case heard before the US Supreme Court in Washington DC.

The one saving grace for TikTok may yet be the incoming Republican administration led by president-elect Donald Trump, who returns to the White House in January for an historic second term.

Prior to the 2020 election, Trump had led calls for a ban on TikTok, and came close to achieving this goal. However, after the Biden administration's legal intervention, he now appears to have had a change of heart. Indeed, back in September, he briefly positioned it as a campaign issue, encouraging TikTok users to cast their vote for him.
At the time of going to press, however, Trump had not stated whether he will actually enforce a ban.

Craig Singleton, senior fellow and China programme director at the Foundation for Defense of Democracies, who contributed extensively to an amicus brief on which the court heavily relied, said the ruling underscored a growing consensus that time was up for TikTok, at least in its current form.

"The unanimous decision is a clear warning shot to foreign companies operating in sensitive sectors: they must play by the rules or face the consequences," said Singleton.

"Expect TikTok to pull every lever, including lobbying, lawsuits and public pressure, to stall divestiture. But the bipartisan appetite for action means the company's runway is rapidly shrinking."

The ruling also serves as a bellwether for how the US, and by extension its core allies including the UK, confront tech threats from authoritarian regimes, and for policymakers the saga so far serves as a test of whether the law can keep up with emerging threats, he said.

"For Beijing, this is more than just about TikTok: it's a symbolic and strategic loss in the broader tech competition with Washington," added Singleton. "There can be no doubt that this ruling undercuts Beijing's ability to use TikTok as a powerful tool for influence, data collection, and narrative control within the US, marking a significant strategic loss."

"China has few meaningful options apart from retaliatory rhetoric or tit-for-tat measures targeting US companies operating in China," Singleton told Computer Weekly in emailed comments.

"While Beijing is likely to issue strong condemnations, we shouldn't expect any dramatic responses. China may complain loudly, but with its economy under strain, this is more a diplomatic headache than an immediate crisis."
    Six trends that will define cyber through to 2030
From Covid-19 to war in Ukraine, SolarWinds Sunburst, Kaseya, Log4j, MOVEit and more, the past five years brought cyber to mainstream attention, but what comes next? The Computer Weekly Security Think Tank looks ahead to the second half of the 2020s.

By Pierre-Martin Tardif, ISACA. Published: 06 Dec 2024

Guessing the future is always a difficult task. Six trends for the next five years seem more apparent than others, and it will be interesting to re-read this article in 2029 to assess its accuracy. In the meantime, the six trends standing out as top priorities, in no particular order, are as follows.

Preparing the post-quantum cryptographic migration, including raising top management awareness to provide sufficient resources.

There will be a need to identify where cryptography is used in the organisation; it can be found in several places, including libraries, the internet of things (IoT), communication protocols, storage systems and databases. Prioritising systems for the transition will be paramount, taking care to clearly identify your critical systems.

Choosing how to manage the transition will also be essential, since a poorly handled migration may hinder the organisation. More precisely, hybrid protocols mixing classical and post-quantum cryptography could be an interesting option to consider, since they allow your clients to migrate at their own pace.

Also, testing will be mandatory, while deploying a realistic test environment might be complex. Finally, the right migration time will be hard to establish, even if governments provide guidelines.

Finalising oversight of operational technology (OT), improving its cyber resilience, and integrating it into existing cyber security operations. This convergence started more than 10 years ago and is still ongoing.
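The hybrid classical/post-quantum idea mentioned above usually means deriving one session key from two shared secrets, so the scheme stays secure if either algorithm is later broken. A minimal sketch using HKDF (RFC 5869) from Python's standard library; the random byte strings stand in for the outputs of a real ECDH exchange and a post-quantum KEM such as ML-KEM, which production code would obtain from a cryptography library.

```python
import hashlib
import hmac
import os

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # HKDF-Extract: condense input keying material into a pseudorandom key
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    # HKDF-Expand: stretch the pseudorandom key to the desired length
    out, block = b"", b""
    for i in range(1, -(-length // 32) + 1):
        block = hmac.new(prk, block + info + bytes([i]), hashlib.sha256).digest()
        out += block
    return out[:length]

# Stand-ins for real handshake outputs (illustrative only):
classical_secret = os.urandom(32)   # e.g. from an ECDH exchange
pq_secret = os.urandom(32)          # e.g. from an ML-KEM encapsulation

# Concatenating both secrets means the session key remains safe
# as long as at least one of the two exchanges is unbroken
prk = hkdf_extract(salt=b"hybrid-demo", ikm=classical_secret + pq_secret)
session_key = hkdf_expand(prk, info=b"handshake data", length=32)
```

Both parties performing the same two exchanges derive the same `session_key`, which is what lets clients migrate at their own pace: a peer that only supports classical key exchange still interoperates while post-quantum support rolls out.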
OT cyber security must address human safety concerns and requires intensive collaboration with engineering.

The monitoring approach should rely on artificial intelligence (AI) to identify abnormal behaviour from weak signals, to support advanced persistent threat hunting. Since some systems are legacy, they may lack the features needed to collect the required information directly. Encapsulating them within an intermediate security system could be a viable solution.

A layered defence strategy and a movement toward a zero-trust architecture might help minimise the attack surface.

Improving cyber security fundamentals, including identity management and network micro-segmentation, and supporting zero-trust architecture while enabling automated threat response.

This leads to implementing robust identity and access management that enforces least-privilege principles and multi-factor authentication. By integrating policy-based automation, access management becomes more dynamic, transparent and enforceable. Continuous monitoring and real-time analytics should be used to detect anomalies and unauthorised activities, drawing on user behaviour, device posture and geolocation.

Learning how to conduct cyber security for artificial intelligence pipelines (AIOps) while constructing a business case for AI-based cyber security, such as zero-day attack detection. This dual focus addresses the sharply increasing complexity of cyber threats and the pervasiveness of AI.
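The policy-based, context-aware access decisions described above can be sketched as a rule evaluation over exactly those signals: user authentication, device posture and geolocation, plus a behavioural anomaly score. A toy illustration only; real zero-trust policy engines evaluate far richer attributes, and the field names and threshold here are invented.

```python
from dataclasses import dataclass

@dataclass
class AccessContext:
    mfa_passed: bool            # multi-factor authentication succeeded
    device_compliant: bool      # e.g. patched OS, disk encryption on
    geo_allowed: bool           # request originates from an approved region
    anomaly_score: float        # 0.0 (normal) .. 1.0 (highly unusual behaviour)

def decide(ctx: AccessContext) -> str:
    # Least privilege: deny unless every hard requirement checks out
    if not (ctx.mfa_passed and ctx.device_compliant and ctx.geo_allowed):
        return "deny"
    # Unusual behaviour triggers step-up verification rather than a hard deny
    if ctx.anomaly_score > 0.7:
        return "step-up-auth"
    return "allow"
```

Encoding the policy as data-driven rules like this is what makes access management "dynamic, transparent and enforceable": the decision is re-evaluated on every request as signals change, rather than granted once at login.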
As AI continues to revolutionise the landscape, international and domestic regulations are being defined and will become vital to ensure its compliance, resilience and trustworthiness.

Addressing increasing regulation to maintain global compliance, notably for privacy, critical infrastructure and business continuity.

As stricter rules are adopted, such as the European Union's (EU's) General Data Protection Regulation (GDPR) and AI Act, California's Consumer Privacy Act (CCPA) for privacy, the European Network and Information Systems Directive 2 (NIS2) and CISA guidelines in the United States for critical industries, and more specific requirements from the EU's Digital Operational Resilience Act (DORA) for the financial industry, organisations need to contextualise these requirements and integrate them into their security posture.

Collaborating closely with third parties, including identifying their Software Bill of Materials (SBOM), and communicating any vulnerability along the supply chain. This will remain an important priority for security leaders as the global enterprise landscape becomes increasingly interconnected. It should ensure a better understanding of an organisation's dependencies on its third parties and, as the organisation matures, the broader interdependencies of its ecosystem.

In conclusion, while predicting the near future remains a challenging task, these six priorities will play a pivotal role in organisational resilience. As we look ahead, there seems to be a distant echo on the horizon. Let's hope it is not your next threat!

Pierre-Martin Tardif is a member of the ISACA Emerging Trends Working Group.
A longstanding IT and cyber security professional and educator, he is based in Quebec, Canada.
    Met Police challenged on claim LFR supported by majority of Lewisham residents
The Metropolitan Police has claimed its live facial-recognition (LFR) deployments in Lewisham are supported by the majority of residents and local councillors, but a community impact assessment (CIA) obtained by Computer Weekly shows there has been minimal direct consultation with residents, while elected officials continue to express concern.

In August 2024, Lewisham councillors complained there had been no engagement with the local community ahead of the controversial technology being deployed in the area, with the Met announcing in a tweet that the tech would be used just a month after being urged by councillors to improve its community engagement around LFR.

Responding to Computer Weekly's questions about the concerns raised by Lewisham councillors, a Met Police spokesperson said at the time that its LFR deployments have been very much supported by the majority of Lewisham residents, business owners and political representatives, namely Lewisham councillors.

The spokesperson added that over the previous six months, the force had delivered more than six briefings at a mixture of public forums, private council and independent advisory group sessions to explain what its LFR deployments entail and to answer all enquiries posed by committee members.

However, according to the CIA obtained under freedom of information (FoI) rules by Computer Weekly, the only mention of residents in the entire document is when detailing the press response given to Computer Weekly.

Despite the Met claiming its LFR deployments are supported by the majority of residents, the CIA also explicitly notes there is mixed opinion about the operation within the community, adding that while there is nothing to suggest there would be any form of disorder or criminality in relation to the deployment, there is likely to be some opposition.

In terms of actual engagement conducted by the Met, the CIA notes the force held seven meetings between March and August 2024, including five with various council bodies, and two
sets of public discussions: one at the New Met for London event held at the Albany in Deptford, and another held in relation to the Met's London Race Action Plan.

The council bodies engaged with included a select committee tasked with scrutinising LFR deployments, the Lewisham Independent Advisory Group (IAG) for LFR, and the Safer Neighbourhoods Board (SNB).

Members of the Safer Stronger Communities Select Committee urged improved communication with residents concerning LFR deployments, as well as a need to increase stakeholder engagement, the committee told Computer Weekly in response to the CIA document. Many councillors are on record (as evidenced in meeting minutes) calling for improved communication with residents and stakeholders, noting there has been minimal stakeholder engagement regarding LFR deployments thus far.

Expressing her own views on the matter, independent councillor and Safer Stronger select committee member Hau-Yu Tam, who previously stressed the need to give local people the ability to scrutinise the Met's approach, told Computer Weekly she is personally aware of only one instance of consultation between the Met and Lewisham's SNB, the borough's independent forum for community engagement with the police. The CIA document confirms that only one formal meeting with the SNB has been recorded, which took place on 26 March 2024.

"Policing is touted as being legitimised by community consent, so they tick the box of community consultation, but it doesn't take much digging to find that the consultation is extremely poor," she said. She added that the effectiveness of the consultation is limited by the fact that few people are consulted, and by the Met's use of leading questions when talking to people about the technology, which are designed to sell LFR to the public rather than to understand and act on areas of concern.
"It's similar to a lot of large public institutions, including Lewisham Council, in that consultation is undertaken poorly because communities are not engaged. Above all, budget cuts, including to communities, are being passed down, with the political and executive leadership failing to formulate alternatives, or even to believe alternatives can be possible."

An example of the leading nature of the Met's engagement process is shown by an email to an SNB member (not recorded in the CIA), which has been shared with Computer Weekly. In it, a Met police officer explains that local policing teams are proposing to run an LFR operation in the area, highlighting only the benefits of the technology.

"This is used to identify individuals who are sought by police in relation to ongoing investigations, with a focus on violence against women and girls. Previously, this has been extremely successful in other local boroughs, e.g. identifying an individual who was sought for a serious domestic violence incident and had been evading police by changing appearance," they said.

"Facial-recognition technology is a very valuable tool to help to catch perpetrators of crime that impact individuals and communities. Is this something that you think is a good idea, and would support? We appreciate your comments."

Tam said the email shows the Met framing LFR solely around the prevention of violence against women and girls in a way that would appeal to the recipient, because obviously they would express support in that context.

She added that the biggest issue is the lack of mechanisms in place for dealing with critical comments about LFR: What people support is safer streets and improved equity and community cohesion.
They don't necessarily support live facial recognition, which they're not given the full rundown of, or they're given very misleading information about.

She further added that while the Met does seek input from legitimate voices, the same sorts of voices are over-represented: People who would be hurt or harmed by LFR don't have the means to access the consultation, nor are their views really allowed to be registered.

Tam said that while the Met may have formally engaged with the SNB on LFR issues, many members of that body have raised concerns about the use of LFR by police, adding: "There's a lot of trepidation about this."

Computer Weekly contacted the Met about the CIA process and every aspect of the story.

"The Met is committed to making London safer, using data and technology to help identify offenders that pose a risk to our communities," said Lindsey Chiswick, the force's director of performance. "We continue to engage with and listen to views from a range of voices across Lewisham on our use of LFR technology, including local residents, councillors, local businesses and retailers."

A spokesperson for the force added that the Met is committed to transparency and community engagement in its use of LFR technology, which they described as a key tool for enhancing public safety that also enables police to identify individuals wanted for serious offences while minimising disruption to the wider public.

Officers have conducted extensive engagement with the Lewisham community, including local residents, councillors, businesses and advisory groups, they said. These sessions provide an open platform for discussion, allowing us to explain how LFR works, the intelligence-led process behind deployments, and the safeguards in place to protect privacy and human rights.
We also share data, such as the number of arrests, other outcomes and false-positive alerts, to ensure accountability and transparency.

We understand the concerns raised by some community members and are committed to listening to all voices, including those critical of LFR. Engagement is intended to be inclusive, and we work with independent advisory groups [IAGs] and community leaders to reach those who may not always have access to formal consultation processes.

Our focus is on ensuring the safety of London's streets while maintaining open, honest dialogue about the use of LFR technology.

Responding to the contents of the CIA, Charlie Whelton, policy and campaigns officer at human rights group Liberty, said: "Facial-recognition technology effectively enables the police to identify and track anyone they choose. But instead of reaching out to the residents of Lewisham on the impacts of this dangerous surveillance tech, the Met has redefined community engagement as speaking to high-level officials.

"The real community impact of facial recognition is that our privacy is undermined, our movement restricted, and our risk of being subjected to a false stop from a dodgy algorithm is increased as we just go about our lives. None of these were addressed within the assessment as the Met Police continue to push forward this unknown and unchecked technology."

He added that the huge power LFR grants police is particularly concerning after years of high-profile scandals involving violent, racist and sexist police forces in the UK: "The government must urgently introduce safeguards to restrict the use of this invasive technology and for the police to recognise the true impact on the communities they are spying upon."

Jake Hurfurt, head of research and investigations at privacy campaign group Big Brother Watch, added that it is hard to evaluate the efficacy of the Met's community engagement in Lewisham because the CIA is so light on detail: "It doesn't demonstrate very good engagement at all."
Echoing sentiments from Tam that the CIA is a box-ticking exercise, he further added that because there is so little genuine community engagement over LFR with people who live in Lewisham, the engagement process becomes a rubber stamp for the Met's continued deployments.

"To be honest, do it properly or don't bother," he said, adding that the way the Met has characterised its engagement with councillors is also an issue: "We're in conversation with councillors and a lot of them aren't happy."

According to a spokesperson for Lewisham Council, the local authority will continue to carefully monitor LFR's implementation in the borough and will continue to engage with the police and other local authorities where it is being used.

Hurfurt concluded that for there to be meaningful community engagement, the process needs to be done without the Met's thumb on the scale, which it applies by limiting its consultation to mostly high-level council meetings and officials.

"You have to properly consult people, giving them a chance to object, to raise concerns, and listen to them, rather than tick a box. There's a chance this undermines trust in the police if it's not done properly," he said, adding that while a number of local authorities have passed motions expressing their opposition to police deployment of LFR in their boroughs, it has been deployed anyway.

In January 2023, for example, Newham Council unanimously passed a motion to suspend the use of LFR throughout the borough until biometric and anti-discrimination safeguards are in place. While the motion highlighted the potential of LFR to exacerbate racist outcomes in policing, particularly in Newham, the most ethnically diverse of all local authorities in England and Wales, both the Met and the Home Office said they would press forward with the deployments anyway.

As part of the authorisation process
and before any deployment, a specific community impact assessment is completed by the local BCU [Basic Command Unit], said a Met police spokesperson at the time. This assessment involves speaking to a wide number of local groups so that policing is informed of those views and can take those into consideration before any decision to deploy is made.

The Met's own LFR policy document states it may be appropriate to pursue engagement opportunities with a number of stakeholders prior to any deployments taking place.

Chiswick, speaking as the Met's then-director of intelligence, has also previously told Lords that LFR is a precision-based, community crime-fighting tool, adding in a later session that because of a lack of support for police among specific community groups, there would need to be engagement with them prior to any LFR deployments to quell any fears people might have.

"You get told there's all this engagement by the Met, but they're just cracking on," said Hurfurt.

On 13 November 2024, MPs held their first-ever debate on the police use of LFR technology, eight years after the Met first deployed it at Notting Hill Carnival in August 2016.

MPs, including members of both front benches, discussed a range of issues associated with the technology, including the impacts of LFR surveillance on privacy; problems around bias, accuracy and racial discrimination; the lack of a clear legal framework governing its use by police; and how its wider roll-out could further reduce people's dwindling trust in police.

While there were differences of opinion about the efficacy of LFR as a crime-fighting tool, MPs largely agreed there are legitimate concerns around its use by police, with a consensus emerging on the need for proper regulation of the technology. The majority of MPs involved in the debate openly questioned why there had been no debate about police use of the technology until now.
    Government agencies urged to use encrypted messaging after Chinese Salt Typhoon hack
US government agencies have been urged to use end-to-end encrypted messaging services, including WhatsApp, Signal and FaceTime, following disclosures that China has breached US telephone networks in a hacking operation that undermines US national security.

In a letter to the US Department of Defense (DOD), two prominent senators warned the DOD is placing security at risk through its continued use of unencrypted landlines and unencrypted platforms such as Microsoft Teams.

The warning follows confirmation from the FBI and the US Cybersecurity and Infrastructure Security Agency (CISA) that groups linked to the People's Republic of China have compromised multiple telephone networks and accessed private communications of a limited number of people in government and politics, in a hacking operation dubbed Salt Typhoon.

Democratic senator Ron Wyden and Republican Eric Schmitt criticised the defence department for failing to use its purchasing power to require wireless telephone service providers to provide cyber defences and accountability, in a letter on 4 December 2024.

"DOD's failure to secure its unclassified voice, video and text communications with end-to-end encryption has left it vulnerable to foreign espionage," they warned.

The senators disclosed previously classified details of a trial by the US Navy to test end-to-end encrypted communications platform Matrix, an open-source, decentralised service widely used by Nato countries.
The US Navy is testing Matrix to send encrypted messages from 23 ships and three on-shore sites.

While we commend the DOD for piloting such secure, interoperable communications technology, its use remains the exception, with insecure, proprietary tools in widespread use within the DOD and the federal government generally, the senators said.

"The widespread adoption of insecure, proprietary tools is the direct result of DOD leadership failing to require the use of default end-to-end encryption, a cyber security best practice, as well as a failure to prioritise communications security when evaluating different communications platforms."

The Salt Typhoon attack, first reported by the Wall Street Journal, has targeted individuals including president-elect Donald Trump, vice-president-elect JD Vance and Senate majority leader Chuck Schumer, according to press reports.

This successful espionage campaign should finally serve as a wake-up call regarding the government's communications security, despite repeated warnings from experts and Congress, the senators wrote.

The FBI and CISA have recommended that people use encrypted messaging and voice services such as Signal and WhatsApp to reduce the risk of hackers intercepting text messages.

CISA executive assistant director for cyber security Jeff Greene told broadcaster NBC this week: Encryption is your friend, whether it's on text messaging or if you have the capacity to use encrypted voice communication.
Even if the adversary is able to intercept the data, if it is encrypted, it will make it impossible.

According to a blog post by cyber security expert Bruce Schneier in October 2024, Chinese hackers appear to have accessed backdoors used by the US government to execute wire-tapping requests, which have been mandated by the Communications Assistance for Law Enforcement Act, enacted in 1994.

"For years, the security community has pushed back against these backdoors, pointing out that the technical capability cannot differentiate between good guys and bad guys," he said. "And here is one more example of a backdoor access mechanism being targeted by the wrong eavesdroppers."

Matthew Hodgson, co-founder of Matrix.org, a non-profit foundation developing standards for end-to-end encryption, told Computer Weekly that the Salt Typhoon hack was an unfortunate validation of concerns raised about the impact of the UK's Online Safety Act, which contains measures that could be used to weaken end-to-end encrypted communications services.

"It is morbidly amusing to see all of the intelligence agencies telling everybody that actually, end-to-end encryption is a good idea, and the backdoors are a bad idea, and everybody should hop on encrypted systems like Matrix or Signal rather than trust the phone network anymore," he said.
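Greene's point, that intercepted data is useless without the key, can be illustrated with a toy one-time pad. This is a teaching sketch only: real messengers such as Signal and Matrix use authenticated protocols (for example the Double Ratchet over modern ciphers), not a hand-rolled XOR pad.

```python
import os

def xor(data: bytes, key: bytes) -> bytes:
    # One-time pad: XOR each byte with a key byte of the same length.
    # With a truly random, never-reused key, the ciphertext alone
    # carries no information about the plaintext.
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at the usual place"
key = os.urandom(len(message))       # shared secret, never reused

ciphertext = xor(message, key)       # this is all an eavesdropper sees
recovered = xor(ciphertext, key)     # only the key holder can invert it
```

An interceptor on the telephone network gets `ciphertext` and nothing else; without `key`, every possible plaintext of the same length is equally consistent with what they captured, which is exactly why the agencies now recommend end-to-end encrypted channels.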
    2025: The year of AI for business - top trends to watch out for
You might not have started thinking about your Christmas shopping yet, but I bet you've been thinking about what artificial intelligence (AI) for business is going to look like in 2025. If you haven't, then settle in with a glass of mulled wine, because now is your chance.

AI has come on leaps and bounds over the past few years and is currently one of the biggest opportunities for business growth. With capabilities to intelligently automate admin tasks, take on customer service work and analyse masses of data, the advantages are endless. But there's still lots of room for development, in ways that will and won't surprise you.

Like your list of New Year's resolutions, the regulation landscape is constantly changing and adapting to the needs of tech businesses. For AI development to thrive in 2025, there must be a supportive environment ready for it. There's no denying the appetite for AI, with over 120 bills on AI currently before the United States Congress. These build upon regulations already in place, such as the EU AI Act, which promotes the rapid adoption of trustworthy AI through reduced administrative burdens for SMEs and clear requirements for AI use.

The EU AI Act defines AI systems by their risk rating, splitting them into prohibited, high-risk, limited-risk and minimal-risk groups. This is something we could see changing in 2025, with the potential for new legislation focusing on AI classification over risk. This approach would consider criteria such as the intended uses and basic properties of AI systems.

New legislation coming into effect next year will significantly impact how businesses can use AI. Data management is one area likely to see substantial legislative focus, ensuring that AI does not compromise the security and privacy of business and customer data.

As new legislation is rolled out in 2025, it will give businesses and developers more freedom and safety to broaden AI's scope.
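The four risk tiers above can be modelled as a simple triage table. The tier names come from the EU AI Act; the example use cases mapped to them here are illustrative assumptions commonly cited in discussions of the Act, not legal advice.

```python
from enum import Enum

# The EU AI Act's four risk tiers, as described above.
class AIRiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited-risk"
    MINIMAL_RISK = "minimal-risk"

# Hypothetical triage table an SME might keep while classifying systems;
# the use-case examples are illustrative, not drawn from the Act's text.
USE_CASE_TIERS = {
    "social scoring of citizens": AIRiskTier.PROHIBITED,
    "cv screening for hiring": AIRiskTier.HIGH_RISK,
    "customer service chatbot": AIRiskTier.LIMITED_RISK,
    "spam filtering": AIRiskTier.MINIMAL_RISK,
}

def classify(use_case: str) -> AIRiskTier:
    # Conservatively default unknown systems to high-risk pending review.
    return USE_CASE_TIERS.get(use_case.lower(), AIRiskTier.HIGH_RISK)
```

A classification-first regime of the kind the article anticipates would replace the lookup's keys with structural properties of the system (intended use, capabilities) rather than a fixed risk label.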
Many of us will already have AI ingrained into our processes, but what will we be bringing on board next?

Leading the way: Microsoft

One company which has been leading the way in AI development in 2024 is tech giant Microsoft. At its recent Ignite 2024 event, it made several announcements which demonstrate the acceleration of AI in 2025. One of these was that Microsoft Teams will let participants speak in a language of their choice, through its new AI-powered Interpreter feature. By facilitating global communication and collaboration, this is one powerful way in which AI will fuel business growth.

Microsoft also announced the introduction of its AI agents this year. These agents will drive organisation-wide optimisation and automation by collaborating with workers, a step forward from the AI assistants we already have. Agents can be trained to know your organisation from top to bottom, and can compile details for business pitches and presentations while you focus on more valuable tasks.

Cutting corners with automation

Like AI agents, other AI systems which rely on trigger-based automation will flourish in 2025. Once the system is notified of a trigger, such as an email being received, it can digest the information and deliver an automated response. Automated AI will seamlessly slot into business processes, taking care of admin tasks and freeing up time for workers at all levels of the business to spend more time with customers and focus on their long-term needs.

The rise of automated AI creates a need to focus on responsible usage. Automation means that AI could be exposed to confidential data and, without the right protection measures in place, could learn that data and share it without authorisation. Legislation will play a key role in ensuring the responsible and ethical use of AI, but responsibility also lies with business leaders to make sure that AI adoption goes hand in hand with education.
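The trigger-based pattern described above, where a notified event is digested and answered automatically, can be sketched in a few lines. The trigger name, event fields and handler below are all illustrative assumptions; in a real system the handler step would typically call a model to draft the response.

```python
from typing import Callable, Dict

# Minimal sketch of trigger-based automation: handlers are registered
# against named triggers, and notifying the system of an event routes
# the payload to the matching handler. All names here are hypothetical.
handlers: Dict[str, Callable[[dict], str]] = {}

def on(trigger: str):
    """Register a handler function for a named trigger."""
    def register(fn):
        handlers[trigger] = fn
        return fn
    return register

@on("email.received")
def acknowledge(event: dict) -> str:
    # A real system might call an AI model here; we just template a reply.
    return f"Thanks {event['sender']}, we received: {event['subject']}"

def notify(trigger: str, event: dict) -> str:
    """Deliver an event to the handler registered for its trigger."""
    return handlers[trigger](event)

reply = notify("email.received", {"sender": "Ana", "subject": "Invoice 42"})
```

The confidentiality risk the article raises sits inside the handler: whatever data the event carries is exposed to the automated step, so access controls belong there.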
It's important to understand that we will always include a human in the loop, with full observability of these interactions with AI.

AI boot camp

AI-powered systems might be forging new opportunities for businesses, but they lose their value and customer trust if they are inaccurate. To prioritise the accuracy of the models AI systems are trained on, we will see a shift in the New Year in how this process works. Grounding a model in accurate, secure data is extremely important: the better the data, the more accurate the responses will be. Developers may synthesise their training data on large language models, and then train the AI system on a small language model.

This will improve the accuracy of the AI system, but as it adds degrees of complexity, it also poses the risk of bias or incorrect activity, such as AI hallucination. When AI presents information as fact without any data to back it up, it's a sign that something has gone wrong with the training data. While 2025 will be a big year for the development of training models, businesses need to be aware of how their AI systems are being trained, to avoid bias and unethical practice.

The huge amount of investment in 2025 is just one of many signs that AI isn't a fleeting New Year's resolution. Companies like OpenAI and Microsoft have made a long-term commitment to investing in AI development, because they know we're still unlocking its full portfolio of capabilities. Even if they're not profiting from AI right now, it's undeniable that the future is rich.
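The grounding idea above, that an answer with no data behind it should be treated as a possible hallucination, can be sketched as a check of generated text against source documents. Real systems use semantic retrieval and entailment models; the substring match here is a deliberate simplification, and the policy texts are invented examples.

```python
# Toy grounding check: flag a generated answer as possibly hallucinated
# when no grounding document supports it. Plain substring matching is a
# stand-in for the semantic matching a production system would use.

def is_grounded(answer: str, sources: list[str]) -> bool:
    answer_l = answer.lower()
    return any(answer_l in doc.lower() for doc in sources)

sources = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available Monday to Friday, 9am to 5pm.",
]

assert is_grounded("returns within 30 days", sources)              # backed by data
assert not is_grounded("lifetime warranty on all items", sources)  # no backing: flag it
```

The point of the check mirrors the article's warning: an answer that passes fluency but fails grounding should be withheld or escalated to the human in the loop, not shipped to the customer.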
But this isn't just a game for the big players; small businesses will also be staking their claim by adopting and investing in AI. With the developments we'll see next year in automation, robotics and training data, it's certain there'll be a flurry of businesses who haven't yet explored AI looking to adopt it. To make the most of the new developments, don't wait until New Year's Day to get started: reach out to the experts now to help your business get AI-ready.

Chris Huntingford is the newly promoted director of AI at ANS, a digital transformation provider and Microsoft's UK Services Partner of the Year 2024. Headquartered in Manchester, it offers public and private cloud, security, business applications, low code and data services to thousands of customers, from enterprise to SMB and public sector organisations.
    From front to back: tech vice-president Dan Lake on Notonthehighstreet.com's tech strategy
The big news from online marketplace Notonthehighstreet.com (NOTHS) in the build-up to peak trading is its new partnership with delivery platform Deliveroo, announced in September.

NOTHS is one of the early wave of non-food retail businesses partnering with Deliveroo to add speedy fulfilment options to their offering. Screwfix led the charge in 2023, and others such as B&Q, Ann Summers, Wilko and The Perfume Shop have followed suit in 2024, opening up rapid delivery via the Deliveroo app to London consumers who need their items pronto.

Launching with 15 brands under the NOTHS umbrella, the partnership enables Deliveroo customers to order personalised gifts on demand for the first time, through the presence of luxury jewellery and accessories retailer and NOTHS partner Hurley Burley on the app, as well as access to goods from a variety of small non-food businesses.

Paul Wilkinson, Deliveroo product director, paid tribute to his company's integrations team in a LinkedIn post in October, saying their work means consumers have up-to-date product and availability information at their fingertips from launch.

"These use a new dedicated API [application programming interface] that we have designed from the ground up for grocery and retail partners, and it has taken a whole village of amazing people to build and ship this," he wrote.

By contrast, the direct tech integration with NOTHS is non-existent at present, according to Dan Lake, vice-president for technology at the online marketplace.
The hardware and software integrations are through the NOTHS brand partners, with a NOTHS logo accompanying brand pages on the Deliveroo app to signify the connection.

"It's an obvious brand partnership that is beneficial to the business," Lake says of the Deliveroo tie-up, which he says generates unprompted NOTHS brand awareness.

"We've not invested anything from a tech point of view, but if it goes very well and we want to scale across the UK, there will be some tech investment needed. This approach buys us time to make our platform easier for integrating into third parties."

And therein lies the crux of the technology challenge NOTHS faces right now. So much of the focus for the business in its 18 years of operating, since being founded by Holly Tucker in 2006, has been on the consumer experience and its front-end capabilities.

But in the past two years, since Lake's arrival from high-flying fitness brand and retailer Gymshark, simplifying behind the scenes and exploring where a "buy, not build" approach to technology might be more appropriate has been the name of the game.

"We've underinvested in the back end," Lake says. "In the two years I've been here, we've gone through a lot of change and been purposeful. It's about going back to what the company was about in the first place: shouting about and supporting small businesses in the UK."

From a tech perspective, he says, it has been important to articulate that NOTHS's definition of "customer" is a dual one, encompassing the end consumer but also the small brands selling through the platform.

"It sounds obvious, and it is obvious internally, but it can get missed in how we decide what we're going to focus on and invest in," he says.

Lake reports directly to CEO Leanne Osbourne, with responsibility for tech products across the organisation.
He acknowledges he joined NOTHS primarily for the tech challenge, identifying it as the reverse of the job he faced at Gymshark, where he was engineering director.

When Gymshark went through its exponential growth period, which resulted in its 2020 unicorn status as a £1bn-valued privately owned business, it needed to build out tech internally to support its core Shopify foundations. At NOTHS, there's a need to work more comprehensively with tech partners and stop relying on building everything in house.

"At NOTHS, we're trying to end up in the same space, but from the opposite end," Lake says, adding that the business is looking to buy more tech rather than build it in house.

"My view is we should only invest in or own things that are strategically important to us, or that we would have operational challenges without. We have too much stuff that falls into the commoditised bracket."

In what might be welcome news for the retail technology ecosystem, NOTHS is now looking for products on the market where there is commoditisation. Albeit, there is not a bottomless pit for investment.

Lake talks of the need for products within a retail organisation's tech stack to contribute to strategic and operational performance. With so much built in house, NOTHS finds itself with components that are no longer contributing to either and are "holding us back". It's a typical retail tale of legacy system entanglement.

"Everything is owned and maintained, so my focus is on identifying what's now been commoditised and what other people have done a better job of building, and we can then think about what we can chop away at. After all, we're not a tier one tech company."

NOTHS has already started its journey of modernisation under Lake's stewardship.
The marketplace has migrated promotional capabilities to a third-party engine platform, Talon.One.

"Although pretty simplistic in approach compared to most businesses, it represents the first time we've gone out and bought a capability and integrated it in a composable, MACH tech way," Lake says.

"It's a fundamental shift in thinking internally for the engineering and product teams. We deprecated and removed the old promo engine, which, surprise, surprise, we had built. It did one thing, and we had the age-old problem that you never come back to it; you go on to the next priority and it becomes a problem for people."

This change will support the running of campaigns, but is also set to be a capability utilised as NOTHS explores its options around building a loyalty proposition.

"This takes a number of things the tech team shouldn't need to be involved in off their plate, so we can focus on the investments we want to make," Lake adds.

With e-commerce stack technology the most commoditised area of retail tech, according to Lake, there's lots of focus on what to bring into the NOTHS business in this area.

"We're headless already, but some better decisions probably could have been made. You should own the user experience, as it can contribute to strategic differentiation.

"What we hadn't done in the move to headless was consider the service or integration layers just under that, so we built a load of microservices, some with thin veneers into the monolithic platform."
"We hadn't thought about how to take off parts we shouldn't really own, which can be a distraction, and they take time with maintenance on bugs."

NOTHS is using Contentstack as its headless content management system, but a stream of work currently well under way with Kin + Carta and Valtech is focused on better optimising the digital experience.

Lake says the NOTHS search and discovery process starts with its brand partners putting product data in, and this is an area where improvements are sought.

"For trade reasons, we focused on the very outer edge of search and discovery and how results are ranked and reranked, and we're using Google Vertex AI," he adds. "Search went live last year and there have been marked improvements there. We're doing tests on browse currently.

"We have circa 450,000 products on the platform, and surfacing the most relevant of those is a big challenge, and we have built a load of tech that doesn't really lean into surfacing the most relevant thing."

That is being addressed using Google Vertex, and the work with Kin + Carta involves improving data quality and product information management processes so NOTHS can augment the effects of the AI.

In terms of AI strategy, a lot will depend on finding the most suitable partners.

"A lot of the third-party companies we might buy into will be bringing AI to us because they are integrating it into their products, and that's great," Lake says. "That's the benefit you find yourself in as a D2C or online business. You can see the pressure on fellow CTOs working for SaaS businesses, because there is a race to market and there will be a number of misses, but we can benefit from that."

Lake admits NOTHS was looking at how to use AI for search and discovery, but then Google Vertex came along.
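The relevance problem Lake describes, surfacing the best handful of a 450,000-product catalogue for a query, is what NOTHS delegates to Google Vertex AI. As a rough sketch of the shape of that problem only (not of the Vertex service, and with an invented mini-catalogue), a keyword-overlap ranker looks like this:

```python
# Illustrative toy ranker: score each product title by keyword overlap
# with the query, then return the top-k. A production system like the
# Vertex-backed one described above uses learned semantic relevance,
# not word overlap; this sketch only frames the problem.

def score(query: str, title: str) -> int:
    q = set(query.lower().split())
    t = set(title.lower().split())
    return len(q & t)  # number of shared words

catalogue = [
    "personalised silver necklace",
    "handmade oak chopping board",
    "personalised birthday card",
]

def top_results(query: str, products: list[str], k: int = 2) -> list[str]:
    return sorted(products, key=lambda p: score(query, p), reverse=True)[:k]

results = top_results("personalised necklace", catalogue)
```

The quality ceiling of any such ranker is the product data fed into it, which is why the Kin + Carta work on data quality and product information management sits alongside the AI itself.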
He predicts this type of situation will continue to happen for a while as the AI hype and focus continues.

"Once we have solved some problems and operational issues, and removed friction for partners and internally, we can think about how to utilise AI for something that is really interesting," he says.

Lake describes his team as a lean 40-45 people covering tech and product, and says his leadership style follows a teach-a-man-to-fish mentality.

"It's no good me steaming in and saying, 'Cut that out, remove this, and go and buy this', as it won't build the sustainability in the approach we need," he says, adding that the team is realising this new working method is aimed at making their lives easier as much as it is part of a method for driving the business forward.

The team covers IT infrastructure, cyber security and support, with delivery managers and an engineering team overseeing online, back-end and front-end, and mobile work across iOS and Android. There are members of the team focused on data analytics and data science, and others looking after platform infrastructure and product management.

"Good people get bought into the culture," Lake adds.

It is their job to ensure the tech serves the five to six million customers NOTHS has in the UK, but under Lake's leadership, they are also increasingly focused on making the lives of its circa 5,000 marketplace sellers, some of which have started their journeys with Deliveroo this autumn, easier and more fruitful.

Read more about retail technology:
Retail tech leaders from Sweaty Betty and Selfridges discuss levers to pull and tactics to adopt when influencing the boardroom on transformation projects.
New tech from supply chain companies, M&A activity among logistics firms, and H&M's new fees show product returns is the retail industry debate that keeps coming back.