


Computer Weekly is the leading technology magazine and website for IT professionals in the UK, Europe and Asia-Pacific
Recent Updates
Auditor: 2026 till Birmingham recovers from botched Oracle project
www.computerweekly.com

Grant Thornton has issued a forensic 66-page report on the failed Oracle Fusion enterprise resource planning (ERP) implementation at Birmingham City Council, which has left the council without a functioning finance system until at least 2026.

Among the impacts of the failed Oracle implementation, the auditors note that its overall cost, plus the investment necessary to put it right, will be at least £90m in excess of its original budget. The reimplementation of the system is not expected to be complete until next year, or beyond.

When the council's Oracle implementation steering committee took the decision to go live in April 2022, say the auditors from Grant Thornton, "the level of risk inherent in the Oracle solution was not properly understood. This resulted in the implementation failing at a significant cost to the council, contributing to a breakdown of financial control such that it has been unable to adequately control its finances throughout 2022/23, 2023/24 and into 2024/25."

"The governance and programme management for the Oracle programme had fundamental weaknesses that were never effectively remedied and were further exposed by high turnover of staff in both senior and operational roles," they said.

Birmingham City Council implemented an SAP ERP system, ECC6, in 1999, for finance, procurement, HR and payroll purposes. Over the years, it was customised to fit the organisation's business processes. In 2006, Capita was engaged to provide council services, including the hosting of the SAP instance.

Then, in 2015, came S/4 Hana, SAP's ERP system based on its reportedly high-speed, columnar, in-memory database, Hana. SAP also announced end of support for the older ERP by 2025, later extended to 2027.

Like many other SAP customers, Birmingham had a choice to make. It opted to switch to cloud-based Oracle Fusion, following advice from Socitm Advisory, and announced its decision in July 2019.

It also announced programme support contracts for Insight UK Ltd, in partnership with Evosys, for systems integration; Socitm Advisory, for programme and change management; and Egress, for data migration services.

Read more about the failed Oracle implementation at Birmingham City Council:
- Birmingham City Council's Oracle system, the biggest of its kind in Europe, went live in April 2022, resulting in a catastrophic IT failure. Cliff Saran investigates.
- Read how the council swapped out a heavily customised SAP ERP system for Oracle Cloud, but after going live had numerous technical challenges. Cliff Saran explains what went wrong with Birmingham City Council's Oracle implementation.
- Birmingham sets aside £25m for Oracle transformation.

Ameo Professional Services was appointed to provide programme management and assurance to the council from January 2021. The effectiveness of Ameo and the other suppliers is the subject of another report commissioned by the council, Grant Thornton says. Its findings are legally privileged, according to the auditor, and are not included in the 11 February 2025 report.

As previously reported in Computer Weekly, the financial business case for the upgrade showed that the SAP system, run by Capita, was costing the council £5.1m per year. Over the nine years between 2022/23 and 2031/32, this would amount to £46m.

For the same period, the Oracle system was meant to save £563k in 2022/23 and £788k in 2023/24, with the savings accumulating over the nine years to £10.9m.

In August 2019, the council ended its contract with Capita and the majority of IT services were transitioned to an in-house team, including 300-plus Capita staff, most of whom were not Oracle specialists. Building in-house Oracle capability proved to be very challenging, according to the report.

As previously reported in Computer Weekly, after the go-live in 2022, the Oracle system required manual remediation to fix accounting issues. The council's 2024 financial report showed a budget of £5.3m for continued support and manual intervention on the Oracle system.

The report's authors do note that the failure of the Oracle Fusion implementation "has been a contributory factor to the council's financial position rather than being a fundamental factor". Birmingham City Council effectively went bankrupt in 2023.

The report also notes that the implementation programme failed to adhere to its design principle of adopting Oracle-standard functionality, instead choosing to adapt the system to align with the council's existing processes.

Grant Thornton also found that the council did not focus sufficiently on the business and culture change required to prepare the organisation for the Oracle system. End users were unprepared and unequipped to use the system, it claims.

It also notes that "the culture of the council appears to be one where either bad news was not welcome or officers felt uncomfortable to communicate bad news. We have reported previously on the high level of turnover of senior officers at the council. If the council is to succeed with other major projects, including the current ERP project, and avoid similar issues to the failed ERP implementation, then it will need to carefully consider how it can change its culture to one of openness, mutual support and transparency."

The council will hold a meeting on 11 March 2025 to consider the Grant Thornton report, which draws lessons from the botched implementation. These include learning points with respect to governance and oversight, programme management, solution design, business change and culture.
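As a rough sanity check, the business-case arithmetic quoted above can be reproduced in a few lines (a sketch using only the figures reported here; the year-by-year savings profile beyond 2023/24 was not published, so only the SAP total can be checked directly):

```python
# Back-of-the-envelope check of the reported business-case figures.
# Only numbers quoted in the article are used; nothing else is assumed.
sap_annual_cost_m = 5.1      # £m per year for the Capita-run SAP system
years = 9                    # 2022/23 to 2031/32 inclusive

print(f"SAP running cost over {years} years: £{sap_annual_cost_m * years:.1f}m")
# -> £45.9m, matching the roughly £46m cited in the business case

oracle_savings_k = {"2022/23": 563, "2023/24": 788}   # £k, first two years only
print(f"First two years of projected Oracle savings: "
      f"£{sum(oracle_savings_k.values())}k of the £10.9m projected over nine years")
```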
Apple withdraws encrypted iCloud storage from UK after government demands back door access
www.computerweekly.com
After the Home Office issued a secret order for Apple to open up a back door in its encrypted storage, the tech company has instead chosen to withdraw the service from the UK.
By Bill Goodwin, Computer Weekly. Published: 21 Feb 2025

Apple has withdrawn encrypted cloud storage for its UK smartphone and computer users following a secret government order requiring the company to provide back-door access to encrypted data.

The tech firm confirmed it will no longer offer UK users its Advanced Data Protection (ADP) service, which allows users to store data in encrypted form on Apple's iCloud service.

The decision is likely to expose people in the UK using Apple services to greater risk of cyber threat, as they will no longer be able to fully encrypt their personal data on Apple's iCloud, though the service will remain available elsewhere in the world.

The move by Apple is designed to head off demands by the Home Office requiring Apple to provide a back door to give law enforcement and other government agencies access to encrypted data stored by any of its customers worldwide.

Demands by the Home Office to access encrypted data belonging to Apple users throughout the world caused ructions in the US, where members of Congress accused the UK of "a foreign cyber attack waged through political means" and led calls for the UK to be thrown out of the Five Eyes intelligence-sharing network.

"As we have said many times before, we have never built a backdoor or master key to any of our products or services and we never will," Apple said in a statement.

"Apple can no longer offer Advanced Data Protection (ADP) in the United Kingdom to new users, and current UK users will eventually need to disable this security feature. ADP protects iCloud data with end-to-end encryption, which means the data can only be decrypted by the user who owns it, and only on their trusted devices."

The company said securing cloud storage through encryption was more urgent than ever, given the growing number of security and data breaches.

"We are gravely disappointed that the protections provided by ADP will not be available to our customers in the UK given the continuing rise of data breaches and other threats to customer privacy," Apple added.

"Enhancing the security of cloud storage with end-to-end encryption is more urgent than ever before. Apple remains committed to offering our users the highest level of security for their personal data and are hopeful that we will be able to do so in the future in the United Kingdom."

Users in the UK who have not already enabled ADP will no longer be able to do so, Apple confirmed.

Apple's decision means the iCloud data categories covered by ADP will revert to standard data protection, and UK users will no longer be able to benefit from end-to-end encryption for these categories: iCloud Backup; iCloud Drive; Photos; Notes; Reminders; Safari Bookmarks; Siri Shortcuts; Voice Memos; Wallet Passes; and Freeform.

Withdrawing ADP from the UK will not affect the 14 iCloud data categories that are end-to-end encrypted by default. Data such as iCloud Keychain and Health remain protected with full end-to-end encryption.

Apple said communication services like iMessage and FaceTime remain end-to-end encrypted globally, including in the UK.

For users in the UK who have already enabled ADP, Apple said it will provide additional guidance. Apple cannot disable ADP automatically for these users; instead, UK users will be given a period of time to disable the feature themselves to keep using their iCloud account.

ADP continues to be available everywhere else in the world.

Matthew Hodgson, CEO of Element, a secure communications platform used by governments, said it is no surprise to see Apple switch off end-to-end encryption for iCloud in the UK.

"[Apple] had no choice. You cannot offer a secure service and then backdoor it, because it's no longer a secure service," he said.

"According to Element research, 83% of UK citizens want the highest level of security and privacy possible, yet the UK government has just put Apple's UK customers' data at risk," added Hodgson.

"It is impossible to have a safe backdoor into an encrypted system. Time and again it has been proven that any such point of entry is exploited by bad actors," he said.

"Salt Typhoon is the current and obvious example, which has seen law enforcement backdoors in the US public telephone network being hijacked by a cyber attack group believed to be operated by the Chinese government. The US is urging its citizens to use end-to-end encrypted services. Simultaneously, we're witnessing the UK undermining end-to-end encryption, a key part of the nation's cyber security."

Earlier this month, more than 100 cyber security experts, companies and civil society groups signed a letter calling for home secretary Yvette Cooper to drop demands for Apple to create a backdoor into its encrypted iCloud service.

The experts warned that the UK's move to create a backdoor into people's personal data "jeopardises the security and privacy of millions of people, undermines the UK tech sector and sets a dangerous precedent for global cyber security".
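The mechanics ADP describes, with data encrypted on the device so the service provider holds only ciphertext, can be illustrated with a minimal client-side encryption sketch. This assumes the third-party `cryptography` package and is purely illustrative: it is not Apple's implementation, and real systems add key distribution across trusted devices, recovery contacts and key rotation.

```python
# Minimal sketch of client-side ("end-to-end") encryption in Python.
# Assumes: pip install cryptography. Illustrative only - not Apple's ADP design.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# The key is generated and kept on the user's device; the cloud service never sees it.
device_key = AESGCM.generate_key(bit_length=256)

def encrypt_for_upload(plaintext: bytes) -> bytes:
    """Encrypt locally before upload; the server stores only opaque ciphertext."""
    nonce = os.urandom(12)                        # unique per object
    ciphertext = AESGCM(device_key).encrypt(nonce, plaintext, None)
    return nonce + ciphertext

def decrypt_after_download(blob: bytes) -> bytes:
    """Only a device holding device_key can recover the plaintext."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(device_key).decrypt(nonce, ciphertext, None)

stored = encrypt_for_upload(b"private note")
assert decrypt_after_download(stored) == b"private note"
```

The design point the dispute turns on is visible in the sketch: a provider that stores only ciphertext cannot hand over plaintext without changing the client to leak or escrow the key, which is why Apple's options were to weaken the feature or withdraw it.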
Understanding intersectionality: Inclusion and employees' whole life experience
www.computerweekly.com

The reason diversity is so important for tech teams is that differences in opinion and life experience can lead to more innovative ideas, as well as ensure technology is developed with features that better reflect the needs of its user base.

For those campaigning for diversity and inclusion in the technology industry, the past decade has focused on helping businesses understand the benefit of encouraging underrepresented groups into tech.

But the result has been diversity and inclusion initiatives focused only on hiring a specific group of people, such as women or people of colour, overlooking how an overlap of these characteristics can affect their experience in the technology sector.

During a panel at the 2024 Computer Weekly and Harvey Nash Diversity in Tech event, experts discussed why it's important to consider a person's whole experience when trying to develop an inclusive and equitable tech culture, acknowledging the intersectional nature of many in the industry, and how that plays a part in forming their perspective and approach to tech.

Merriam-Webster's dictionary defines intersectionality as "the complex, cumulative way in which the effects of multiple forms of discrimination (such as racism, sexism and classism) combine, overlap or intersect, especially in the experiences of marginalised individuals or groups", an idea introduced by civil rights scholar Kimberlé Crenshaw.

As an example related to the IT sector: it's difficult to be a woman in tech, it's difficult to be a person of colour in tech, and it's even more difficult to be a woman of colour in tech.

Sonya Barlow, founder and director of diversity, inclusion and belonging at the Like Minded Females (LMF) network, explained: "It's really about what different experiences you have. In simple terms, is the fact that you have so many layers to you either going to help you or hinder you? Everyone here is intersectional, because we all have different layers. Diversity is about differences; intersectionality is the different layers that we bring to the table."

The business benefit of including these individuals in the tech workplace is no different to the benefits of increased diversity in tech overall: diverse teams better reflect technology users, and the more mixed a group is, the more likely it is to come up with different, and therefore more innovative, ideas.

As explained by Megan Goodwin, co-founder of The Vision: "If you create an environment which actually embraces and seeks challenge, and seeks change, and difference of opinion, that is only going to be positive for your firm. All the stats that everybody's given [during the Computer Weekly diversity event] are that the more diverse the leadership team, the more revenue it will generate."

The challenge comes with the inclusion piece: developing a culture where people can thrive no matter their background.

She continued: "How many companies seek out very different opinions when they're making massive strategic decisions? How many businesses really incentivise people to have a different view and to put their hand up?

"The quietest people in the room are the people who are probably the most unrepresented. How do you change that? I think that you need a culture shift of 'difference is good'."

Without this cultural shift, the working world is even harder for underrepresented individuals.

Barlow used some of her own overlapping characteristics as an example of intersectionality, highlighting that she's of Pakistani heritage, is a British Asian, has ADHD and experiences chronic migraines.

"I not only face issues being a woman," she said. "Turns out, I face issues being a brown woman, then I'm a loud brown woman, which no one really likes. You know what I mean? I'm ambitious on top of that, and then on top of that, turns out I didn't know I had ADHD. I didn't even know I had chronic migraines."

Barlow also highlighted that her life and work experience will be different to those of others with different characteristics, and this is also true of what people need out of technology: different people will need different things depending on their experience.

But in the workplace, the more characteristics you have, the more difficult things become to navigate, clarified Gill Cooke, inclusion, equity and diversity consultant, associate, advisor and trainer.

"The more different identities that you identify with, the more likely you are to have additional challenges, additional obstacles, probably additional discrimination, abuse, harassment, etc," she said. "So, actually, the scales are really weighed against you. And really what we want to talk about is: how do we recognise that and open the doors to more people?"

Whole self, or work persona?

A term often heard in the diversity and inclusion space relates to the concept of bringing your whole self to work, and how building an inclusive culture should help people to achieve this, especially if they are from an underrepresented group.

For someone with neurodivergence, such as autism or ADHD, this can be quite helpful. As highlighted by Goodwin, many people with neurodivergence end up masking at work, sometimes leaving them feeling alienated and exhausted.

"There are all sorts of mental health aspects towards people masking at work," she said.

But while being embraced for who you are can add to a sense of belonging in the workplace, there is still a line to be drawn, said Cooke.

The workplace is still just that, so there needs to be a certain level of appropriateness when it comes to bringing your whole self to work, she explained.

"Actually, we don't always want everyone's authentic self at work, in the nicest possible way, because some people might come and say, 'Well, I'm a racist, I'm homophobic. This is me, take it or leave it. I'm being my authentic self.'"

Instead, the goal should be creating an environment where everyone is set up to be as successful as they possibly can be.

Recognition was the first piece of advice the panel gave to ensure underrepresented groups are supported and included: acknowledging that someone has challenges is one step towards building a better environment for them.

Many elements of difficulty, such as neurodivergence, can be invisible, so it's unhelpful to make assumptions about the challenges people are facing.

Next, the panel explained that adaptations need to be made, and that's not always a one-size-fits-all situation.

Tab Ahmed, founder and CEO of EmployAbility, told the audience she often hears excuses from employers when talking about disability, such as: "Oh, there's nobody of disability who works here because we can't see them."

"Okay, they're not in a wheelchair," she said. "That's true, but that's 5% of people of disability who might be in a wheelchair. Just because you can't see it, doesn't mean it doesn't exist. Or the other thing I get is, 'It's okay, we have a ramp.' That's great for somebody in a wheelchair. It doesn't help ADHD so much."

But the problem with non-visible differences, especially disabilities, is the challenge of disclosure. Ahmed urged businesses to ensure people have a safe and clear way to ask for help if they need it.

"One of the really key important things is: is there a safe, robust accommodations process in place that is well signposted, that people can go to and be comfortable to engage with, knowing that their privacy is going to be protected and that information is only going to be shared with people that need to understand that information to provide the correct accommodations they might need in the workplace?" she said.

While there isn't a single solution that will suit everyone, Ahmed pointed out that catering for those with disability and neurodivergence means businesses usually get it right for the other diversity strands as well: "Because I think sometimes disability and neurodivergence is one of the most complex strands to actually address."

Advice for supporting intersectionality

The panel's key advice for creating a supportive and inclusive work environment:
- Recognise that some individuals may have difficulties in the workplace because of their background, which may be exacerbated if they have overlapping characteristics.
- Don't make assumptions about what people need.
- Create an environment where people feel able to ask for help, and make it clear how they can do so.
- No one solution is going to work for everyone, so communicate with individuals to see what will help them: collaboration is key.
- Allow others access to the same accommodations, as it may unexpectedly end up making others' work lives better, too.
- Educate employees about differences in working patterns and accommodations: understanding breeds recognition and acceptance.
- Focus on inclusion, and the rest will follow more easily.

Education was also mentioned as a key tool for ensuring colleagues are supported in the workplace: ensuring employees are informed about differences in working patterns, and the reasons behind them, helps everyone to be more understanding, flexible and supportive, and in the end may benefit everyone.

In fact, Cooke claimed both a holistic and a specific approach is best, suggesting firms make reasonable adjustments for those who need them, then make those adjustments available to everyone.

She also pointed out that many are so focused on getting diverse candidates into the tech workforce that they forget to focus on implementing the inclusive culture needed to keep them, and an overall flexible and supportive culture will be beneficial regardless of whether someone is from an underrepresented group or not.

"I think inclusion is more important than diversity," said Cooke. "Inclusion creates diversity. I think in the past, people have brought people in, but then people who do have extra needs, or are maybe a little bit loud, or a little bit this, or a little bit that, don't fit in, and therefore they leave.

"If you want people to stay, start with inclusion, start with the inside, and then other people will want to come to the party," she added.

At the end of the day, Barlow claimed, it comes down to common sense. "If you just take it a step back, it really is about being empathetic," she said. "Would you like it if you were in that situation? Ask people what they want; ask people how they like to work."
UK police forces 'supercharging racism' with predictive policing
www.computerweekly.com

UK police forces are "supercharging racism" through their use of automated predictive policing systems, as they are based on profiling people or groups before they have committed a crime, according to Amnesty International.

Predictive policing systems use artificial intelligence (AI) and algorithms to predict, profile or assess the likelihood of criminal behaviour, either in specific individuals or geographic locations.

In a 120-page report published on 20 February 2025, titled Automated racism: How police data and algorithms code discrimination into policing, Amnesty said predictive policing tools are used to repeatedly target poor and racialised communities, as these groups have historically been over-policed and are therefore massively over-represented in police data sets.

This then creates a negative feedback loop, where these so-called predictions lead to further over-policing of certain groups and areas, reinforcing and exacerbating the pre-existing discrimination as increasing amounts of data are collected.

"Given that stop-and-search and intelligence data will contain bias against these communities and areas, it is highly likely that the predicted output will represent and repeat that same discrimination. Predicted outputs lead to further stop-and-search and criminal consequences, which will contribute to future predictions," it said. "This is the feedback loop of discrimination."

Amnesty found that across the UK, at least 33 police forces have deployed predictive policing tools, with 32 of these using geographic crime prediction systems, compared with 11 that are using people-focused crime prediction tools.

It said these tools are in "flagrant" breach of the UK's national and international human rights obligations, because they are being used to racially profile people, undermine the presumption of innocence by targeting people before they've even been involved in a crime, and fuel indiscriminate mass surveillance of entire areas and communities.

The human rights group added that the increasing use of these tools also creates a chilling effect, as people tend to avoid areas or people they know are being targeted by predictive policing, further undermining people's right to association.

Examples of predictive policing tools cited in the report include the Metropolitan Police's gangs violence matrix, which was used to assign risk scores to individuals before it was gutted by the force over its racist impacts; and Greater Manchester Police's XCalibre database, which has similarly been used to profile people based on the perception that they are involved in gang activity, without any evidence of actual offending.

Amnesty also highlighted Essex Police's Knife Crime and Violence Models, which uses data on "associates" to criminalise people by association with others, and uses mental health problems or drug use as markers for criminality; and West Midlands Police's hotspot policing tools, which the force itself has admitted are used for error-prone predictive crime mapping that is wrong 80% of the time.

"The use of predictive policing tools violates human rights. The evidence that this technology keeps us safe just isn't there; the evidence that it violates our fundamental rights is clear as day. We are all much more than computer-generated risk scores," said Sacha Deshmukh, chief executive at Amnesty International UK, adding that these systems decide who is a criminal based purely on the colour of their skin or their socio-economic background.

"These tools to 'predict crime' harm us all by treating entire communities as potential criminals, making society more racist and unfair. The UK government must prohibit the use of these technologies across England and Wales, as should the devolved governments in Scotland and Northern Ireland."

He added that the people and communities subject to this automated profiling have a right to know how the tools are being used, and must have meaningful routes of redress to challenge any policing decisions made using them.

On top of a prohibition on such systems, Amnesty is also calling for greater transparency around the data-driven systems police have in use, including a publicly accessible register with details of the tools, as well as accountability obligations that include a right and clear forum to challenge police profiling and automated decision-making.

In an interview with Amnesty, Daragh Murray, a senior lecturer at Queen Mary University of London School of Law who co-wrote the first independent report on the Met Police's use of live facial-recognition (LFR) technology in 2019, said that because these systems are based on correlation rather than causation, they are particularly harmful and inaccurate when used to target individuals.

"Essentially you're stereotyping people, and you're mainstreaming stereotyping; you're giving a scientific objective to stereotyping," he said.

Computer Weekly contacted the Home Office about the Amnesty report but received no on-the-record response. Computer Weekly also contacted the National Police Chiefs' Council (NPCC), which leads on the use of AI and algorithms by UK police.

"Policing uses a wide range of data to help inform its response to tackling and preventing crime, maximising the use of finite resources. As the public would expect, this can include concentrating resources in areas with the most reported crime," said an NPCC spokesperson.

"Hotspot policing and visible targeted patrols are the bedrock of community policing, and effective deterrents in detecting and preventing anti-social behaviour and serious violent crime, as well as improving feelings of safety."

They added that the NPCC is working to improve the quality and consistency of its data to better inform its response, ensuring that all information and new technology is held and developed lawfully and ethically, in line with the Data Ethics Authorised Professional Practice (APP).

"It is our responsibility as leaders to ensure that we balance tackling crime with building trust and confidence in our communities, whilst recognising the detrimental impact that tools such as stop and search can have, particularly on black people," they said.

"The Police Race Action Plan is the most significant commitment ever by policing in England and Wales to tackle racial bias in its policies and practices, including an 'explain or reform' approach to any disproportionality in police powers.

"The national plan is working with local forces and driving improvements in a broad range of police powers, from stop and search and the use of Taser through to officer deployments and road traffic stops. The plan also contains a specific action around data ethics, which has directly informed the consultation and equality impact assessment for the new APP."

Problems with predictive policing have been highlighted to UK and European authorities using the tools for a number of years.

In July 2024, for example, a coalition of civil society groups called on the then-incoming Labour government to place an outright ban on both predictive policing and biometric surveillance in the UK, on the basis that they are disproportionately used to target racialised, working-class and migrant communities.

In the European Union (EU), the bloc's AI Act has banned the use of predictive policing systems that can be used to target individuals for profiling or risk assessments, but the ban is only partial, as it does not extend to place-based predictive policing tools.

According to a 161-page report published in April 2022 by the two MEPs jointly in charge of overseeing and amending the AI Act, "predictive policing violates human dignity and the presumption of innocence, and it holds a particular risk of discrimination. It is therefore inserted among the prohibited practices."

According to Griff Ferris, then legal and policy officer at non-governmental organisation Fair Trials: "Time and time again, we've seen how the use of these systems exacerbates and reinforces discriminatory police and criminal justice action, feeds systemic inequality in society, and ultimately destroys people's lives. However, the ban must also extend to include predictive policing systems that target areas or locations, which have the same effect."

A month before, in March 2022, Fair Trials, European Digital Rights (EDRi) and 43 other civil society organisations collectively called on European lawmakers to ban AI-powered predictive policing systems, arguing that they disproportionately target the most marginalised people in society, infringe fundamental rights and reinforce structural discrimination.

That same month, following its formal inquiry into the use of algorithmic tools by UK police, including facial recognition and various crime "prediction" tools, the Lords Home Affairs and Justice Committee (HAJC) described the situation as "a new Wild West", characterised by a lack of strategy, accountability and transparency from the top down. It said an overhaul of how police deploy AI and algorithmic technologies is required to prevent further abuse.

In the case of predictive policing technologies, the HAJC noted their tendency to produce a "vicious circle" and "entrench pre-existing patterns of discrimination", because they direct police patrols to low-income, already over-policed areas based on historic arrest data.

"Due to increased police presence, it is likely that a higher proportion of the crimes committed in those areas will be detected than in those areas which are not over-policed. The data will reflect this increased detection rate as an increased crime rate, which will be fed into the tool and embed itself into the next set of predictions," it said.

However, in July 2022, the UK government largely rejected the findings and recommendations of the Lords inquiry, claiming there is already "a comprehensive network of checks and balances".

The government said at the time that while MPs set the legal framework providing police with their powers and duties, it is then for the police themselves to determine how best to use new technologies such as AI and predictive modelling to protect the public.

Read more about police technology:
- Met Police challenged on claim LFR is supported by majority of Lewisham residents: A community impact assessment for the Met Police's deployment of live facial-recognition tech in Lewisham brings into question the force's previous claims to Computer Weekly that its use of the technology is supported by the majority of residents.
- Automated police tech contributes to UK structural racism problem: Civil society groups say automated policing technologies are helping to fuel the disparities that people of colour face across the criminal justice sector, as part of a wider warning about the UK's lack of progress in dealing with systemic racism.
- Campaigners criticise Starmer post-riot public surveillance plans: A UK government programme to expand police facial recognition and information sharing after racist riots is attracting criticism from campaigners for exploiting the far-right unrest to generally crack down on protest and increase surveillance.
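The "vicious circle" described by Amnesty and the HAJC can be made concrete with a toy simulation (all numbers are invented for illustration; this models the mechanism, not any real force's tool):

```python
# Toy model of the predictive-policing feedback loop described above.
# Two areas have IDENTICAL underlying offending (which therefore cancels out);
# area 0 simply starts slightly over-policed. Recorded crime rises with police
# presence (here slightly superlinearly, standing in for stop-and-search and
# intelligence records), and next year's patrols are allocated from last
# year's recorded data - i.e. the "prediction".
share = [0.6, 0.4]                       # initial patrol allocation
for year in range(1, 11):
    recorded = [s ** 1.2 for s in share]           # presence drives recording
    total = sum(recorded)
    share = [r / total for r in recorded]          # data-driven reallocation
    print(f"year {year:2d}: patrol share area0={share[0]:.2f}, area1={share[1]:.2f}")
```

Even though both areas offend at the same rate, the initially over-policed area's share of patrols grows year on year, because the tool is fed detection data rather than offending data.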
A landscape forever altered? The LockBit takedown, one year on
www.computerweekly.com

Wednesday 19 February 2025 marked the first anniversary of Operation Cronos, a multinational cyber law enforcement action led by the UK's National Crime Agency (NCA) with support from global partners, which disrupted the activities of the notorious LockBit ransomware crew in a targeted operation that has had a lasting impact on the ransomware economy.

Looking back to 2023 and earlier, LockBit's effectiveness and influence cannot be overstated. According to the Counter Threat Unit (CTU) at Secureworks, now part of Sophos, LockBit attacks accounted for 25% of all listed victims on ransomware leak sites in 2023, with its closest competitor at the time, ALPHV/BlackCat, managing just 12%.

LockBit targeted organisations across Britain and around the world, arguably its most prominent UK victim being the Royal Mail, which rejected an extortion attempt of more than £60m, described at the time as "absurd", after the crew's ransomware downed IT systems at its Heathrow Worldwide Distribution Centre in January 2023, paralysing international deliveries for weeks.

Reflecting on the success of Operation Cronos, Secureworks CTU senior researcher Tim Mitchell said: "When we coordinated our research alongside law enforcement's seizure of the LockBit leak site on February 19, 2024, we knew it was a significant moment in time in the fight against cyber criminals.

"It was the first step in a steady march of operations against ransomware, its enablers and cyber crime more broadly. And the most obvious result is the mark it's left on the landscape, with affiliates scattering to new schemes or turning to independent operations."

Paul Foster, head of the NCA's cyber crime unit, said that during the investigation leading up to Operation Cronos, he too had a distinct feeling that the agency was on the verge of a significant coup de grâce.

"We knew, in the context of ransomware, that we had a unique opportunity to have a significant impact on the threat," he told Computer Weekly. "How often do you get the opportunity to look at the threat landscape and say we can probably take out, if we do it well and right, 25% of that threat?"

Greg Linares, now principal threat analyst at US-based managed security platform provider Huntress, but at the time working elsewhere, also remembers the events of 19 February clearly. "We had a feeling this was going to occur," he said. "We had FBI contacts that we worked with. But we didn't know how big it was going to be. And it was a massive operation; it was really well executed."

But of course the immediate headline action was only part of the story, and much hinged on delivering the right blows at the right time to maximise the hoped-for results. Not to do so would risk things backfiring, or missing the opportunity altogether.

To get the best results, the NCA elected to significantly broaden the scope of its activity against the gang.

"We could have put everything into trying to find out who was behind LockBit and effectively take out the kingpin," said Foster. "Similarly, we could have gone down a technical route and taken down the leak site, their splash pages, etcetera. Alternatively, we could have gone out and hoovered up lots of people and tried to make arrests all around the world, and indict people and sanction people.

"Actually, it was a combination of all of those things delivered in a sequenced way, combined with a very clear public articulation of what we'd done, and that together delivered a significant disruptive effect."

Foster also singles out the work of the NCA's partners, security researchers, the wider private sector industry, and even the technology and national media, whose reporters swarmed the story in short order, for amplifying the impact of Operation Cronos, and even building on it.

"All of that together said that good quality cyber crime operations in the future need to be multifaceted, not linear. I think it was the multifaceted nature of Operation Cronos that was one of the key reasons as to why it was so successful."

After the first flush of excitement had faded and the news stories started to drop off the top of Google searches, Operation Cronos kept on keeping on, with the NCA and others, particularly its US partners, keeping up a constant drumbeat of anti-ransomware law enforcement activity.

Over the course of 2024, further announcements, indictments and even arrests were made against LockBit and its affiliates. Significantly, the investigators named and shamed LockBit ringleader LockBitSupp as a Russian national named Dmitry Khoroshev.

The authorities also proved long-suspected links between the gang and the Russian government, after demonstrating connections between LockBitSupp and Evil Corp's Maksim Yakubets, who likely had access to senior Kremlin officials through his father-in-law, an ex-intelligence man, and may even have received tasking from Russia's spy agencies.

Other operations also targeted leak site operators and the money launderers who helped the likes of LockBit wash their ill-gotten gains. Most recently, in February 2025, the British government announced sanctions against Russian infrastructure services provider Zservers and its UK representative, XHOST, the bulletproof hosting service that allegedly facilitated LockBit attacks against targets in the UK.

"The most obvious result is the mark it's left on the landscape, with affiliates scattering to new schemes or turning to independent operations," said Mitchell at Secureworks.

"With these disruptions to the status quo, it has added friction and increased the cost for the cyber criminals, which ultimately makes such operations more challenging to successfully execute. The more collaboration we see across the industry and with law enforcement, the harder it will become for cyber criminals to succeed."

Foster added: "Our overall assessment of the threat landscape for ransomware is that it has plateaued, but not decreased. That's good news though, because it was accelerating at some rate. In the run-up to our LockBit disruption, it was unequivocally true that the threat from ransomware was going up and up."

That ransomware attacks have levelled off in volume is not just a consequence of Operation Cronos, said Foster, but also a reflection of other operations conducted last year, and of significantly increased awareness of the ransomware threat in relevant stakeholder communities, which is to say, among CISOs and others empowered to take steps to address the threat.

However, said Foster: "I am concerned that it [the ransomware threat] will continue to rise in the future, and I think we would reasonably expect it to unless we can continue to maintain our disruptive impact and disruptive effect, which means more of these operations, fundamentally based on more joint collaboration and information sharing across law enforcement, with government partners [and] between the public and the private sector.

"This is never a one-organisation mission, without a doubt. It's everybody's challenge. And I think if we can keep that up, hopefully we can continue to suppress the threat."

Evidence gleaned through Secureworks telemetry at first glance supports the plateauing of ransomware attack volumes, but it also reveals that even though LockBit's demise did cause a slowdown in the wider landscape, December and January bucked the trend, with a 61% year-on-year increase in the number of victims listed on leak sites in December, and an 80% increase in January.

Also noteworthy is a significant increase in the number of operational gangs, said Mitchell. So, what is going on here?

The first months of the year invariably see the publication of multiple annual threat and ransomware reports from security suppliers, which usually say exactly the same thing, Mitchell explained, and in early 2025, most of them pointed to a fragmentation of the ecosystem, which tracks with the idea that many individuals associated with LockBit have scattered to the four winds.

"The increase through the year in the number of schemes operating is indicative of that fragmentation in the landscape. And it's important to remember that a victim is named on a leak site when they haven't paid a ransom, so an increase in victim numbers could mean that the number of victims paying is actually decreasing," he said.

On the flipside, Linares at Huntress said that, unfortunately, LockBit also proved more defiant and resilient than many had hoped.

"It's interesting how LockBit has handled this takedown. It [Operation Cronos] was very successful, but as we all know, LockBit hasn't gone away," he said. "They have put themselves back together again. They have reformed and stayed vigilant and persistent.

"That's a testament to how well they're able to perform their activities. Unlike many other groups, they are well organised and this is not their first rodeo."

He said other gangs, such as RansomHub, Play and even Cl0p, have all incorporated elements of LockBit's playbook into their own, and learned lessons from its downfall. One notable effect of this is likely a widely observed decrease in dwell times, the amount of time between when ransomware gangs first access a future victim's network and when they execute their attack.

"We've [also] seen groups even skip out of ransomware entirely now and just go for straight extortion, because they find out that they're getting caught at the level when they drop ransomware. LockBit has absolutely fuelled some of these trends," said Linares.

Foster at the NCA is sanguine about the fact that the LockBit gang, or people claiming to be associated with them, still pop up regularly, often trying to counter the NCA's narrative with their own viewpoints, and recently teasing a return to business and a new locker malware, LockBit 4.0.

"When we did this, we knew we would never be able to obliterate LockBit completely because, of course, there's legacy code; once something's online it is, to a degree, permanently so, and it's very easy for people to adopt the brand or try to find new things they could put out there. We accept the fact there will always be a bit of a legacy of LockBit floating around the system," he said.

"I think what's clear, though, is that whatever it is that's left of LockBit, through a cyber criminal lens, has got very little credibility, if any. That plays out in what we're seeing in the victim reporting, certainly in the UK.

"I understand LockBit recently launched its new version a couple of weeks ago. We're not seeing any effect from that. There hasn't been a known, reported LockBit attack in the UK for over four months now, and our international partners are seeing similar trends," he added.

This is not to say no LockBit ransomware attacks are taking place. Linares said that while the NCA operation damaged the credibility of LockBit and LockBitSupp, even now there's still some gang activity.

"We have seen them in government and hospitals, mostly," he said. "One thing that has happened post-takedown is they started only going after targets that were much larger, to help their credibility and also to help them recoup money and lost income."

A couple of weeks ago, he said, Huntress started to see evidence of the previously trailed version of LockBit 4.0, now dubbed LockBit Green (a LockBit 4.0 Black version may also be available, according to some sources), being used in the wild.

"We're starting to see some activity there. So, I believe while [Operation Cronos] helped discredit LockBitSupp, unfortunately they're still ransoming people," said Linares.

The jury is still out on whether or not LockBit 4.0 is a severe threat, but Secureworks' Mitchell said we must remember the wider ransomware threat has not gone away.

"Far from it," he said. "Although the impact of such attacks on individual victims might be reduced, experiencing a ransomware incident is still a very bad day in the office.

"Organisations should be prioritising the basics, including regularly patching internet-facing devices, implementing phishing-resistant multi-factor authentication [MFA] as part of a conditional access policy, and monitoring the network and endpoints for malicious activity.

"Organisations should also have an incident response plan in place, battle-tested regularly to ensure they're prepared to respond to a cyber attack with speed and precision," he said.

Foster also urges defenders to prioritise their own cyber resilience and ransomware response plans, describing law enforcement as merely one weapon in the fightback, albeit a very important one.

"We will never not keep an eye on LockBit, that would be naïve, but there are other ransomware strains that my team and I are far more concerned about at the moment," he concludes.

Read more about ransomware:
- A ban on ransomware payments by UK government departments will be extended to cover organisations such as local councils, schools and the NHS, should new government proposals move forward.
- NCA-led Operation Destabilise disrupts Russian crime networks that funded the drugs and firearms trade in the UK, helped Russian oligarchs duck sanctions, and laundered money stolen from the NHS and others by ransomware gangs.
- An individual associated with the LockBit ransomware gang has broken cover to tease details of a new phase of the cyber criminal operation's activity, which they claim is set to begin in February 2025.
Microsoft overcomes quantum barrier with new particle
www.computerweekly.com
It has taken 20 years of development, but researchers now have a device that can scale to millions of qubits without errors rising exponentially.
By Cliff Saran, Managing Editor. Published: 20 Feb 2025

Microsoft has published the culmination of 20 years of research into subatomic particles, known as Majorana fermions, which it aims to use to build a million-qubit quantum computer.

The research has involved developing topological qubits, which Microsoft researchers anticipated would offer more stable qubits, requiring less error correction. A research paper on the properties of these particles notes that Majorana fermions have a mathematical quirk which suggests that if fermions and anti-fermions are indistinguishable, they may be able to coexist without annihilating one another.

In a YouTube video discussing the research, Microsoft technical fellow Matthias Troyer said: "Majorana's theory showed that mathematically it's possible to have a particle that is its own antiparticle. That means you can take two of these particles and you bring them together, and they could annihilate and there's nothing left. Or you could take two particles and you bring them together and you have two particles."

This offers a way to encode the "nothing" state, when the fermion and anti-fermion annihilate each other, as a binary 0, and the state when they both exist as a binary 1.

Microsoft technical fellow Krysta Svore said Microsoft has succeeded in designing a chip, called Majorana 1, that is able to measure the presence of the Majorana fermion particles. "Majorana allows us to create a topological qubit," she said, where the qubit is reliable, small and controllable.

The nature of the Majorana particles means they hide quantum information, making it more robust, but also harder to measure. Microsoft developed a new measurement approach that it claims is so precise it can detect the difference between one billion and one billion and one electrons in a superconducting wire, which is used to determine the state of the qubit for quantum computation.

Read more quantum computing articles:
- Quantum computing in cyber security, a double-edged sword: Scepticism still abounds, but quantum computing stocks have boomed this year. In the world of cyber, however, quantum brings both unprecedented capabilities and significant threats, demanding careful attention.
- Quantum datacentre deployments, and how they are supporting evolving compute projects: Quantum datacentre deployments are emerging worldwide, so what are they and where are the benefits?

According to Svore, the approach Microsoft has taken gets around the noise problem that leads to errors in qubits, which in turn results in error-prone quantum computers.

"Now that we have these topological qubits, we're able to build an entirely new quantum architecture, the topological core, which can scale to a million topological qubits on a tiny chip," she said.

Svore said that each atom in this chip is placed purposefully. "It is constructed from the ground up," she added. "It is entirely a new state of matter. Think of us as building the picture by painting it atom by atom."

The processors used to power computers traditionally use electrons. "We don't use electrons for compute," said Svore. "We use Majoranas."

Majorana 1 is Microsoft's new quantum chip, combining both qubits and the surrounding control electronics.

Along with the control logic, the Microsoft approach to quantum computing requires a dilution refrigerator that keeps qubits at temperatures much colder than outer space. Microsoft has also developed a software stack, which is needed to enable applications to take advantage of Microsoft's quantum computing.

The Majorana 1 device can be held in the palm of a hand, and fits neatly into a quantum computer that can be easily deployed inside Azure datacentres. "The way the system that we are constructing works is you have the quantum accelerator," said Microsoft vice-president Zulfi Alam. "You have a classical machine that works with it and controls it. And then you have the application that essentially goes between classical and quantum depending on which problem it's trying to solve."

Once the computations are completed, the results are re-synthesised on the classical computational machine, where they are surfaced as an answer to the problem.

The researchers at Microsoft are confident the approach they have taken with Majorana 1 will be able to scale, something that has so far hindered the progress of quantum computing due to the error-prone nature of scaling logical qubits. Microsoft's topological qubit architecture uses aluminium nanowires joined together in an H shape. Each H has four controllable Majoranas that are combined into one qubit. The Hs can also be connected across the chip.

"It's complex in that we had to show a new state of matter to get there, but after that, it's fairly simple," said Svore. "It tiles out. You have this much simpler architecture that promises a much faster path to scale."
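The "mathematical quirk" Troyer refers to is standard in the physics literature and can be stated compactly (a textbook sketch, not Microsoft's specific device construction):

```latex
% A Majorana mode is its own antiparticle: its operator is self-adjoint.
\gamma = \gamma^{\dagger}
% Any ordinary fermion mode c can be split into two Majorana modes:
\gamma_1 = c + c^{\dagger}, \qquad \gamma_2 = i\,(c^{\dagger} - c),
\qquad \gamma_1^2 = \gamma_2^2 = 1
% The occupation of the original mode, read out as the qubit's 0 or 1, is then
n = c^{\dagger} c = \tfrac{1}{2}\left(1 + i\,\gamma_1 \gamma_2\right)
```

Because the two Majorana modes can sit at opposite ends of a nanowire, the qubit's value is stored non-locally, which is why local noise struggles to corrupt it, and also why the state is hard to read out, consistent with the measurement challenge described above.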
Watchdog approves Sellafield physical security, but warns about cyber
www.computerweekly.com
The Office for Nuclear Regulation has taken Sellafield out of special measures for physical security, but harbours cyber security concerns.
By Brian McKenna, Enterprise Applications Editor. Published: 20 Feb 2025

Cumbrian nuclear facility Sellafield is still under scrutiny for cyber security problems, despite the regulator's clean bill of health for its physical security.

The Office for Nuclear Regulation (ONR) has returned the Sellafield site to a routine regulatory regime for physical security after a period of enhanced oversight, according to a government statement.

The ONR's statement said: "Over the last two years, ONR has carried out a regular programme of inspections and interventions at Sellafield, assessing evidence provided by the licensee's security and resilience team. This identified a period of sustained improved performance in the area of physical security, and ONR is satisfied that the required security outcomes are now being achieved."

However, the ONR added: "Sellafield Ltd currently remains in significantly enhanced attention for cyber security, and collaborative work is ongoing to achieve the required improvements in this area."

In December 2023, The Guardian reported that groups linked to China and Russia had hacked into Sellafield's IT systems, embedding sleeper malware that could lurk and be used to spy on or attack systems.

And in October 2024, the nuclear waste facility was ordered to pay £400,000 by Westminster Magistrates' Court, after it pleaded guilty to criminal charges over years of cyber security failings, and apologised to the court.

The ONR brought the charges against Sellafield Ltd, accusing it of leaving exposed information that could threaten national security over a four-year period, from 2019 to 2023. Three-quarters of its servers were also said to be vulnerable to cyber attack.

Read more about Sellafield:
- Sellafield operator opens dedicated cyber centre.
- Sellafield pleads guilty to criminal charges over cyber security.
- The local authority for Sellafield, Europe's biggest nuclear site, has been slammed by auditors for its response to a North Korea-linked cyber attack that temporarily crippled its operations.
- Sellafield whistleblower ordered to pay costs after email tampering claims.

One of the three criminal charges brought related to Sellafield's failure to ensure adequate protection of sensitive nuclear information on its information technology network, while the other two related to failures to conduct annual health checks of its IT systems.

Sellafield's lawyers said at the time that "it is important to emphasise there was not and has never been a successful cyber attack on [the facility]", before noting that the offences are "historical [and] do not reflect the current position".

Paul Dicks, the ONR's director of regulation for Sellafield, decommissioning, fuel and waste, said of the new bill of health: "We have worked closely with Sellafield Ltd through our enabling approach to ensure that the required improvements are delivered. I'm satisfied that Sellafield Ltd has demonstrated significant and sustained security improvements, which has allowed us to return them to routine regulatory attention."

Sellafield operates under the governance of the Nuclear Decommissioning Authority (NDA), a quasi-governmental body that serves to wind up and render safe the UK's oldest nuclear industry sites.

In November 2024, the NDA opened a cyber security centre to safeguard against cyber attacks on the civil nuclear sector.

Its Group Cyberspace Collaboration Centre in Cumbria is said to gather security, digital and engineering experts to work on how best to adopt new technologies and defend against evolving threats.

Warren Cain, superintending inspector at the Office for Nuclear Regulation, said: "All nuclear sites must have strong cyber security systems in place to protect important information and assets from cyber threats.

"Cyber security is a key regulatory priority for the Office for Nuclear Regulation, and we welcome the NDA's commitment to strengthen their cyber defences with this new specialist facility."

Besides Sellafield, the UK's nuclear sites include Hinkley Point, Harwell, Dungeness, Bradwell, Sizewell, Trawsfynydd, Wylfa and Dounreay.
Volvo to roll out second software-defined electric carwww.computerweekly.comzapp2photo - stock.adobe.comNewsVolvo to roll out second software-defined electric carNvidia hardware accelerates AI-powered safety features, built using its Superset tech stackByCliff Saran,Managing EditorPublished: 20 Feb 2025 16:00 The so-called Superset tech stack, on which carmaker Volvo is building its software-defined cars, is behind the companys next launch.The ES90 electric vehicle, which is being unveiled on 5 March, will be the first Volvo car equipped with dual Nvidia Drive AGX Orin hardware, which the company said will raise the bar on safety and overall performance through data, software and artificial intelligence (AI).As Computer Weekly has previously reported, the Superset tech stack consists of one single set of hardware and software modules, and systems that underpin all upcoming electric cars from Volvo.It represents what Volvo describes as a radical transformation in how it can develop and use software to improve levels of safety, technology and overall performance throughout the cars lifecycle. With the Superset tech stack, we can make such improvements more efficiently and roll them out even faster via over-the-air updates and across all models based on the Superset, Volvo said.The Nvidia Drive AGX Orin hardware provides 508 trillion operations per second for AI-based active safety features, car sensors and efficient battery management.Volvo said the hardware will enable its engineers to increase the size of the deep learning model and neural network it uses from 40 million to 200 million parameters. This will happen over time as we collect more data and continue to develop the model, with the overall goal of improving customer experience and most importantly safety levels, Volvo said.The Nvidia hardware helps the ES90 to understand its surroundings through an advanced array of sensors, which includes one lidar, five radars, eight cameras and 12 ultrasonic sensors, as well as an advanced driver understanding system inside the car. According to Volvo, these safety systems are designed to help keep you safe by detecting obstacles, even in darkness, and activating proactive safety measures such as collision avoidance.Read more about software-defined carsVolvos engineering lead discusses tech stacks: Volvo Cars approach to manufacturing is becoming more software-defined, built on top of what it calls a superset tech stack.Volkswagen supports Arm-based software-defined car standard: Car manufacturer's software business supports industry initiative to develop a standard platform and open-source reference implementation.Commenting on the hardware and software innovations inside the ES90, Volvo chief engineering and technology officer Anders Bell said: We innovate in all areas of technology to become a leader in software-defined cars, and were channelling all our engineering efforts into one direction: making great cars that get even better over time.By combining the power of core computing and our Superset tech stack, we can now make safer cars more efficiently than ever before.The ES90 will be the second Volvo built based on the Superset tech stack, and follows on from the EX90, where the stack was first introduced.The Superset tech stack will underpin all upcoming Volvo electric cars, which, according to Volvo, means it will be able to boost the performance of each car in its lineup simultaneously. 
For instance, ES90 customers can benefit from EX90 software upgrades, and vice versa. Volvo positions the Superset stack as an enabler to replace value creation through hardware with a software approach to building value into its customers' cars.

The Nvidia Drive AGX Orin configuration will also be installed in new EX90 cars, replacing the existing Drive AGX Orin and Drive AGX Xavier hardware. Volvo said existing EX90 customers will get the upgrade free of charge.
-
Privacy at a crossroads in the age of AI and quantum
www.computerweekly.com

The digital landscape is approaching a critical turning point, shaped by two game-changing technologies: generative AI (GenAI) and the imminent arrival of quantum computing. These technologies hold vast promise for innovation, but they also magnify the risks to privacy, data security and trust. Organisations that want to thrive sustainably in this new era must adapt quickly, recognising that the traditional methods used to protect personal data will no longer suffice.

Privacy has long been a legal obligation for organisations. Today, it's much more than that: privacy has become a competitive differentiator, and organisations that handle customer data with integrity can build stronger relationships and earn more loyalty.

Currently, around 75% of the global population is covered by modern privacy laws, which signals that privacy is increasingly seen as a universal right. However, despite these widespread legal frameworks, there are still significant gaps in how laws are executed across different regions and industries. Data breaches continue to escalate, misinformation is increasingly rampant, and consumers are becoming more sceptical about how their personal data is handled. The rise of GenAI has only intensified these challenges, as machine-generated content blurs the lines between fact and fiction.

Meanwhile, quantum computing looms on the horizon, introducing an entirely new set of challenges. By 2029, the computational power and availability of quantum systems is expected to make current encryption methods obsolete, putting sensitive data at unprecedented risk. For many organisations, the sheer cost of ensuring that this data remains secure could become unmanageable, potentially forcing them to purge vast quantities of personal data to prevent breaches.

As the use of AI accelerates across industries, the quality of the data feeding these systems becomes even more crucial. However, too many organisations continue to focus primarily on protecting the confidentiality of data while overlooking its integrity. This imbalance has led to a slew of problems, from poor decision-making to AI initiatives that fail to deliver meaningful outcomes.

Gartner predicts that by 2028, organisations will invest as much in ensuring data integrity as they do in confidentiality. This is a major shift, and rightly so. For AI models to be effective, they need high-quality, trustworthy data to train on. If this data is flawed or unreliable, the resulting AI systems will be just as flawed and unreliable. Beyond AI, maintaining data integrity is critical for everything from regulatory compliance to safeguarding consumer trust in the organisation's practices.

In addition, data integrity plays a critical role in mitigating the risks posed by misinformation and AI-generated content. As GenAI continues to evolve, ensuring that data is accurate, traceable and verifiable will become more important than ever. Without these measures, AI models risk becoming susceptible to manipulation, making them less effective and ultimately less trustworthy across industries.

Read more on the intersection of AI and quantum
Quantum computing development can benefit datacentres. Potential quantum computing uses include improving supply chains, financial modelling, and AI and machine learning optimisation.
Microsoft unveiled Majorana 1, a quantum chip with eight qubits, aiming for a million. It focuses on scalability for breakthroughs in various fields despite current challenges.
Middle East financial firms are investing heavily in quantum computing, with one of the world's top quantum research centres in Abu Dhabi.

The rise of quantum computing is not just a future concern; it's a present reality that organisations must begin preparing for today. The concept of "harvest now, decrypt later" is already a reality, with malicious actors stockpiling encrypted data in anticipation of quantum breakthroughs that would render traditional encryption methods obsolete. This poses a grave risk to organisations, as sensitive information that is currently safe from hackers could one day be compromised by quantum systems.

Governments around the world are already pushing for the development and adoption of post-quantum cryptography (PQC): encryption methods that are resistant to the computational power of quantum machines. But making the shift to PQC is no small feat. It requires a fundamental overhaul of existing cryptographic systems and infrastructure, a process that will take years to complete. For many organisations, the pressure is mounting to begin this transition as soon as possible to protect their sensitive data and stay ahead of the quantum curve.

To navigate these challenges, organisations need to act decisively:

Reassess Data Strategies: Move away from storing huge amounts of data and adopt data minimisation practices. Retaining only necessary information reduces risk and aligns with modern privacy regulations.
Invest in Data Integrity: Apply robust measures to ensure data accuracy, provenance and lineage. This is critical for AI applications and for maintaining consumer trust.
Adopt Post-Quantum Cryptography: Begin developing crypto-agility and a migration to quantum-resistant encryption methods now, typically starting with an inventory of cryptographic assets (see the sketch after this list), to safeguard sensitive data before quantum computing becomes mainstream.
Enhance Privacy Practices: Integrate privacy-by-design principles into every product and service, offering consumers granular control over their data.
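A migration of this kind usually starts with knowing where vulnerable cryptography lives. The following is a minimal sketch of such an inventory pass, assuming Python with the widely used cryptography package and a local directory of PEM-encoded certificates; the directory layout and messages are illustrative, not a prescribed tool.

```python
# Minimal crypto-inventory sketch: flag certificates whose public keys rely
# on RSA or elliptic curves, the algorithms a quantum computer running
# Shor's algorithm would break. Assumes the third-party "cryptography"
# package and a ./certs directory of PEM files; both are illustrative.
from pathlib import Path

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import ec, rsa


def audit_certificates(cert_dir: str = "certs") -> None:
    for pem_file in sorted(Path(cert_dir).glob("*.pem")):
        cert = x509.load_pem_x509_certificate(pem_file.read_bytes())
        key = cert.public_key()
        if isinstance(key, rsa.RSAPublicKey):
            verdict = f"RSA-{key.key_size}: quantum-vulnerable, plan PQC migration"
        elif isinstance(key, ec.EllipticCurvePublicKey):
            verdict = f"EC/{key.curve.name}: quantum-vulnerable, plan PQC migration"
        else:
            verdict = "unrecognised key type: review manually"
        print(f"{pem_file.name}: {verdict}")


if __name__ == "__main__":
    audit_certificates()
```

An inventory like this is only a first step, but it turns an abstract quantum risk into a concrete migration backlog.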
The intersection of GenAI and quantum computing represents a critical turning point for organisations. Failing to adapt to the evolving privacy and security landscape could lead to lost consumer trust, regulatory penalties and competitive disadvantage. On the other hand, those who take proactive steps to protect data and embrace emerging technologies will not only minimise risks, but also position themselves as leaders in the digital economy.

Bart Willemsen is a VP analyst at Gartner, with a focus on privacy, ethics and digital society.
-
European and African tech skills programme could increase economic ties
www.computerweekly.com

Emerging economies in Africa often have relationships with developed nations rooted in dark colonial pasts, but today, digital tech is connecting previously unexpected partners. Developed nations looking for growth are targeting Africa as an opportunity, but must offer the countries of the continent something in return, and one programme transferring IT professionals and knowledge between Africa and the Baltic region is an example that goes beyond filling a skills gap.

As Computer Weekly reported recently, IT professionals in Africa are being connected to tech businesses in the Baltic region as part of a European Commission-funded project known as the Digital Explorers programme. Fronted by Lithuania-based think tank Osmos, it aims to address skills shortages in the Baltic tech sector and increase business and government engagement between the Baltic nations and African countries.

While the Baltic countries of Lithuania, Estonia and Latvia lead the world in digital business, they lack people. Estonia, for example, while a leading digital nation, has a population of about 1.3 million. In contrast, countries like Nigeria lag in terms of the digital economy, but have large and growing IT talent pools; Nigeria has a population of about 240 million and growing.

But African countries offer more than a skills pool for Europe to tap: they represent a huge potential market for its goods and services. It's hoped that connecting people through digital technology initiatives like Digital Explorers will initiate cooperation between the two regions. The programme also sees African IT professionals learn new skills that can be used to help economic development in their home countries.

At the Turing College data science school in Lithuania's capital, Vilnius, the Digital Explorers programme has already remotely trained 90 junior to mid-level data analysts from Africa. These trainees then travel to and work in the Baltic region, particularly in its rich tech startup sector. It's hoped the project will create a model for the wider European Union (EU) to follow.

Cindy Waweru, aged 24, a policy analyst from Kenya's capital, Nairobi, was invited by the Kenya Private Sector Business Alliance (Kepsa) to take up a role that blended economics with statistical analysis. She had the option of taking up the role in Kenya or Lithuania, and opted for the latter. "Once I saw the Lithuania option, I was pretty intrigued," she said.

With a degree in economics and statistics from the University of Nairobi, and experience as a policy analyst, Waweru took up a role at research institute Visionary Analytics in Vilnius. "Originally I wanted to become a policy analyst, and this could give me the opportunity to be a global one," Waweru told Computer Weekly. "I have an IT background and worked initially as a data specialist in the Kenyan government. This was pretty important for the programme."

She is currently on a six-month placement at Visionary Analytics. After that, she will either be offered a role in Lithuania or take her learnings back to Kenya, where there will be opportunities to work either in the tech sector or with tech-enabled organisations. She said her international experience could open up more opportunities in the East African country's growing tech scene.
"They call Kenya the Silicon Savannah," said Waweru.

Read more about African IT skills
Can Africa deliver on its ambitious digital transformation goal?
Kenyan AI workers form Data Labelers Association.

Waweru believes Kenya needs to emulate some of the strategies adopted in Europe, and said one of the main differences she has noticed is the cooperation between nations. "I have noticed, in Europe generally, the framework and the policies that they operate within all EU member states," she said. "We have something like that with the African Union, but a lot of the policies are left to the national governments. Something like intergovernmental working would help a lot in Africa."

Waweru hopes the programme will build a good reputation for African talent and lead to more European countries taking advantage of these skills to fill gaps in their workforces. But the programme is about much more than tech skills, with future business ties a major goal for both sets of economies.

Ashley Immanuel, co-founder and chief operating officer at Nigeria-based Semicolon, which trains software engineers and other technology skills, is an ambassador of the Digital Explorers programme. Immanuel said she is increasingly engaging with Baltic tech firms and tech ecosystems, as well as others across Europe.

She said the Nigerian digital tech market has evolved quickly over the past 10 to 15 years. "There is activity in terms of technology startups, and then of course the digital transformation of established companies," said Immanuel. "Historically in Nigeria, obviously oil and gas has been present, but also some of the larger corporates like banks and finance firms."

She said there is a huge population in Nigeria and that people are anxious to find good jobs, but added: "There has historically been a gap, because the human capital that's available here hasn't been aligned to employer needs, especially for leading technology companies."

In contrast, the Baltic nations have small populations and a large tech sector. Immanuel said both regions have challenges, and that the Baltic employers and tech companies she has met list access to talent among theirs. She said there is a mutual desire to learn from each other, as well as potential for business partnerships and relationships. On her travels in Europe, "there is a lot of interest in working with African companies", she told Computer Weekly.

Immanuel agreed that diversity of the IT workforce is also important, given the rapid development of technologies such as AI, and that the relationship between Africa and the Baltics can contribute to increased diversity.

Žilvinas Švedkauskas, managing director at Osmos, said the programme creates unexpected country partnerships. "We built the project around people, digital explorers and their digital journeys," he told Computer Weekly. "We create connections that set the path for more business-to-business and government-to-government type of engagement between countries."
-
ARM and Meta: Plotting a path to dilute GPU capacity
www.computerweekly.com

News that ARM is embarking on developing its own datacentre processors for Meta, as reported in the Financial Times, is indicative of the chip designer's move to capitalise on the tech industry's appetite for affordable, energy-efficient artificial intelligence (AI).

Hyperscalers and social media giants such as Meta use vast arrays of expensive graphics processing units (GPUs) to run workloads that require AI acceleration. But along with the cost, GPUs tend to use a lot of energy and require investment in liquid cooling infrastructure.

Meta sees AI as a strategic technology initiative that spans its platforms, including Facebook, Instagram and WhatsApp. CEO Mark Zuckerberg is positioning Meta AI as the artificial intelligence everyone will use. In the company's latest earnings call, he said: "In AI, I expect this is going to be the year when a highly intelligent and personalised AI assistant reaches more than one billion people, and I expect Meta AI to be that leading AI assistant."

To reach this volume of people, the company has been working to scale its AI infrastructure and plans to migrate from GPU-based AI acceleration to custom silicon chips optimised for its workloads and datacentres. During the earnings call, Meta chief financial officer Susan Li said the company was "very invested in developing our own custom silicon for unique workloads, where off-the-shelf silicon isn't necessarily optimal".

In 2023, the company began a long-term venture called Meta Training and Inference Accelerator (MTIA) to provide the most efficient architecture for its unique workloads. Li said Meta began adopting MTIA in the first half of 2024 for core ranking and recommendations inference. "We'll continue ramping adoption for those workloads over the course of 2025, as we use it for both incremental capacity and to replace some GPU-based servers when they reach the end of their useful lives," she added. "Next year, we're hoping to expand MTIA to support some of our core AI training workloads, and over time some of our GenAI [generative AI] use cases."

Meta has previously said efficiency is one of the most important factors for deploying MTIA in its datacentres. This is measured as a performance-per-watt metric (TFLOPS/W), which it said is a key component of total cost of ownership. The MTIA chip is fitted to an Open Compute Platform (OCP) plug-in module, which consumes about 35W. But the MTIA architecture requires a central processing unit (CPU), together with memory and chips for connectivity.
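Performance-per-watt is simple to compute, which is part of its appeal as a total-cost-of-ownership yardstick. A toy comparison follows; only the roughly 35W module power comes from the article, while the throughput figures are hypothetical placeholders rather than published benchmarks.

```python
# Back-of-the-envelope TFLOPS-per-watt comparison. Only the ~35W module
# power is taken from the article; the throughput numbers are hypothetical
# placeholders, since like-for-like public benchmarks are not available.
def tflops_per_watt(tflops: float, watts: float) -> float:
    return tflops / watts


accelerators = {
    "Custom accelerator module (assume 50 TFLOPS at 35W)": (50.0, 35.0),
    "High-end GPU (assume 400 TFLOPS at 700W)": (400.0, 700.0),
}

for name, (tflops, watts) in accelerators.items():
    print(f"{name}: {tflops_per_watt(tflops, watts):.2f} TFLOPS/W")
```

On these made-up numbers, the low-power module wins on efficiency even though the GPU wins on raw throughput, which is exactly the trade-off the TFLOPS/W metric is designed to expose.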
The reported work it is doing with ARM could help the company move from the highly customised application-specific integrated circuits (ASICs) it developed for its first-generation chip, MTIA 1, to a next-generation architecture based on general-purpose ARM processor cores.

Looking at ARM's latest earnings, the company is positioning itself to offer AI that can scale power-efficiently. ARM has previously partnered with Nvidia to deliver power-efficient AI in the Nvidia Blackwell Grace architecture. At the Consumer Electronics Show in January, Nvidia unveiled the ARM-based GB10 Grace Blackwell Superchip, which it claimed offers a petaflop of AI computing performance for prototyping, fine-tuning and running large AI models. The chip uses an ARM processor with Nvidia's Blackwell accelerator to improve the performance of AI workloads.

The semiconductor industry offers system-on-a-chip (SoC) devices, where various computer building blocks are integrated into a single chip. Grace Blackwell is an example of an SoC. Given the work Meta has been doing to develop its MTIA chip, the company may well be exploring how it can work with ARM to integrate its own technology with the ARM CPU on a single device.

Read more about GPUs and AI acceleration
DeepSeek-R1 budgeting challenges for on-premise deployments: The availability of the DeepSeek-R1 large language model shows it's possible to deploy AI on modest hardware. But that's only half the story.
GenAI demand fuels record sales of datacentre hardware and software in 2024: Figures from Synergy Research Group highlight how demand for generative AI and GPU technology generated record spending in the datacentre hardware and software space during 2024.

Although an SoC is more complex from a chip fabrication perspective, the economies of scale when production is ramped up, and the fact that the device can integrate several external components into one package, make it considerably more cost-effective for system builders. Li's remarks on replacing GPU servers, and the goal of MTIA to reduce Meta's total cost of ownership for AI, correlate with the reported deal with ARM, which would potentially enable Meta to scale up AI cost-effectively and reduce its reliance on GPU-based AI acceleration.

ARM, which is a SoftBank company, recently found itself at the core of the Trump administration's Stargate Project, a SoftBank-backed initiative to deploy sovereign AI capabilities in the US. During the earnings call for ARM's latest quarterly results, CEO Rene Haas described Stargate as "an extremely significant infrastructure project", adding: "We are extremely excited to be the CPU of choice for such a platform combined with the Blackwell CPU with [ARM-based] Grace. Going forward, there'll be huge potential for technology innovation around that space."

Haas also spoke about the Cristal intelligence collaboration with OpenAI, which he said enables AI agents to move across every node of the hardware ecosystem. "If you think about the smallest devices, such as earbuds, all the way to the datacentre, this is really about agents increasingly being the interface and/or the driver of everything that drives AI inside the device," he added.
-
Warning over privacy of encrypted messages as Russia targets Signal Messenger
www.computerweekly.com

Russia-backed hacking groups have developed techniques to compromise encrypted messaging services, including Signal, WhatsApp and Telegram, placing journalists, politicians and activists of interest to the Russian intelligence services at potential risk.

Google Threat Intelligence Group disclosed today that Russia-backed hackers had stepped up attacks on Signal Messenger accounts to access sensitive government and military communications relating to the war in Ukraine. Analysts predict it is only a matter of time before Russia starts deploying its hacking techniques against non-military Signal users, and against users of other encrypted messaging services, including WhatsApp and Telegram.

Dan Black, manager of cyber espionage analysis at Google Cloud's Mandiant division, said he would be "absolutely shocked" if he did not see attacks against Signal expand beyond the war in Ukraine and to other encrypted messaging platforms. He said Russia was frequently a first mover in cyber attacks, and that it would only be a matter of time before other countries, such as Iran, China and North Korea, were using exploits to attack the encrypted messages of subjects of intelligence interest.

The warning follows disclosures that Russian intelligence created a spoof website for the Davos World Economic Forum in January 2025 to surreptitiously attempt to gain access to WhatsApp accounts used by Ukrainian government officials, diplomats and a former investigative journalist at Bellingcat.

Russia-backed hackers are attempting to compromise Signal's "linked devices" capability, which allows Signal users to link their messaging account to multiple devices, including phones and laptops, using a quick response (QR) code. Google threat analysts report that Russia-linked threat actors have developed malicious QR codes that, when scanned, give the threat actor real-time access to the victim's messages without having to compromise the victim's phone or computer. In one case, according to Black, a compromised Signal account led Russia to launch an artillery strike against a Ukrainian army brigade, resulting in a number of casualties.

Russia-backed groups have been observed disguising malicious QR codes as invites for Signal group discussions or as legitimate device-pairing instructions from the Signal website. In some targeted spear-phishing attacks, Russia-linked hackers have also embedded malicious QR codes in phishing websites designed to mimic specialist applications used by the victims of the attack.

The Russia-linked Sandworm group, also known as APT44, which is linked to the General Staff of the Armed Forces of the Russian Federation, has worked with Russian military forces in Ukraine to compromise Signal accounts on phones and computers captured on the battlefield. Google's Mandiant researchers identified a Russian-language website giving instructions on how to pair Signal or Telegram accounts with infrastructure controlled by APT44. "The extrapolation is that this is being provisioned to Russian forces to be able to deploy captured devices on the battlefield and send back the communications to the GRU to be exploited," Black told Computer Weekly. Russia is believed to have fed the intercepted Signal communications into a data lake to analyse the content of large numbers of Signal communications for battlefield intelligence.

The attacks, which are based on exploiting Signal's device-linking capability, are difficult to detect, and
when successful there is a high risk that compromised Signal accounts will go unnoticed for a long time.

Google has identified another cluster of Russia-backed attackers, known as UNC5792, that has used modified versions of legitimate Signal group-invite pages to link the victim's Signal account to a device controlled by the hacking group, enabling the group to read and access the target's Signal messages.

Other Russia-linked threat actors have developed a Signal phishing kit designed to mimic components of the Kropyva artillery guidance software used by the Ukrainian military. The hacking group, known as UNC4221, previously used malicious web pages designed to mimic legitimate security alerts from Signal. The group has also used a lightweight JavaScript payload, known as Pinpoint, to collect basic user information and geolocation data from web browsers. Google has warned that the combination of access to secure messages and victims' location data is likely to be used to underpin targeted surveillance operations or to support conventional military operations in Ukraine.

Google also warned that multiple threat actors have been observed using exploits to steal Signal database files from compromised Android and Windows devices. In 2023, the UK's National Cyber Security Centre and the Security Service of Ukraine warned that the Sandworm hacking group had deployed Android malware, known as Infamous Chisel, to search for messaging applications, including Signal, on Android devices. The malware is able to scan infected devices for WhatsApp messages, Discord messages, geolocation information and other data of interest to Russian intelligence. It is able to identify Signal and other messages and package them in unencrypted form for exfiltration.

APT44 operates a lightweight Windows batch script, known as WaveSign, to periodically query messages from a victim's Signal database and exfiltrate the most recent ones. Russian threat actor Turla, which has been attributed by the US and the UK to the Russian Federal Security Service, has used a lightweight PowerShell script to exfiltrate Signal desktop messages. And in Belarus, an ally of Russia, a hacking group designated as UNC1151 has used the command-line utility Robocopy to copy the contents of file directories used by Signal Desktop to store messages and attachments, staging them for later exfiltration.

Google has warned that attempts by multiple threat actors to target Signal serve as a warning of the growing threat to secure messaging services, and that attacks are certain to intensify in the near term. "There appears to be a clear and growing demand for offensive cyber capabilities that can be used to monitor the sensitive communications of individuals who rely on secure messaging applications to safeguard their online activity," it said.

Users of encrypted communications are not just at risk from phishing and malware attacks, but also from threat actors securing access to a target's device, for example by breaking the password. Black said it was "insidious" that Russian attackers were using a legitimate function in Signal to gain access to confidential communications, rather than compromising victims' phones or breaking the encryption of the app. "A lot of audiences who are using Signal to have sensitive communications need to think about the risk of pairing their device to a second device," he said.

Russia-aligned groups have also targeted other widely used messaging platforms, including WhatsApp and Telegram. A Russian hacking group
linked to Russia's FSB intelligence service, known variously as Coldriver, Seaborgium, Callisto and Star Blizzard, shifted its tactics in late 2024 to launch social engineering attacks on people using WhatsApp encrypted messaging. The group targets MPs, people involved in government or diplomacy, research and defence policy, and organisations or individuals supporting Ukraine.

As exposed by Computer Weekly in 2022, Star Blizzard previously hacked, compromised and leaked emails and documents belonging to a former head of MI6, alongside other members of a secretive right-wing network devoted to campaigning for an extreme hard Brexit. Scottish National Party MP Stewart McDonald was another victim of the group. Left-wing freelance journalist Paul Mason, who has frequently criticised Putin's war against Ukraine, was also targeted by the group, and his emails were leaked to The Grayzone, a pro-Russian publication in the US.

Academics from the universities of Bristol, Cambridge and Edinburgh, including the late Ross Anderson, professor of security engineering, first published research in 2023 warning that the desktop versions of Signal and WhatsApp could be compromised if accessed by a border guard or an intimate partner, enabling them to read all future messages.

Signal has taken steps to improve the security of its pairing function, alerting users to possible attempts to gain access to their accounts through social engineering tactics, following Google's findings. Josh Lund, senior technologist at Signal, said the organisation had introduced a number of updates to mitigate potential social engineering and phishing attacks before it was approached by Google. "Google Threat Intelligence Group provided us with additional information, and we introduced further improvements based on their feedback. We are grateful for their help and close collaboration," he told Computer Weekly.

Signal has since made further improvements, including overhauling the interface to provide additional alerts when someone links a new device. It has also introduced additional authentication steps to prevent anyone other than the owner of the primary device from adding a new linked device. When any new device is linked to a Signal account, the primary device will automatically receive a notification, allowing users to quickly review and remove any unknown or unwanted linked devices.

Dan Black advised people using the Signal app to think carefully before accepting links to group chats. "If it's a contact you know, just create the group yourself directly. Don't use external links to do things that you can do directly using the messaging application's features," he said.

Read more about Russian attacks on Signal in Dan Black's blog post.

Countermeasures to protect encrypted communications
Enable screen lock on all mobile devices, using a long, complex password with a mix of uppercase and lowercase letters, numbers and symbols.
Install operating system updates as soon as possible, and always use the latest version of Signal and other messaging apps.
Ensure Google Play Protect is enabled.
Google Play Protect checks apps and devices for harmful behaviour and can warn users or block known malicious apps.
Audit linked devices regularly for unauthorised devices by navigating to the "Linked devices" section in the application's settings.
Exercise caution when interacting with QR codes and web resources purporting to be software updates, group invites or other notifications that appear legitimate and urge immediate action.
If available, use two-factor authentication, such as fingerprint, facial recognition, a security key or a one-time code, to verify when your account is logged into or linked to a new device.
iPhone users concerned about targeted surveillance or espionage activity should consider enabling Lockdown Mode to reduce their attack surface.
Source: Google Threat Intelligence Group
-
Quantum computing in cyber security: A double-edged sword
www.computerweekly.com

Despite investor scepticism, prominent quantum computing stocks saw a notable rise at the beginning of 2025. Even tech leaders as prominent as Jensen Huang and Mark Zuckerberg stating that the field won't be profitable hasn't stopped investors and the wider public from being excited.

In cyber security, however, quantum computing offers both unprecedented capabilities and significant threats, making it a double-edged sword that demands careful navigation. Just as white-hat hackers can use it to bolster defences, their malicious counterparts may be able to supercharge their efforts, too. But how do we grapple with this quantum quandary? That's exactly what we'll tackle in this article, as organisations must collectively ensure they are not blindsided by the risks while leveraging the advantages.

Because qubits allow quantum systems to perform multiple calculations simultaneously, computational power for specific tasks increases exponentially. For cyber security, we already know this means quantum computers could break widely used encryption methods, particularly those that rely on the difficulty of factoring large numbers, such as RSA, or on elliptic-curve discrete logarithms, such as ECC. These encryption standards form the backbone of secure online communication, financial transactions and digital identity verification.

The versatility of quantum computing goes beyond cracking encryption. Its computational power could revolutionise cyber security applications by improving pattern recognition, anomaly detection and optimisation algorithms. Tasks that once took days or months to process could be executed within minutes, drastically reducing response times to potential threats.

Classical cryptography, based on mathematical problems too complex for current computers to solve within a practical timeframe, faces obsolescence in the quantum era. Shor's algorithm, a quantum computing method, can efficiently factorise large integers, undermining RSA encryption's security. Just for comparison, in the context of Shor's algorithm:

A traditional computer might need trillions of years to crack a 2,048-bit RSA key.
A quantum computer would need hours, if not days, to perform the same action.

Similarly, elliptic curve cryptography (ECC), celebrated for its efficiency, is vulnerable to the same algorithm. This vulnerability jeopardises everything from personal data protection to national security. Hence, experts fear that hackers equipped with quantum capabilities could decrypt intercepted communications, exposing sensitive corporate or governmental information. And we all know how hard it is for politicians to adapt to modern tech.

Even data encrypted today could be at risk due to the "harvest now, decrypt later" strategy, where adversaries collect encrypted data now, anticipating quantum decryption in the future. The implications extend to industries like banking, healthcare and energy, where secure communication is paramount.
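A widely cited way to reason about this exposure is Michele Mosca's inequality: if the number of years your data must stay secret, plus the years a migration to quantum-safe encryption will take, exceeds the years until a cryptographically relevant quantum computer exists, the data is already at risk. A toy sketch follows; every input is an assumption to be replaced with your own estimates.

```python
# Mosca's inequality: if x + y > z, data encrypted today is already at risk.
#   x = years the data must remain confidential
#   y = years needed to migrate systems to quantum-safe cryptography
#   z = years until a cryptographically relevant quantum computer (CRQC)
# All the example inputs below are illustrative assumptions, not forecasts.
def exposed_to_harvest_now_decrypt_later(secrecy_years: float,
                                         migration_years: float,
                                         years_to_crqc: float) -> bool:
    return secrecy_years + migration_years > years_to_crqc


# Example: records that must stay secret for 25 years, a 5-year migration,
# and a CRQC assumed to be 10 years away: already in the danger zone.
if exposed_to_harvest_now_decrypt_later(25, 5, 10):
    print("At risk: ciphertext harvested today could be read within its lifetime")
```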
It's not all doom and gloom, as quantum computing offers plenty of tools to counter these threats. Quantum key distribution (QKD), for instance, uses quantum mechanics to establish secure communication channels: any attempt to eavesdrop on quantum-transmitted keys alters their state, immediately alerting both parties to the intrusion.

In addition to QKD, quantum random number generation (QRNG) is another promising application. Unlike classical methods, which rely on algorithms that could be predicted or replicated, QRNG leverages the inherent unpredictability of quantum processes to create genuinely random sequences. This strengthens cryptographic protocols, making them more resistant to attacks.

Last, but most certainly not least, quantum-enhanced machine learning could also aid in identifying and mitigating cyber threats. If the current applications of ML seem daunting, think of what quantum ML could do by analysing vast datasets more efficiently than classical systems. Quantum algorithms could detect subtle patterns indicative of an attack, enabling earlier intervention.

The cyber security industry is not waiting passively for the quantum threat to materialise. Post-quantum cryptography (PQC) aims to develop encryption algorithms resistant to both classical and quantum attacks. Standards bodies like the US National Institute of Standards and Technology (NIST) are already advancing PQC algorithms, with several candidates released or in the final stages of evaluation.

Despite the apparent defensive potential, transitioning to PQC involves significant logistical challenges. Organisations must inventory their cryptographic assets, evaluate quantum risks and implement new algorithms across their systems. For industries like finance and healthcare, where data sensitivity is paramount, the transition timeline could stretch into years, requiring immediate action to stay ahead of quantum advancements. The degree of difficulty rises further where legacy systems are relied upon, as backwards compatibility in a quantum context isn't something developers of old thought about. Likewise, PQC adoption requires extensive testing to ensure compatibility with existing systems and resilience against emerging threats. This, unfortunately, means allocating additional resources to train personnel, upgrade infrastructure and maintain compliance with evolving regulatory requirements.

We've spent a lot of time discussing how quantum computing can aid in defending our data, but white-hat hackers and red teams aren't the only ones interested in these advancements. Nation states and cyber crime conglomerates with nine-figure sums to spend will certainly finance the R&D of offensive tools, which can pose problems for everyone from governments to small businesses. In particular, sophisticated attacks, such as quantum-enhanced phishing or the cracking of biometric data, could exploit quantum-powered pattern recognition to unprecedented degrees. These capabilities pose a direct threat to authentication mechanisms, access controls and user trust.

Overnight, staples like QR codes and various forms of multi-factor authentication could become easily corruptible due to the sheer computing power at criminals' disposal. Widely used for payments and authentication, they may require updates or complete overhauls to resist quantum-generated attacks. Even the seemingly simple act of scanning a QR code could become a security risk if quantum-powered adversaries exploit flaws in code generation or scanning software.

Despite claims that quantum computing will only become feasible or profitable in several decades, we must still prepare for that inevitable moment. Governments and regulatory bodies are beginning to address the quantum challenge. Investments in quantum research and the establishment of frameworks for quantum-safe technologies are gaining momentum. For businesses, aligning with these initiatives is critical to ensure compliance and leverage state-of-the-art defences.
Will cyber security become more expensive? Inevitably. But at the same time, there will be many more incidents than the 2,200 a day companies experienced in 2024. Moreover, collaboration between the public and private sectors will play a pivotal role in quantum readiness. Sharing threat intelligence, standardising best practices and incentivising quantum-safe transitions will strengthen collective security. Most importantly, governments must invest in building a robust quantum infrastructure to ensure that technological advantages are not monopolised by adversaries. But how will we balance protectionism against benefiting the human race as a whole? We'll find out sooner or later, that's for sure.

Quantum computing is no longer a distant possibility, but an imminent reality. Organisations of all sizes must adopt a proactive stance, integrating quantum risk assessments into their cyber security strategies. In particular, we must collectively focus on:

Education and awareness: IT and cyber security teams must receive the right education on quantum concepts and their implications. Building in-house expertise will be critical to navigating the complexities of quantum integration.
Cryptographic inventory: This means mapping current cryptographic use to identify vulnerable assets. It allows organisations to prioritise upgrades where they are most needed.
Adopting PQC: Currently, the best option is to transition to NIST-approved post-quantum algorithms. Early adoption minimises the risk of falling behind competitors or compliance requirements.
Testing quantum services: In addition, it's up to organisations to pilot technologies like QKD and QRNG to evaluate their practical benefits. Testing in real-world scenarios ensures smooth integration and operational efficiency.

Quantum computing's dual potential in cyber security, as a tool for both defence and attack, requires a balanced approach. While its threats to traditional encryption are undeniable, its innovations also promise stronger, more resilient defences. Organisations that act now to understand and prepare for the quantum era will not only safeguard their assets, but position themselves as leaders in a rapidly evolving technological landscape. Otherwise, no one's data will be safe, and we'll have no way of keeping up with the computing power at the hackers' disposal.

Read more about quantum security
One of the biggest fears about quantum computing is its ability to easily break current encryption algorithms. Learn why, and how to start making quantum security preparations.
An emerging approach to quantum security dubbed blind quantum computing may one day help spur mass adoption of quantum computing safely and securely, using technology that is already available today.
Experts at the Singapore FinTech Festival predict quantum computing will improve risk management, investment strategies and fraud detection in the financial sector, while also posing new challenges for data security.
-
Balancing act: Managing business needs alongside digital transformation and innovation
www.computerweekly.com

When building a startup, there is a real balancing act between managing expectations, educating on what's possible, and identifying the true cost of innovation. CTOs are challenged not only to build functional technology platforms quickly, but to do so as cost-effectively as possible. Startups are often not profitable and therefore don't have a lot of cash to burn, meaning the CTO has to deliver technology solutions that meet business goals on a limited budget.

Let's look at a legacy industry like commercial insurance - it has been undergoing a transformation in recent years. The industry is data- and human-heavy, and heavily regulated, which is why it's ripe for innovation. It is also playing catch-up to address the needs of consumers who want a seamless user experience, and of businesses that want a modern experience - faster, streamlined, digitised, and so on - when dealing with insurance providers. This is particularly true of the on-demand economy.

The on-demand economy is characterised by the likes of Taskrabbit, Doordash, Uber, Deliveroo and Amazon Flex. But it's hard-working on-demand taxi and delivery drivers who are calling for flexible insurance that caters to their very specific needs, enabling them to buy comprehensive coverage for when they're driving, and to switch it off when they're not.

However, many insurtechs have not adequately met these needs, despite their ability to leverage technology more nimbly and effectively than traditional players. The business of insurance is complicated, and innovation cannot be retrofitted onto existing tech, which is why it's vital to have a deep understanding of the requirements connecting the customer, the insurance partners and platforms like Uber and Amazon. Transforming the on-demand insurance industry is a symbiotic relationship between the customer, the insurance provider and the platform. Although it can deliver real results for all, it also comes with its share of unique challenges.

Loss ratio - how much an insurance company spends on claims compared with the premiums it receives - is a key indicator of profitability. When insurtech startups focus too much on showy AI-driven gimmicks, such as automatic claims payments within seconds, loss ratios suffer and crucial insurance industry partners back away quickly. In the world of insurance, innovation at all costs simply doesn't work.

But technology cannot simply operate as a cost centre. By working in partnership with the rest of the business, startup CTOs and their teams need to focus on building an ongoing technology foundation to drive innovation within legacy industry structures and processes, driving business growth as well as consistent results for customers and partners.

Many of the challenges CTOs face aren't necessarily about technology, but about the change of mindset required when implementing tech solutions. Until very recently, insurance was an industry dominated by traditional players, governed by outdated systems and processes.
While this is changing, there are still areas where bridges must be built between the promise of what technology can deliver and a certain "this is how it's always been done" mindset. For example, we know that insurance, like many industries, is ripe for reinvention through smart uses of AI, as long as it is implemented in the most appropriate areas of the business and used as an augmented assistant rather than a replacement for specialist expertise.

At Inshur, working in combination with a team from Google Cloud, we were able to build an AI assistant for our claims team and demonstrate to management its effectiveness in helping the team prioritise work, as well as speeding up administrative tasks while providing fast and effective customer service. We're continuing to roll out this technology internationally, as well as adding further features to augment the human adjusters, utilising their expertise while saving them time.

The assistant helps the team to quickly scan incoming documents, including email, physical letters, attachments or transcribed phone calls; infer the data, including who the sender is and the intention of the communication; identify important and useful information, such as vehicle registration and claimant name; identify the priority and urgency of the claim; assign it to the right team; and summarise the data into a standard format for ease of use. By automatically accepting feedback, retraining and learning from past actions, the assistant also helps guide handlers with proposed next steps, helping to train new claims handlers.
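To make the shape of such a pipeline concrete, here is a hypothetical sketch of the triage step: classify an incoming document, extract key fields and route the claim to a team. The function names, categories and fields are invented for illustration; this is not Inshur's actual code, and the entity extraction is delegated to whatever AI model sits behind extract_entities.

```python
# Hypothetical claims-triage sketch, illustrating the pattern described
# above. Nothing here is Inshur's implementation; names and categories
# are invented, and entity extraction is delegated to an injected AI model.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class TriagedClaim:
    sender: str
    intent: str                          # e.g. "new_claim", "query", "complaint"
    vehicle_registration: Optional[str]
    claimant_name: Optional[str]
    urgency: str                         # "high" or "medium"
    assigned_team: str


ROUTING = {"new_claim": "claims-intake", "complaint": "customer-care"}


def triage(document_text: str,
           extract_entities: Callable[[str], dict]) -> TriagedClaim:
    entities = extract_entities(document_text)  # the AI model does the heavy lifting
    intent = entities.get("intent", "query")
    return TriagedClaim(
        sender=entities.get("sender", "unknown"),
        intent=intent,
        vehicle_registration=entities.get("vehicle_registration"),
        claimant_name=entities.get("claimant_name"),
        urgency="high" if entities.get("injury_mentioned") else "medium",
        assigned_team=ROUTING.get(intent, "general-queries"),
    )
```

Structuring the step this way means the extraction model can be retrained or swapped without touching the routing rules, which mirrors the feedback-and-retraining loop described above.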
The AI-based tools we built to support our claims teams have enabled us to see patterns that are also a good fit for other departments within the business. So much so that we see potential for the commoditisation of these approaches into a wider set of solutions that serves not just insurance, but any business.

Another question a lot of startup CTOs are asked is whether to build or buy. Building tech solutions from scratch can carry significant risk, especially given the resource investment typically required. But when every business in a given market is using the same platforms - usually with significant tweaks and workarounds to fit their specific needs - then nobody can truly win the innovation race. First-movers must always be willing to build when necessary, and to buy when prudent.

For example, we decided that we needed to invest in developing our own solutions to problems that could not be adequately solved by off-the-shelf products. One such product is our Pay-as-you-flex wallet for Amazon Flex. While traditional insurance has historically covered drivers at all times, including when they're not driving, we knew that technology held the key to delivering a new insurance product that would enable delivery drivers to pay only for the cover they needed, when they needed it. As the first of its kind to enter the market, we knew that we'd need to build it from scratch.

It's only since we built our proprietary platform to manage business-critical processes, including policy administration, claims management and billing, that similar products have entered the market. By building a platform that's fully tailored to the specific needs of the market we serve, we've paved the way for other insurers to do the same for their customers and partners.

However, the startup CTO must also take the lead in conversations where buying makes most sense, securing buy-in from other senior stakeholders and identifying the most appropriate vendors to partner with. Often, particularly in a high-growth startup where cost and return on investment are key considerations, this will involve a detailed assessment of risk for all available scenarios.

In Inshur's case, we're working with Google Cloud to implement several of its AI products to drive efficiencies and ensure that customers are treated fairly, which is both a regulatory and a moral imperative in the insurance industry. We know that our customers drive for a living, which means they often need to call us via their hands-free mobile technology while driving in between journeys, rather than emailing or speaking to a text-based chatbot. When we identified that a significant proportion of the calls coming into our customer service team could be quickly and effectively answered by an AI-driven solution, we implemented a smart virtual agent to handle the more straightforward queries, enabling the team to focus on serving customers with specific or detailed questions.

Because of the crucial role technology such as AI will play in the coming years, CTOs will need to ensure they are consistently developing deep understanding and expertise, not just in the latest technology innovations, but also in how they can be implemented to drive business strategy and growth. Crucially, this will include taking a leadership role in helping to educate stakeholders across the business on the best use cases for AI tools and other solutions, building understanding at every level of what the technology can and can't help with, and putting clear structure and process around innovation. This ability to bridge the gap between the business and technology is already becoming a crucial indicator of future success.

Chris Gray is chief technology officer at vehicle insurance provider Inshur.

Read more about balancing business needs and innovation
Why AI will push enterprises to eliminate the silos that slow innovation - Generative AI offers a way to transform developer productivity, by expediting a cultural shift in the way enterprises organise themselves.
Nordic innovators drive the evolution of engagement - Nothing is off limits for digitisation in the Nordic region, with startups applying their skills to even the most unexpected lifestyle and business challenges.
AI innovation key to UK business, but obstacles threaten progress - AI is seen by IT leaders as essential, but it requires innovation, skills and infrastructure, all of which are under pressure from day-to-day firefighting, energy costs and cyber threats.
-
DeepSeek-R1: Budgeting challenges for on-premise deployments
www.computerweekly.com

News: The availability of the DeepSeek-R1 large language model shows it's possible to deploy AI on modest hardware. But that's only half the story. By Cliff Saran, Managing Editor. Published: 18 Feb 2025 17:00

Until now, IT leaders have needed to consider the cyber security risks posed by allowing users to access large language models (LLMs) like ChatGPT directly via the cloud. The alternative has been to use open source LLMs that can be hosted on-premise or accessed via a private cloud.

The artificial intelligence (AI) model needs to run in-memory, and when using graphics processing units (GPUs) for AI acceleration, this means IT leaders need to consider the costs associated with purchasing banks of GPUs to build up enough memory to hold the entire model. Nvidia's high-end AI acceleration GPU, the H100, is configured with 80Gbytes of random-access memory (RAM), and its specification shows it is rated at 350W in terms of energy use.

China's DeepSeek has been able to demonstrate that its R1 LLM can rival US artificial intelligence without the need to resort to the latest GPU hardware. It does, however, benefit from GPU-based AI acceleration. Nevertheless, deploying a private version of DeepSeek still requires significant hardware investment. Running the entire DeepSeek-R1 model, which has 671 billion parameters, in-memory requires 768Gbytes of memory. With Nvidia H100 GPUs, which are configured with 80Gbytes of video memory each, 10 would be required to ensure the entire DeepSeek-R1 model can run in-memory.

IT leaders may well be able to negotiate volume discounts, but the cost of just the AI acceleration hardware to run DeepSeek-R1 this way is around $250,000. Less powerful GPUs can be used, which may help to reduce this figure. But given current GPU prices, a server capable of running the complete 671 billion-parameter DeepSeek-R1 model in-memory is going to cost over $100,000.

Read more about DeepSeek
New Relic extends observability to DeepSeek: The observability tools supplier now offers enhanced monitoring for DeepSeek models to help businesses reduce the costs and risks of generative AI development.
Welcome to US artificial intelligence's Sputnik moment: In spite of the US's financial might, Russia's Sputnik was the first satellite. Is something similar about to happen thanks to a new Chinese LLM?

The server could instead run on public cloud infrastructure. Azure, for instance, offers access to the Nvidia H100 with 900Gbytes of memory for $27.167 per hour, which, on paper, should easily be able to run the 671 billion-parameter DeepSeek-R1 model entirely in-memory. If this model is used every working day, and assuming a 35-hour week and four weeks a year of holidays and downtime, the annual Azure bill would be almost $46,000. This figure could be reduced significantly to $16.63 per hour ($23,000 per year) with a three-year commitment.

Less powerful GPUs will clearly cost less, but it is the memory requirements that make these configurations expensive. For instance, looking at current Google Cloud pricing, the Nvidia T4 GPU is priced at $0.35 per GPU per hour and is available in configurations of up to four GPUs, giving a total of 64Gbytes of memory for $1.40 per hour; 12 such configurations would be needed to fit the 671 billion-parameter DeepSeek-R1 model entirely in-memory, which works out at $16.80 per hour. With a three-year commitment, this figure comes down to $7.68 per hour, which works out at just under $13,000 per year.
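The arithmetic behind these annual figures is straightforward to reproduce. A short sketch, using the prices quoted above and the article's usage assumption of a 35-hour week for 48 working weeks a year (small differences from the quoted totals are rounding):

```python
# Reproduce the article's back-of-the-envelope cloud GPU hosting costs.
# Usage assumption from the article: 35 hours/week, 48 working weeks/year.
HOURS_PER_YEAR = 35 * 48  # 1,680 hours


def annual_cost(rate_per_hour: float) -> float:
    return rate_per_hour * HOURS_PER_YEAR


print(f"Azure H100 instance, on demand:  ${annual_cost(27.167):,.0f}")  # ~$45,641
print(f"12x four-T4 configs, on demand:  ${annual_cost(16.80):,.0f}")   # ~$28,224
print(f"12x four-T4 configs, 3yr commit: ${annual_cost(7.68):,.0f}")    # ~$12,902
```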
A cheaper approach

IT leaders can reduce costs further by avoiding expensive GPUs altogether and relying entirely on general-purpose central processing units (CPUs). This setup is really only suitable when DeepSeek-R1 is used purely for AI inference.

A recent tweet from Matthew Carrigan, machine learning engineer at Hugging Face, suggests such a system could be built using two AMD Epyc server processors and 768Gbytes of fast memory, and could be put together for about $6,000. Responding to comments on the setup, Carrigan said he is able to achieve a processing rate of six to eight tokens per second, depending on the specific processor and memory speed installed, and on the length of the natural language query. His tweet includes a video showing near-real-time querying of DeepSeek-R1 on the hardware he built, based on the dual AMD Epyc setup with 768Gbytes of memory.

Carrigan acknowledges that GPUs will win on speed, but they are expensive. In his series of tweets, he points out that the amount of memory installed has a direct impact on performance, due to the way DeepSeek remembers previous queries to get to answers quicker, a technique called key-value (KV) caching. "In testing with longer contexts, the KV cache is actually bigger than I realised," he said, suggesting that the hardware configuration would require 1Tbytes of memory instead of 768Gbytes when huge volumes of text or context are pasted into the DeepSeek-R1 query prompt.
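A rough rule of thumb shows why long contexts eat memory: a conventional multi-head attention transformer caches one key vector and one value vector per token, per layer. The sketch below uses illustrative dimensions rather than DeepSeek-R1's real architecture (R1 uses multi-head latent attention, which compresses its cache, so treat this as an upper bound for a conventional model of similar depth):

```python
# Rough KV-cache sizing for a conventional multi-head attention transformer.
# The dimensions below are illustrative assumptions, not DeepSeek-R1's
# actual architecture (R1's multi-head latent attention compresses the cache).
def kv_cache_gbytes(context_tokens: int, n_layers: int, n_kv_heads: int,
                    head_dim: int, bytes_per_value: int = 2) -> float:
    # Two tensors (key + value) are cached per layer, per token.
    total_bytes = (2 * context_tokens * n_layers
                   * n_kv_heads * head_dim * bytes_per_value)
    return total_bytes / 1e9


for ctx in (8_000, 32_000, 128_000):
    gb = kv_cache_gbytes(ctx, n_layers=60, n_kv_heads=64, head_dim=128)
    print(f"{ctx:>7} tokens -> ~{gb:.0f} Gbytes of KV cache")
```

On these assumed dimensions, the cache grows from roughly 16Gbytes at 8,000 tokens to roughly 250Gbytes at 128,000 tokens, which helps explain why Carrigan's long-context tests pushed the memory requirement towards 1Tbytes.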
Buying a prebuilt Dell, HPE or Lenovo server to do something similar is likely to be considerably more expensive, depending on the processor and memory configurations specified.

A different way to address memory costs

Among the approaches that can be taken to reduce memory costs is using multiple tiers of memory controlled by a custom chip. This is what California startup SambaNova has done using its SN40L Reconfigurable Dataflow Unit (RDU) and a proprietary dataflow architecture for three-tier memory. "DeepSeek-R1 is one of the most advanced frontier AI models available, but its full potential has been limited by the inefficiency of GPUs," said Rodrigo Liang, CEO of SambaNova.

The company, which was founded in 2017 by a group of ex-Sun/Oracle engineers and has an ongoing collaboration with Stanford University's electrical engineering department, claims the RDU chip collapses the hardware requirements to run DeepSeek-R1 efficiently from 40 racks down to one rack configured with 16 RDUs.

Earlier this month, at the Leap 2025 conference in Riyadh, SambaNova signed a deal to introduce Saudi Arabia's first sovereign LLM-as-a-service cloud platform. Saud AlSheraihi, vice-president of digital solutions at Saudi Telecom Company, said: "This collaboration with SambaNova marks a significant milestone in our journey to empower Saudi enterprises with sovereign AI capabilities. By offering a secure and scalable inferencing-as-a-service platform, we are enabling organisations to unlock the full potential of their data while maintaining complete control."

This deal with the Saudi Arabian telco provider illustrates how governments need to consider all options when building out sovereign AI capacity. DeepSeek has demonstrated that there are alternative approaches that can be just as effective as the tried-and-tested method of deploying immense and costly arrays of GPUs. And while DeepSeek-R1 does indeed run better when GPU-accelerated AI hardware is present, what SambaNova is claiming is that there is also an alternative way to achieve the same performance for running models like DeepSeek-R1 on-premise, in-memory, without the cost of acquiring GPUs fitted with the memory the model needs.
-
EY: Industrial companies worldwide stunted in emerging technology use
www.computerweekly.com
Businesses globally are spending more on emerging technologies year-on-year, but struggle to expand experimental use cases, finds EY's sixth annual Reimagining Industry Futures study.
By Brian McKenna, Enterprise Applications Editor. Published: 18 Feb 2025 15:00

Many companies from a range of industries worldwide are stuck at the trial stage of emerging technology usage, according to the sixth annual EY Reimagining Industry Futures study.

The firm surveyed 1,635 enterprises in November 2024, including 9% in the UK, 6% in Germany and 20% in the US. Respondents were drawn from a range of industries, including financial services (13%), cars and transport (13%), energy, mining and utilities (13%), and manufacturing (12%).

The study has a strong 5G and internet of things (IoT) orientation, as it has had in previous years. The lead authors come from the firm's telecoms, media and technology (TMT) practices: Rob Atkinson, area managing partner for UK and Ireland TMT, and Adrian Baschnonga, global TMT lead analyst.

The press statement that accompanies the report says nearly half (47%) of the respondents are investing in generative artificial intelligence (GenAI), compared with 43% last year. Some 43% are investing in IoT, and 33% are investing in 5G technology, suggesting an upward trend from 39% and 27% respectively in 2024.

However, the report finds that businesses struggle to convert technology trials into live deployments. Only 1% of organisations have active deployments of GenAI. And while IoT investment seems to be rising year-on-year, the proportion of businesses with active IoT deployments is in decline, slipping to 16% this year compared with 19% in 2024. Active deployments of edge computing are also flat year-on-year, at 22%.

CEOs get more into technology selection
The report finds decision-making inside enterprises is spreading out across the C-suite, with 49% of CEOs now involved in emerging technology strategy, including in the choice of suppliers. Organisations where the CEO is a key decision-maker are further along, the report found. Over half (51%) of businesses with CEOs involved in new technology decisions are investing in GenAI, compared with 44% of organisations where the CEO is less involved.

"As well as posing a challenge to unlocking long-term value, a failure to progress beyond the trial phase means businesses risk missing out on the combined impact of different technologies deployed together, an area where four in five (79%) organisations are looking to achieve more," said Atkinson. "There could also be a danger that too many emerging technology initiatives will be conducted in isolation, limiting the resulting business benefits."

The report also found that respondents have limited awareness of what IT suppliers offer. Some 73% said they need a better understanding of the changing supplier landscape. EY comments that this reflects an environment where collaborative ecosystems, featuring alliances between different technology providers, are becoming the norm. More than half (56%) of the respondents believe they lack awareness of their technology suppliers' partners.
Less than a third of organisations have high awareness of new mobile technology capabilities such as network application programming interfaces (32%) and network slicing (26%).

"Organisations view ecosystem collaboration as a route to access new skills and capabilities, but lack understanding of changing supplier ecosystems," said Baschnonga. "With many companies under pressure to consolidate vendors, suppliers should prioritise their ecosystem and alliance strategies by concentrating on key partners and adapting their operating models and go-to-market approaches accordingly."

Read more about enterprise adoption of 5G, IoT and AI technologies in 2025
- Deloitte: GenAI paving the way for transformative future for comms.
- The seven top IoT trends to watch in 2025 and beyond.
- Podcast: More enterprises are using generative AI, according to Informa TechTarget's Enterprise Strategy Group. There is no consensus on the best use case or whether open or closed models work better. Organisations that used AI before ChatGPT are ahead.
- EY: UK enterprises prioritise real-world 5G benefits over sophisticated use cases.

The report found the ability to scale and integrate different technologies is important to one in four (25%) of those surveyed.

"The intention to focus spending on a smaller number of key suppliers makes it even more important that ICT providers present themselves as effective ecosystem orchestrators, able to provide end-to-end solutions with the assistance of partners and intermediaries," said Atkinson. "As part of this, suppliers should take care to underline capabilities that extend beyond their core products.

"While enterprises remain committed to embracing leading-edge technologies like GenAI, IoT and 5G, they are facing challenges in translating their investments into real business value," he said. "Now is the time for IoT suppliers to reposition themselves as holistic partners to their business customers and help them realise the full benefits of their spending on digital transformation."

In the report itself, the authors say: "This year's findings show that organisations across all sectors remain committed to investing in emerging technologies to transform their operations, but that issues around scalability and legacy integration are top of mind. Meanwhile, ICT vendors need to pay close attention to enterprises' increasing focus on security and growing demand for ecosystem orchestration."

Businesses dimly aware of datacentre environmental impact
They also pick out sustainability as an increasingly relevant theme for enterprise IT, especially with regard to datacentres. Sustainability factors increasingly weigh on decisions about emerging technology investments, with organisations more sensitive than before to the potentially ambivalent role of new technologies in the decarbonisation agenda.

Datacentres, the report's authors comment, are an area of low environmental, social and governance awareness for businesses. Half the organisations surveyed are unaware of their datacentres' emissions profiles.

Respondents are looking at a range of GenAI use cases, with no standout preferences, the report found.
Some 50% of businesses see cyber security and data protection as a leading GenAI impediment, while 46% said a need to improve data governance, to combat risks concerning data accuracy and ethics, would be critical to future implementations. Data governance scores highest among manufacturers (46%) as a GenAI concern, while capturing productivity gains ranks top among EY's respondents in the consumer (48%) and energy (47%) sectors.

Across all sectors, the most favoured GenAI use cases are software development, customer service, and employee training or collaboration. However, financial services, healthcare and manufacturing respondents rated predictive or real-time operations and supply chain management as top-five GenAI use cases.

Upskilling and more collaboration
The report says the two most important changes that organisations can make are employee upskilling and deeper collaboration across business functions. On a country level, education and employee upskilling is highly ranked by German respondents (36%), while deeper collaboration between business functions leads as an action among Chinese businesses (31%). Elsewhere, Indian (20%) and Japanese (18%) businesses are most likely to prioritise collaboration with suppliers.
-
Cyber Monitoring Centre develops hurricane scale to count cost of cyber attacks
www.computerweekly.com

The CrowdStrike incident in 2024 hit the UK like a hurricane. As it swept across the country, it brought flights to a standstill, forced hospitals to cancel operations, and brought down the computer systems and websites of hundreds of businesses.

Since the early 1970s, it has been possible to predict the damage likely to be caused by hurricanes using a five-point wind scale. Category one hurricanes may damage roofs or break branches on trees; at the other end of the scale, a category five hurricane could leave areas uninhabitable for months.

There's no such way to categorise the destructive impact of cyber events like the CrowdStrike update, which brought down Windows computers worldwide in July 2024. But that is set to change, as an initiative gets underway this year to assess the damage caused by major cyber attacks on a hurricane-inspired five-point scale.

The Cyber Monitoring Centre (CMC), the first organisation of its type, has been set up by the insurance industry as an arm's-length organisation to assess the impact of serious cyber attacks that have systemic implications for the UK's infrastructure and services. It aims to make it easier for businesses to buy cyber insurance cover, and to know exactly what will be covered and what won't.

There are many ways to assess the impact of a cyber event. It could be measured in loss of life through cancelled hospital operations, the disruption caused by leaks of people's personally identifiable information on the internet, or the strategic implications of the loss of classified government information to a hostile nation state. The CMC will focus on just one: the economic impact.

The centre has appointed a technical committee of eminent experts to assign cyber events to a five-point scale, ranging from small-scale disruptions impacting hundreds of people to catastrophic attacks affecting hundreds of thousands. Damage impacts range from less than £100m for category one events to more than £5bn for category five.

The technical committee
- Ciaran Martin: teaches cyber security and public policy at the Blavatnik School of Government at the University of Oxford. Founder of the UK's National Cyber Security Centre, part of GCHQ.
- Sadie Creese: professor of cyber security in the Department of Computer Science at the University of Oxford.
- Gaven Smith: former director general for technology at GCHQ.
- Dan Jeffery: managing director at Daintta, a cyber, data, privacy and systems engineering company.
- Jamie MacColl: fellow in cyber security at the Royal United Services Institute.
- Julian Williams: head of the department of finance at Durham University. Specialist in quantitative financial and cyber risk.

The centre plans to monitor press reports and reports from business organisations to identify significant cyber attacks with multiple victims. It has partnerships with data providers to supply statistics on cancelled flights and disruption to datacentres, and works with the NHS to gather data on cancelled operations and hospital procedures. It also has access to advice from legal experts and cyber security specialists that respond to incidents, to help it build financial models of each significant cyber event. The models are reviewed and stress-tested, and the final say goes to the CMC's technical committee.

The centre aims to produce an impact report within 30 days of the cyber event that will focus on immediate financial losses.
It will not take into account longer-term losses caused by, for example, the risk of litigation, or other delayed effects.

The aim of the CMC is to make it easier for companies to buy cyber insurance and to know what magnitude of cyber event on the five-point scale they can expect to be covered for, said Ed Lewis, a director and founder of the centre.

The insurance industry has long struggled with how to insure cyber risks. Back in 2022, Lloyd's of London issued a bulletin mandating the exclusion of cyber war incidents from cyber insurance cover. But who would decide whether a cyber attack was an act of warfare by a hostile state? Government or insurers? Add to that the complex exclusion clauses developed by the London market for cyber insurance, and it was "a lawyer's dream", said Lewis.

It became clear that what mattered most was not which country was responsible for an act of cyber warfare, but the scale and severity of an attack. If a cyber attack had the digital fingerprints to show that it was directed against multiple targets, it had the hallmarks of a systemic attack.

Some insurers, particularly those that insure multiple small and medium-sized businesses, do not cover systemic risks. That is to avoid large losses if multiple clients are hit by the same catastrophic incident. However, businesses can obtain insurance cover to protect against systemic risks from other specialist insurers.

During the summer of 2022, Lewis went with a team of lawyers from his firm, Weightmans, working with insurer CFC, to France for six weeks to hammer out a solution. They came up with the idea of creating a company limited by guarantee to act as an independent centre of expertise on systemic cyber attacks. The team spent the first half of 2023 developing a methodology to assess the financial impact of cyber attacks on a five-point, hurricane-inspired scale, and in October that year incorporated the CMC as a company limited by guarantee.

The centre reviewed three cyber attacks in a trial run in 2024, and the results were surprising. Some of the most talked-about cyber attacks were not necessarily the most damaging to the UK economy.

Take the attack on the file transfer service MOVEit in May 2023. It affected over 2,000 organisations and exposed the personal data of around 64 million people. Although it generated headlines around the world and captivated the attention of the cyber security community, the economic impact of the MOVEit attack on the UK was as close to negligible as it is possible to reach on the CMC's hurricane scale.

In June 2024, another ransomware group struck pathology laboratory Synnovis, which processes blood tests for NHS organisations across London. The attack led to major disruptions for GP surgeries and NHS trusts, leading to delays in medical procedures, cancelled appointments and shortages of blood stocks. Despite attracting mass interest, the CMC judged the economic impact as relatively low, at between £100m and £1bn, with less than 0.1% of the population affected. That won it a rating of category two on the five-point scale.

The failure of an update to CrowdStrike's security software in July 2024 caused worldwide disruption to Windows computers, but after an initial burst of press coverage, it failed to capture the public's continued interest.
However, the CMC's experts rated CrowdStrike as a category three incident, significantly more impactful than MOVEit and Synnovis.

The CMC's assessments may not be infallible, but they come with a clear methodology and use data to inform the technical committee's decisions, all of which will be published and open to public scrutiny. The idea is that the centre will act very much like an independent arbitrator. Companies offering insurance and those buying insurance will be able to agree to be bound by its decision in any dispute over insurance cover. That means the centre will need to be seen as completely independent of the insurance industry and government, and that it will need to build a reputation for trusted decisions if it is to be successful.

The centre's current plans are to raise funding through membership fees, with the organisation hoping to attract members from a wide range of industries, professional services, manufacturing and retail, and insurers. Lewis stressed, however, that insurers and government will have no influence over the CMC's assessments.

"What we are very clear on is that the work of the technical committee has to be independent of government and independent of insurers," he said. "They have to be, as far as practically possible, beyond the potential for impeachment."

The work of the CMC is likely to influence the direction of government policy over cyber risks. Many hope it will help to shift the balance of regulation from policing data leaks to policing cyber failures that result in the loss of essential services.

Ciaran Martin cited as an example an attack by the Conti ransomware group on the Irish health service, which disrupted healthcare for months in 2021. When the Irish state refused to immediately pay the ransom, the Conti crime group stepped up the pressure by releasing medical data on the internet. It was only at that point that Ireland's Health Service Executive was obliged to notify regulators about the incident.

"It's such a stark illustration of the point that a whole national healthcare system, including cancer surgeries, had to stop, and that's not a breach of obligations, but the loss of a small amount of medical data [was considered a breach]," he told Computer Weekly.

That could change in the UK if the Cyber Security and Resilience Bill passes through parliament as expected. It introduces obligations for organisations to maintain critical services, and could lead to mandatory reporting of ransomware attacks.

"I'm not saying, 'Let's repeal data regulation and let's impose sweeping service obligations on small hairdressing salons', but I'm saying, 'Let's think about it carefully'," said Martin. "If you give a victim the choice between two bad situations - one is the loss of critical health services and the other is the loss of their personal data - most people would opt for losing personal data rather than losing access to medical care," he added.

Lewis concurs. "There seems to be a disproportionate focus on cyber incidents that also involve a data breach," he said. "I think it's probably fair to say there's been quite a bit of criticism of the Information Commissioner's Office and how those powers have been used over recent times."

He hopes that the CMC can remove what he calls "victim stigma", where fear of bad publicity or litigation can lead organisations hit by cyber attacks to opt for secrecy rather than openness. There are signs that this is happening already.
The British Library, which faced major disruption after an attack by the Rhysida ransomware gang, published a comprehensive lessons-learned report, which was widely applauded in the cyber security community. The Harris Federation, a network of schools in London and the South East that lost email and telephone access after a ransomware attack in 2021, has talked about its experience in a series of podcasts to help others improve their own cyber resilience.

For Martin, the CMC's primary aim is to deliver a better-functioning insurance market and better provision for companies seeking to insure against cyber attacks. He would like to see the CMC gain credibility over time as a source of factual information for academic, government and industry papers. And if the CMC is doing its job, he said, the media will be able to get a better handle on which cyber incidents are serious and which are likely to have a minor economic impact.

How to measure the impact of a cyber attack
1. Initial review: after reviewing press reports and initial data sources, the technical committee discusses the attack and decides whether to formally review it. Generally, the committee will only assess threats that cause more than £100,000 of damage and impact multiple organisations. The review includes assessing what types of organisations have been affected by the cyber attack and the creation of models to assess its economic impact.
2. Data collection: the centre collates media reports and supplements them by polling businesses through an arrangement with the British Chambers of Commerce. The poll identifies industry sectors that have been subject to cyber attacks but have not been reported by the press, either because they are less interesting to the public or because they have remained under the radar. Other data providers include Cirium, which provides details of all delayed and cancelled flights, and Parametrix, which provides data about outages of datacentres. Public sector data sources, such as the NHS, which publishes data on the impact of cyber attacks on clinical services, will also contribute. The centre plans fortnightly polls through the Office for National Statistics to collect further data on cyber attacks from up to 40,000 businesses.
3. Modelling: the centre builds models of the financial impact of cyber attacks by speaking to experts in the organisations that might be affected, incident responders and legal experts. An attack on an NHS supplier, for example, would involve assessing the costs of delayed operations and the cost of catching up on a backlog of treatment. The centre is able to build a model of the upper and lower financial impacts of cyber attacks, based on the types of organisations affected. The assumptions of the model are reviewed and stress-tested.
4. Review by technical committee: a technical committee of experts meets to review and challenge the data and decide on the category of financial severity of the incident.
5. Publication: the centre publishes an event report ranking the cyber event between zero and five on a scale of severity. The report includes an analysis of how the committee reached its decision, plus additional analysis and commentary from members of the technical committee.
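Translating the CMC's banding into something mechanical is straightforward once the thresholds are fixed. The sketch below encodes the bands cited in this article - under £100m for category one, £100m to £1bn for category two, and over £5bn for category five; the category three/four boundary is an illustrative assumption, as the CMC has not been quoted on it here.

```python
# Mapping an estimated economic impact (in GBP) to a CMC-style category.
# Category 1, 2 and 5 bounds come from the article; the category 3/4
# boundary is an illustrative assumption.

BANDS_GBP = [
    (100e6, 1),   # under £100m: category one
    (1e9,   2),   # £100m-£1bn: category two (e.g. Synnovis)
    (3e9,   3),   # £1bn-£3bn: category three (boundary assumed)
    (5e9,   4),   # £3bn-£5bn: category four (boundary assumed)
]

def cmc_category(impact_gbp: float) -> int:
    for upper_bound, category in BANDS_GBP:
        if impact_gbp < upper_bound:
            return category
    return 5      # over £5bn: category five

print(cmc_category(450e6))   # prints 2 - a Synnovis-scale event
```

As the five-step process above makes clear, the hard part is not the banding but producing a defensible estimate of the impact figure in the first place.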
-
South Korea plots to become home to world's largest AI datacentre
www.computerweekly.com
Construction of a datacentre that is projected to be 3GW in size is set to start later this year in South Korea.
By Caroline Donnelly, Senior Editor, UK. Published: 18 Feb 2025 14:00

A newly created public-private partnership looks set to oversee the creation of the world's largest artificial intelligence (AI) datacentre in South Korea. Work on the datacentre, which has a projected total cost of $35bn, is set to begin later this year and is expected to create a 3GW (gigawatt) facility by the time of its scheduled completion in 2028.

Overseeing the project will be investment company Stock Farm Road, which has signed a memorandum of understanding (MoU) with governor Kim Young-rok of South Korea's Jeollanam-do province that will pave the way for the site's development.

The facility will feature advanced cooling infrastructure, regional and international fibre bandwidth, and the ability to handle significant and sudden variations in energy load, according to a statement. It will "serve as a foundation for next-generation AI enablement, fostering innovation and economic growth in the region and beyond".

The statement further claims the project will lead to the creation of 10,000 jobs in a variety of disciplines spanning energy supply and storage, renewable energy production, equipment supply, and research and development.

"This is more than just a technological milestone; it's a strategic leap forward for Korea's global technological leadership," said Stock Farm Road co-founder Amin Badr-El-Din. "We are incredibly proud to partner with Stock Farm Road and the Jeollanam-do government to build this crucial infrastructure, creating an unprecedented opportunity to build the foundation for next-generation AI."

Stock Farm Road has a background in using data analytics and AI tools to manage energy resources, and operates its own proprietary energy-to-intelligence platform, known as e2i. The company said its expertise in this area will come into play during the datacentre's construction, while other parts of its business will provide access to capital to fund the build. Meanwhile, the Jeollanam-do government side of the partnership will provide support by enabling the developers to secure the permits and approvals needed to allow construction of the datacentre to start.

Stock Farm Road co-founder Brian Koo said the project could have a transformational impact on the region. "Having witnessed first-hand the immense technological capabilities of large Asian enterprises, I recognise the potential of this project to elevate Korea and the region to a new level of technological advancement and economic prosperity," said Koo.
"This datacentre is not merely an infrastructure project, but the launchpad for a new digital industrial revolution."

Looking ahead, Stock Farm Road said in its statement that the South Korean project marks the delivery of the first phase of its broader global strategy, whereby the company will seek to establish similar AI infrastructure partnerships across Asia, Europe and the US over the next 18 months.

The decision to site the datacentre in the Jeollanam-do province of South Korea is notable, and in keeping with the direction of travel the country's government has been pursuing for some time with regard to supporting the spread of datacentre developments outside of the central Seoul area.

"The general policy direction is for the decentralisation of datacentres away from the greater Seoul area to regional areas for the establishment of purpose-led districts," said John Pritchard, Korea datacentre advisory team lead at real estate consultancy Cushman & Wakefield, in a late 2024 research note. "However, this provides challenges for users, whereby latency and proximity to [the] end user are key considerations, and as such datacentres operating in the metropolitan area will become crucial enabling tools for digital groups."

Read more about APAC datacentres
- The Asia-Pacific Data Centre Association will advocate for policies to drive the security and resiliency of datacentres and minimise environmental impact, among other goals.
- Enterprises in the Asia-Pacific region are moving from their own datacentres into colocation facilities to reduce cost, improve efficiency and lower their carbon footprint.
-
MSP cuts costs with Scality pay-as-you-go anti-ransomware storage
www.computerweekly.com
Autodata gets Scality as a service for on-site immutable storage via Artesca, allowing customers to recover rapidly from ransomware at the same cost per terabyte no matter the volume.
By Antony Adshead, Storage Editor. Published: 18 Feb 2025 10:50

London-based managed service provider (MSP) Autodata Products has opted for Scality Artesca object storage through its Scality cloud service provider (SCSP) pay-as-you-go purchasing option, which it uses to supply on-premise backup against ransomware for customers. Benefits of the SCSP licensing model include being able to offer customers highly scalable backup with short recovery time objectives (RTOs), at the same cost per terabyte (TB) whether it's for 25TB or 2.5PB (petabytes).

Autodata Products provides IT solutions focused on backup, storage and security via its Cloudlake offer, predominantly based on Wasabi cloud, Veeam backup and Scality storage. It has around 500 customers on rolling monthly contracts and has offices in the US and the Netherlands.

Within its core offer it has Cloudlake Ransomware Recovery Vault (RRV), and it is here that it decided to offer services using Scality Artesca and SCSP. It was already a customer of Veeam's pay-as-you-go programme. RRV is based around the provision of on-site immutable storage for customers. Here, Autodata deploys Scality Artesca object storage as a backup target and pays only for what is used by its customers.

Scality launched version 2.0 of its Artesca platform in 2023, with a big emphasis on the ransomware protection inherent to object storage. Artesca is Scality's object storage product aimed at single-application use cases, and is heavily targeted at data protection.

According to head of datacentre and cloud services Ant Bucknor, Autodata recommends customers keep a workable amount of critical data on-site so they can restore very quickly should a ransomware attack or other outage occur. He said: "Our clients were restoring their data from the cloud. But that would often break their RTO policy because of the length of time it would take to get everything back up and running. Then they would connect to the cloud location, and then it would take them longer to bring the data back."

So, how much data does Autodata recommend customers store on-site? "I would suggest probably the last 30 days," said Bucknor. "That would be my base guide, but obviously every client's different. We've got clients where they have data they need to recover quickly from the last six months, and others where, if it's over 48 hours old, the data is completely worthless. The cloud will provide you with a full copy, and it will be immutable. But it isn't necessarily going to be quick enough."

Key to the benefits for Autodata is that it can supply ransomware recovery solutions that would previously have been out of reach of SME and mid-market customers, and that as it buys more product from Scality, prices should decrease. Bucknor said: "Traditionally, these solutions were in the hundreds of thousands of pounds.
"Whereas, because of the flexibility we have with Scality, we now have solutions that are suitable for SMB, mid-market, education, local government, etc, whereas these solutions just wouldn't have been accessible to that market before. There's a benefit from a profitability-at-scale point of view, as in the more of these we do over time, the bigger the benefit there is to Autodata as a business, with a knock-on effect in better commercial terms for our customers."

Pay-as-you-go is relatively new in storage purchasing, but it's a rising trend. HPE offers pay-as-you-go storage as part of its GreenLake offer, which stretches across its IT portfolio. NetApp, meanwhile, offers Keystone storage as a service, while Pure Storage has its Evergreen storage programmes.

"Pay-as-you-go is the future," said Bucknor. "The reason is, people want to have a cloud-like purchasing model where they can buy what they want for as long as they want it, and when they don't want it any more, they can stop paying for it. They want to know what their costs are. Not have bought something over five years and suddenly they want to buy an extra few terabytes of data and it's three times the price because they're locked in. People want a more flexible solution."

Read more about pay-as-you-go storage
- Storage explained: consumption models of storage procurement. We look at consumption models of storage purchasing, and how cloud operating models have made them mainstream and supplanted the traditional three-year lift-and-shift datacentre refresh.
- Pure's storage as a service: "We can offer what others can't". All-flash storage vendor Pure makes bold claims about subscription pricing for storage, stating that the competition can't offer what it can because its arrays are built for non-disruptive upgrades.
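For readers wondering what "immutable" means in practice at the API level, object stores in this class typically expose Amazon's S3 Object Lock semantics, which let a backup application write objects that cannot be altered or deleted until a retention date passes. The sketch below shows the general S3 mechanism using boto3; the endpoint, credentials, bucket and key are placeholders, and it illustrates the standard API rather than Autodata's specific configuration (the target bucket must have been created with Object Lock enabled).

```python
# Writing an immutable backup object via S3 Object Lock, the mechanism
# typically behind "immutable storage" claims. Endpoint, credentials,
# bucket and key below are placeholders, not a real deployment.
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://object-store.example.internal",  # placeholder
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

with open("restore-point.vbk", "rb") as backup_file:
    s3.put_object(
        Bucket="backup-vault",
        Key="veeam/restore-point-2025-02-18.vbk",
        Body=backup_file,
        ObjectLockMode="COMPLIANCE",  # cannot be shortened or removed
        ObjectLockRetainUntilDate=(
            datetime.now(timezone.utc) + timedelta(days=30)
        ),
    )
```

A 30-day retention window lines up with Bucknor's suggestion of keeping roughly the last 30 days of critical data on-site.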
-
Meta's planned subsea cable will exceed circumference of Earth and support AI innovation
www.computerweekly.com
Meta's planned 50,000km subsea cable will be the world's longest and connect the five major continents.
By Karl Flinders, Chief Reporter and Senior Editor EMEA. Published: 18 Feb 2025 11:45

Meta has announced its plan for a subsea cable that will span the globe, connecting emerging economies such as India, South Africa and Brazil to the US. Facebook, Instagram and WhatsApp's parent company announced what is known as Project Waterworth in a blog post.

The social media giant's vice-president of network engineering, Gaya Nagarajan, and Alex-Handrah Aimé, its global head of network investments, said the 50,000km cable will be the world's longest, and will use the highest-capacity technology available. Regions of rapid economic growth will be connected directly to the US through the cable, which the Meta executives said will enable greater economic cooperation, facilitate digital inclusion and open opportunities for technological development in these regions.

Meta said it has already developed over 20 subsea cables. "With Project Waterworth, we continue to advance engineering design to maintain cable resilience, enabling us to build the longest 24 fibre pair cable project in the world and enhance overall speed of deployment," wrote the Meta executives.

The multibillion-dollar investment, which will see cables laid at depths of 7,000 metres, will take years to complete, but promises increased access to high-speed connectivity, which Meta said could, for example, support artificial intelligence (AI) innovation across the world.

"AI is revolutionising every aspect of our lives, from how we interact with each other to how we think about infrastructure, and Meta is at the forefront of building these innovative technologies," the company said.
"As AI continues to transform industries and societies around the world, it's clear that capacity, resilience and global reach are more important than ever to support leading infrastructure."

Read more about subsea cables
- Vodacom lands 2Africa subsea cable in the Eastern Cape, South Africa: what is described as a transformative subsea cable lands at the Vodacom facility in Gqeberha in South Africa's Eastern Cape, to provide a gateway to direct international connectivity for faster, more reliable communications.
- RETN unveils new low-latency London to Paris connectivity: international network services provider uses the CrossChannel subsea cable to launch a new low-latency network route between London and Paris with a length of 550km.
- Grace Hopper subsea cable lands in UK: the first private subsea cable to the UK funded by Google is intended to improve diversity and resilience of the network, underpinning consumer and enterprise products, increasing capacity and powering services.
- Batelco claims Bahrain first with 400G international capacity: optical network technology supplier inks deal with leading telco provider in Bahrain to deliver 400G-capacity high-speed connectivity as part of an initiative to upgrade internet service in the country.

The blog post added: "With Project Waterworth, we can help ensure that the benefits of AI and other emerging technologies are available to everyone, regardless of where they live or work."

While subsea cables promise to enable global connectivity, there are concerns over how these costly and critical infrastructures can be protected from attacks by hostile states. MPs and peers recently launched an inquiry into the UK's ability to protect the undersea internet cables that link the country with the rest of the world. This followed heightened threats of sabotage.

The Joint Committee on the National Security Strategy, which scrutinises government decision-making on national security, aims to assess the UK's readiness for potential attacks on critical undersea communication cables. The inquiry followed a statement by defence secretary John Healey, warning that Russian president Vladimir Putin is targeting the UK's undersea oil, gas, electricity and internet cables, after a Russian spy ship entered British waters. According to the parliamentary committee, 99% of the UK's data passes through undersea internet cables.

"As the geopolitical environment worsens, foreign states are seeking asymmetric ways to hold us at risk," said committee chairman Matt Western. "Our internet cable network looks like an increasingly vulnerable soft underbelly. There is no need for panic - we have a good degree of resilience, and awareness of the challenge is growing. But we must be clear-eyed about the risks and consequences: an attack of this nature would hit us hard."

The global internet, which is critical for international communications and commerce, relies on a network of 500 cables that carry 95% of internet traffic. The cables are often in remote places, making them difficult and expensive to monitor.
-
Times are hard for fintech but latest report reveals glimmer of recovery
www.computerweekly.com

Fintech investment has been on a downward spiral for years, but the second half of this year could see the first shoots of recovery. Investment in UK fintechs fell by over a quarter last year, but there are signs that a recovery could be on its way, according to KPMG.

In its latest report into EMEA fintech investment trends, the Pulse of fintech, KPMG revealed that UK fintechs received $9.9bn (£7.8bn) in 2024, down 27% from $13.6bn in 2023, while total investment across the region fell to $20.3bn from $27.6bn the previous year.

Hannah Dobson, partner and UK head of fintech at KPMG, said UK investment is expected to remain relatively soft in the first half of this year, but added that it will likely begin to pick up as interest rates reduce further, with the common consensus being that this will happen in the third and fourth quarters.

Fintech industry expert Chris Skinner, CEO at The Finanser, told Computer Weekly that times are hard in the fintech space. "Fintechs had an amazing ride in the 2010s, but in the 2020s, it seems not," he said. "Fintech took a hammering in 2023, with investing down 48% compared with 2022, which was also a bad year, and now we move into 2025 and reflect on 2024, where it went down even more."

In its report, KPMG said geopolitical uncertainty, high levels of inflation and higher interest rates all contributed to more subdued levels of UK fintech investment.

Dobson added: "2024 was another tough year for fintech investment, which inevitably has led to some business failure and some consolidation. It has also sharpened the focus on a path to profit and cost control, which positively leads to more sustainable, saleable businesses in the longer term.

"In EMEA, and particularly the UK, there are signs of a slow recovery in deals as the reduction in interest rates and more political stability leads to better certainty. The impact of regulation is an ongoing challenge for fintechs across EMEA as they face new EU and UK regimes in areas such as AI and BNPL."

The largest fintech deal in Europe in 2024 was the $560.6m sale of online bank Knab to Austrian financial firm Bawag Group. The largest deal in the UK was the $267m venture funding round by money transfer provider Zepz.

It's not just Europe that saw a fall in investment.
Globally, fintech investment hit a seven-year low last year, with $95bn invested, compared with $113.7bn in 2023.

Karim Haji, global and UK head of financial services at KPMG, said there are some bright spots. "Payments continued to be the rockstar of the fintech subsectors, driven by late-stage deals and an increasing focus on consolidation, and regtech gained a lot of traction," said Haji. Global investment in the payments space hit $31bn in 2024, up from $17.2bn in 2023.

Haji added that while more deals are beginning to come through because of interest rate cuts in different jurisdictions and the lower cost of funding, the impacts of changing world trading conditions on inflation, interest rates and the market are yet to be known.

KPMG's figures mirror those published by Innovate Finance last month, which reported a 37% fall in investment in 2024 compared with 2023. Innovate Finance, the industry body for fintech in the UK, blamed tough market conditions that included rising interest rates and geopolitical instability, as well as a recalibration in venture capital fundraising.
-
AI-driven personalisation appealing to UK shoppers, says research
www.computerweekly.com
Retailers should be using artificial intelligence to increase brand loyalty through personalisation, according to research.
By Clare McDonald, Business Editor. Published: 17 Feb 2025 15:51

Almost a third of shoppers in the UK have said that personalisation assisted by artificial intelligence (AI) increases their loyalty to brands, according to research from Bazaarvoice. The content generation platform's Shopper Experience Index report found that 31% of shoppers in the UK believe AI-driven loyalty rewards increase their brand loyalty, and 28% claimed tailored rewards make them shop more often. More than 40% of shoppers in the UK also reported that personalised discounts or offers make them more likely to share a product or brand on their social media.

Zarina Stanford, CMO of Bazaarvoice, said: "In an era where consumers are inundated with choices, personalisation and contextualisation can prove to be a differentiator for brand loyalty and customer engagement. Why? It creates seamless and relevant experiences. Personalised and contextual - right time, right place, right form - offers and rewards go beyond generic discounts; they shape consumer decisions by delivering meaningful value tailored to individual preferences."

Retail isn't the only place where AI is having a huge impact, with a large number of companies and individuals already using technologies such as generative AI (GenAI) in their daily lives. Personalisation has played a large role in retail over the past 10 years as consumers have become increasingly demanding, so the use of AI to generate personalised rewards is a natural next step in that development.

Shopping habits have been changing as younger consumers gain spending power, leading many consumers between the ages of 18 and 34 to turn increasingly to social media for inspiration about what to buy and from where. But consumers have also been turning away from shopping online in recent years as physical discount stores offer more lucrative deals, forcing online retailers to try harder to entice shoppers back to the web through the use of loyalty schemes and personalised deals.

Content from other shoppers, such as reviews, is also becoming increasingly important for UK consumers when shopping online, with more than half of shoppers saying they find reviews useful, and 45% saying an item needs between 11 and 50 reviews before they will even consider buying it. Almost 70% of shoppers said they find content generated by other shoppers useful when making decisions about what to buy, with 12% saying it definitely impacts their shopping behaviour, and 43% saying it can have an effect most of the time. Some 16% of shoppers report they are likely to make a purchase based on user-generated content such as reviews, ratings, photos and videos.

Stanford said retailers need to be utilising personalisation, combined with good timing, to encourage consumers to make more purchases, something AI can help with, adding: "AI-infused tools like product recommendations, targeted offers and social proofing present a massive opportunity to amplify these personalised, relevant, contextual experiences.
"They save time and deliver tailored information to shoppers that brands might not otherwise have the resources or ability to provide."

But personalisation isn't the only aspect of retail that AI is helping with. This year's Retail Federation Big Show saw retailers showcase AI use cases such as creating digital twins of stores to keep track of inventory, or helping retail associates use generative AI to more easily access and interpret store or product data. Regardless of how they are using it, retailers turning to AI to help boost purchases and productivity is an inevitability as AI dominates the next wave of tech adoption.

Read more about retail technology
- Led by a technology enthusiast, AutoTrader is on a digital journey that began when it decided to take a different route in 2007.
- Zebra Technologies CEO Bill Burns discusses the company's growth strategy and how it is enhancing frontline worker capabilities through machine vision, artificial intelligence and robotics.
-
The Security Interviews: Yevgeny Dibrov, Armis
www.computerweekly.com

Over the past 20 to 30 years, the intelligence community has generated a stream of cyber security leaders - private cyber security companies are littered with former operatives of the American and British intelligence services. But in Israel's case, the intelligence-to-cyber pipeline has produced arguably the highest density of cyber security startups and organisations in the world. The likes of Check Point, CyberArk, Imperva, Palo Alto Networks and Radware can all claim links back to the Israel Defence Forces' (IDF) technology units.

Among these units, which likely date back to before Israel's founding in 1948, are the highly secretive cyber weapons and tech development shop Unit 81, and the more widely known signals intelligence Unit 8200. Israel's astonishing concentration of cyber security talent is largely attributable to both Unit 81 and Unit 8200, whose existence has only fairly recently been acknowledged. Mossad may get international attention, but it is Unit 8200 that gets the data to support it, and Unit 81 that builds the tech.

Acting as incubators for cyber security and hacking talent, these units benefit from Israel's compulsory military service laws and intensive screening processes, which divert individuals with potential from frontline armed service, although they also scout after-school computer clubs for likely-looking candidates.

That the IDF is the wellspring of Israel's cyber talent is these days no secret, but Armis CEO Yevgeny Dibrov - who is allowed to say little more about the time he served in Unit 81 beyond the fact that he was there - says there's more to the growth of Israel's cyber community than just the hothouse conditions at the IDF.

He compares the environment to that of a startup. "When you're a startup, when you're building something, you don't have much budget, but with what you have you still need to do outstanding things that differentiate a lot, that achieve a lot, and that puts you in a great place. We don't have the same budget as the CIA or the NSA, maybe point one of a percent, but we have no choice. There is no other way," he explains. "We have a lot of enemies and we want to win."

At first, Dibrov's pipeline into the IT industry does not seem all that different from most other people's, stemming from an initial schoolboy interest in computers, maths and physics, but he became hooked when he was tapped for Unit 81 as a fresh-faced teen.

"In the years I spent there I became fascinated by different capabilities, fascinated by this world, fascinated also by working hard for my country," he says. "Twice during my service I was part of the team that won the Israel Defence Prize, which is for outstanding achievements in the technology space."

The slogan of his unit was "Make the impossible possible", says Dibrov. "It's written over the door when you enter. You see it every day, and so you kind of live towards it. It's not just a cliché."

But the intelligence forces serve not only as a hub for creative talent, but as a hub for team-building.
Indeed, of Armis's first cohort of employees, about 50% served alongside Dibrov himself at Unit 81, and the others worked alongside his co-founder and chief technology officer (CTO) Nadir Izrael at Unit 8200.

"People get to know each other, and during my time at Unit 81, we were always talking to alumni that actually started companies and did great things," says Dibrov. "I remember my team leader in the army was [Wiz CEO] Assaf Rappaport, so we were always meeting some of the alumni from our unit and learning what they had done.

"It makes you excited," he says. "It makes you think, okay, when I'm out, here is what I want to do. I already knew that I wanted to start a company."

Alongside heading off to study at Technion, the Israel Institute of Technology, between 2010 and 2013, at the end of his service Dibrov helped set up Adallom, with which Rappaport was also involved. Adallom was a cloud access security brokerage (CASB) specialising in visibility, governance and protection across business applications such as Box, Google Apps, Microsoft Office 365 and Salesforce. The firm's Office 365 work clearly stood out, because in September 2015, Microsoft bought the company for over $300m. Just a couple of months later, Dibrov and Izrael started Armis, with the first employees coming on board in February 2016.

Asked to "explain like I'm five", Dibrov describes Armis as a cyber exposure management platform that essentially provides its customers with a Google Map of their IT environment, with every single asset accounted for, whether it's something run-of-the-mill like a laptop or smartphone, or operational technology (OT) like industrial controllers, or even medical equipment. On top of this basic map, Armis provides additional layers covering security risk discovery, monitoring and management, and ultimately, remediation. "We want to not just allow you to see your risk, but reduce it, whether through patching devices or mitigating threats with different rules in your technology environment," he says.

Armis was earlier than many to the OT/internet of things (IoT) side of security, mapping it as a factor early in its history, before the topic really started to hit mainstream security conversations about six or seven years ago. What was the spark that led Dibrov to make this bet?

"We really started from talking to a lot of customers, talking to a lot of CIOs, and we were hearing about the explosion of connected devices," he explains. "We looked at the variety of different environments and we saw there was a gap. On the one hand, you have laptops and servers that are covered by your antivirus or next-gen antivirus, and then you have everything else. And then everything else changes in different industries. If you look at an airport, they have a big gap around a lot of operational technology stuff. They have different distribution centres, logistics centres and more. They have datacentres. They have buildings with building management systems."

At about the same time, incidents such as NotPetya and WannaCry were exposing the precarious security of such environments - particularly in healthcare settings - and this helped push people towards a more holistic view of cyber security.

"It was a huge push across the board," says Dibrov.
"Everyone suddenly understood that they needed to have visibility into what they have in these environments, because imagine if I'm an attacker: why would I attack a laptop if the laptop has 50 agents on it? I attack the most vulnerable thing, and that's usually devices that don't run any agents or antivirus, devices that are mostly not updated or cannot be patched, and a bunch of old XP machines in those areas. These devices are often the most important in the organisation. Look at a hospital. How can you compare the importance of a laptop versus an MRI scanner?"

Customers took to this like ducks to water, and today Armis works with over 35% of the Fortune 100. From day to day there is no such thing as a typical customer, says Dibrov, but they tend to be larger, distributed organisations with highly complex environments and a lot of devices. Armis claims currently to have approximately 5.3 billion connected devices in harness.

What's the weirdest thing he ever found? "We have things like cars that connect to the company network, to wireless air fryers - we see those a lot. And the amount of types of cameras, you would never believe," says Dibrov. "Security teams have no idea what cameras they have, and they're 90% Chinese, potentially exploited with backdoors, and often in the most critical environments."

Like many of its peers, Armis has also been branching out into threat research and frequently publishes its own thought leadership on diverse topics - recent ones include breaking down CISA's most exploited vulnerabilities and the emergence of DeepSeek. "We have so much data now, and our customers can benefit from that," says Dibrov. "We also acquired a company in the space, some super-talented guys who merge a lot of their own data with data we generated to provide early warning, which has been very significant."

Keeping in touch with Armis's buyers is a source of pride for Dibrov, who makes a point of frequently checking in with his user advisory board and speaking to six or seven individual customers every day, whether those are long-term existing ones, new ones, or those moving through their procurement or onboarding processes. "What do they need? What do they think like? What do we need to do differently?" says Dibrov. "This is something that is ongoing for us - always listening, always developing, always running fast, and always providing real solutions to real problems."

Dibrov declares himself particularly paranoid when it comes to the competition, and likes to try to think about 18 months ahead in terms of innovation. "This is something that is always on my mind, because that's the biggest differentiator," he says. "You need to have first of all the best product, and then to execute from there. That's what keeps me up at night."

Armis recently closed a large Series D funding round, raising $200m to take it to a total valuation of over $4bn. And having made two acquisitions in the past 12 months - Silk Security in April 2024 and CTCI in February 2025 - Dibrov is open to more, as well as exploring the possibility of an initial public offering (IPO).

Beyond these goals, Dibrov is, of course, keeping a close eye on the developing threat landscape. His views on where things are going tally with those of many other observers. "We keep seeing a lot of state actors, from Russia, China, North Korea, Iran. We keep seeing them, and we keep seeing a lot of targeting of EMEA and US critical infrastructure and manufacturing," he says. "We see them sometimes also leveraging AI [artificial intelligence].
"My guess is we'll see that more and more, and defenders really need to be prepared."

Read more in the Security Interviews series
- Threat intel expert and author Martin Lee, EMEA technical lead for security research at Cisco Talos, joins Computer Weekly to mark the 35th anniversary of the first ever ransomware attack.
- Okta's regional chief security officer for EMEA sits down with Dan Raywood to talk about how Okta is pivoting to a secure-by-design champion.
- We speak to Google's Nelly Porter about the company's approach to keeping data as safe as possible on Google Cloud.
-
AI models explained: The benefits of open source AI models
www.computerweekly.com

Open source software has a number of benefits over commercial products, not least the fact that it can be downloaded for free. This means anyone can analyse the code and, assuming they have the right hardware and software environment configured, start using the open source code immediately.

With artificial intelligence (AI), there are two parts to being open. The source code for the AI engine itself can be downloaded from a repository, inspected and run on suitable hardware, just like other open source code. But "open" also applies to the data model, which means it is entirely feasible for someone to run a local AI model that has already been trained. In other words, with the right hardware, a developer is free to download an AI model, disconnect the target hardware from the internet and run it locally, without the risk of query data being leaked to a cloud-based AI service.

And since it is open source, the AI model can be installed locally, so it does not incur the costs associated with cloud-hosted AI models, which are generally charged based on the volume of queries, measured in tokens submitted to the AI engine.

All software needs to be licensed. Commercial products are increasingly charged on a subscription basis and, in the case of large language models (LLMs), the cost correlates with the amount of usage, based on the volume of tokens submitted to the LLM and the hardware consumed in terms of hours of graphics processing unit (GPU) time used by the model when it is queried.

Like all open source software, an LLM that is open source is subject to the terms and conditions of the licensing scheme used. Some of these licences put restrictions on how the software is used but, generally, there are no licence fees associated with running an open model locally. However, there is a charge if the open model is run on public cloud infrastructure or accessed as a cloud service, which is usually calculated based on the volume of tokens submitted to the LLM programmatically using application programming interfaces (APIs).

Beyond the fact that they can be downloaded and deployed on-premise without additional cost, the openness of these models helps to progress their development, in a similar way to how the open source community is able to improve projects. Just like other open source projects, an AI model that is open source can be checked by anyone. This should help to improve its quality, remove bugs and go some way to tackling bias, which arises when the source data on which a model is trained is not diverse enough. The following podcast explores AI models further: download this podcast, which covers the different AI models available.

Most AI models offer free or low-cost access via the web to enable people to work directly with the AI system. Programmatic access via APIs is often charged based on the volume of tokens submitted to the model as input data, such as the number of words in a natural language query. There can also be a charge for output tokens, which is a measure of the data produced by the model when it responds to a query.

Several open source large language models offer smaller versions, known as small language models (SLMs). These are trained using less data, which means they offer adequate performance on modest hardware. However, they may be limited in what they can achieve, given that an SLM is not trained on the vast knowledge base required by an LLM.
Examples include:
DeepSeek-tiny.
DistilBERT from Hugging Face.
Gemma from Google.
Ministral from Mistral.
Phi-mini from Microsoft.
TinyLlama from Meta.

Since it is open source, an open model can be downloaded from its open source repository (repo) on GitHub. The repository generally contains different builds for target systems such as distributions of Linux, Windows and macOS. However, while this approach is how developers tend to use open source code, it can be a very involved process, and a data scientist may just want to try the latest, greatest model without having to get into the somewhat arduous process of getting the model up and running.

Step in Hugging Face, an AI platform where people who want to experiment with AI models can research what is available and test them on datasets, all from one place. There is a free version, but Hugging Face also provides an enterprise subscription and various pricing for AI model developers for hosting and running their models.

Another option is Ollama, an open source, command-line tool that provides a relatively easy way to download and run LLMs. For a full graphical user interface to interact with LLMs, it is necessary to run an AI platform such as Open WebUI, an open source project available on GitHub.

Cyber security leaders have raised concerns over the ease with which employees can access popular LLMs, which presents a data leakage risk. Among the widely reported leaks is Samsung Electronics' use of ChatGPT to help developers debug code. The code (in effect, Samsung Electronics' intellectual property) was uploaded into the ChatGPT public LLM and effectively became subsumed into the model.

The tech giant quickly took steps to ban the use of ChatGPT, but the growth in so-called copilots and the rise of agentic AI have the potential to leak data. Software providers deploying agentic technology will often claim they keep a customer's private data entirely separate, which means such data is not used to train the AI model. But unless it is indeed trained with the latest thinking, shortcuts, best practices and mistakes, the model will quickly become stale and out of date.

An AI model that is open can be run in a secure sandbox, either on-premise or hosted in a secure public cloud. But this model represents a snapshot of the AI model the developer released and, similar to AI in enterprise software, it will quickly go out of date and become irrelevant. However, whatever information is fed into it remains within the confines of the model, which allows organisations willing to invest the necessary resources to retrain the model using this information. In effect, new enterprise content and structured data can be used to teach the AI model the specifics of how the business operates.

There are YouTube videos demonstrating that an LLM such as the Chinese DeepSeek-R1 model can run on an Nvidia Jetson Nano embedded edge device, or even a Raspberry Pi using a suitable adapter and a relatively modern GPU card. Assuming the GPU is supported, it also needs plenty of video memory (VRAM), because for best performance the LLM needs to run in memory on the GPU.

Inference requires less memory and fewer GPU cores, but the more processing power and VRAM available, the faster the model is able to respond, measured in the tokens it can process per second. For training LLMs, the number of GPU cores and VRAM requirements go up significantly, which equates to extremely costly on-premise AI servers.
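To make the local-deployment idea concrete, here is a minimal sketch of querying a model served by Ollama on the same machine, via its local REST endpoint. It assumes Ollama is installed and a model has already been pulled (for example with "ollama pull llama3.2"); the model name and prompt are illustrative.

# Minimal sketch: query a locally hosted open model through Ollama's
# local REST API. Assumes Ollama is running and the model is pulled.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",   # any locally pulled open model (illustrative)
        "prompt": "Summarise the benefits of open source AI models.",
        "stream": False,       # return a single JSON object, not a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])  # generated text never leaves the machine

Because the request never leaves localhost, prompts and responses stay on the machine, which is precisely the data-leakage benefit described above.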
Even if the GPUs are run in the public cloud with metered usage, there is no getting away from the high costs needed to run inference workloads continuously. Nevertheless, the sheer capacity of compute power available from the hyperscalers means it may be cost-effective to upload training data to an open LLM hosted in a public cloud.

As its name suggests, a large language model is large. LLMs require huge datasets and immense farms of powerful servers for training. Even if an AI model is open source, the sheer cost of the hardware means only those organisations prepared to make upfront investments in hardware, or to reserve GPU capacity in the public cloud, have the means to operationalise LLMs fully.

But not everyone needs an LLM, and that is why there is so much interest in models that can run on much cheaper hardware. These so-called small language models (SLMs) are less compute-intensive, and some will even run on edge devices, smartphones and personal computers (see box).

Read more about AI models
The Amazon Nova family of GenAI models boasts faster speeds and lower costs, and caters to needs from text and image generation to video creation and multimodal application.
Gartner expert Nicholle Lindner explains how companies like Klarna are leveraging AI to drive new revenue, and what businesses need to consider when embracing the technology.
Tech giant Google is becoming more assertive with its AI releases by introducing new experimental versions of Gemini and a low-end model.
-
RAG AI: Do it yourself, says NYC data scientist
www.computerweekly.com

Organisations should build their own generative artificial intelligence (GenAI) systems based on retrieval augmented generation (RAG) with open source products such as DeepSeek and Llama. This is according to Alaa Moussawi, chief data scientist at New York City Council, who recently spoke at the Leap 2025 tech event in Saudi Arabia. The event, held near the Saudi capital Riyadh, majored on AI and came as the desert kingdom announced $15bn of planned investment in AI.

But, says Moussawi, there's nothing to stop any organisation testing and deploying AI with very little outlay at all, as he described the council's first such project way back in 2018.

New York City Council is the legislative branch of the New York City government that's mainly responsible for passing laws and the budget in the city. The council has 51 elected officials plus attorneys and policy analysts. What Moussawi's team set out to do was make the legislative process more fact-based and evidence-driven, and make the everyday work of attorneys, policy analysts and elected officials smoother.

To that end, Moussawi's team built its first AI-like app, a duplicate checker for legislation, for production use at the council in 2018. Whenever a council member has an idea for legislation, it's put into the database and timestamped so it can be checked for originality and credited to the elected official who made that law come to fruition. There are tens of thousands of ideas in the system, and a key step in the legislative process is to check whether an idea has been proposed before.

"If it was, then the idea must be credited to that official," says Moussawi. "It is a very contentious thing. We've had errors happen in the past where a bill got to the point of being voted on and finally another council member recalled they had proposed the idea, but the person who had done the duplicate check manually had somehow missed it."

By today's standards, it's a rudimentary model, says Moussawi. It uses Google's Word2Vec, which was released in 2013 and captures information about the meaning of words based on those around it. "It's somewhat slow," says Moussawi. "But the important thing is that while it might take a bit of time, five or 10 seconds, to return similarity rankings, it's much faster than a human and it makes their jobs much easier."

The key technology behind the duplicate checker is vector embedding, which is effectively a list of numbers, the vectors, that represent the position of a word in a high-dimensional vector space. "That could often consist of over a thousand dimensions," says Moussawi. "A vector embedding is really just a list of numbers."

Moussawi demonstrated the idea by simplifying things down to two vectors. In a game of cards, for example, you can take the vector for royalty and the vector for woman, and adding them together should give you the vector for queen. Strong vector embeddings can derive these relationships from the data, says Moussawi. Similarly, if you added the vectors for royalty and man, you can expect to get the vector for king.

That's essentially the technology in the council's duplicate checker. It trains itself by using the full set of texts to generate its vector embeddings. "Then it sums over all the word embeddings to create an idea vector," he says. "We can measure the distance between this idea for a law and another idea for a law." A toy sketch of this arithmetic appears below.
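As an illustration of the arithmetic Moussawi describes, here is a minimal sketch using invented two-dimensional embeddings; real Word2Vec vectors are learned from text and have hundreds of dimensions.

import numpy as np

# Toy 2-D embeddings, invented for illustration only; real embeddings
# are learned from a corpus and have hundreds of dimensions.
royalty = np.array([0.9, 0.1])
woman   = np.array([0.1, 0.8])
man     = np.array([0.1, -0.8])
queen   = np.array([1.0, 0.9])

# Vector arithmetic: royalty + woman should land near queen.
candidate = royalty + woman
print(f"distance to 'queen': {np.linalg.norm(candidate - queen):.3f}")

# An "idea vector" for a document is just the sum of its word vectors;
# Euclidean distance between two idea vectors measures similarity.
idea_a = royalty + woman
idea_b = royalty + man
print(f"distance between ideas: {np.linalg.norm(idea_a - idea_b):.3f}")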
You could measure it with your ruler if you were working in two-dimensional space, or apply the Pythagorean theorem extended to a higher-dimensional space, which is fairly straightforward. And that's all there is to it: the measure of distance between two ideas.

Moussawi is a strong advocate that organisations should get their hands dirty with generative AI (GenAI). He's a software engineering PhD and a close student of developments through the various iterations of neural networks, but is keen to stress their limitations.

"AI text models, including the state-of-the-art models we use today, are about simply predicting the next best word in a sequence of words and repeating the process," he says. "So, for example, if you ask a large language model [LLM], 'Why did the chicken cross the road?', it's going to pump it into the model and predict the next word, 'the', and the next one, 'chicken', and so on.

"That's really all it's doing, and this should somewhat make you understand why LLMs are actually not intelligent and don't have true thought the way we do. By contrast, I'm explaining a concept to you, and I'm trying to relay that idea and I'm finding the words to express that idea. A large language model has no idea what word is going to come next in the sequence. It's not thinking about a concept."

According to Moussawi, the big breakthrough in the scientific community that came in 2020 was that compute, datasets and parameters could scale and scale, and you could keep throwing more compute power at them and get better performance. He stresses that organisations should bear in mind that the science behind the algorithms isn't secret knowledge: "We have all these open source models like DeepSeek and Llama. But the important takeaway is that the fundamental architecture of the technology did not really change very much; we just made it more efficient. These LLMs didn't learn to magically think all of a sudden, we just made them more efficient."

Coming up to date, Moussawi says New York City Council has banned the use of third-party LLMs in the workplace because of security concerns. This means the organisation has opted for open source models that avoid the security concerns that come with cloud-based subscriptions or third-party APIs.

"With the release of the first Llama models, we started tinkering on our local cluster, and you should too. There are C++ implementations that can be run on your laptop. You can do some surprisingly good inference, and it's great for developing a proof-of-concept, which is what we did at the council."

The first thing to do is to index documents into some vector database. This is all work you do once on the back end to set up your system, so it's ready to be queried based on the vector database that you've built. Next, you need to set up a pipeline to retrieve the documents relevant to a given query. The idea is that you ask it a prompt and run that vector against your vector database: legal memos, plain language summaries or other legal documents that you've copied from wherever, depending on your domain.

This process is known as retrieval augmented generation, or RAG, and it's a great way to provide your model with scope regarding what its output should be limited to. A minimal sketch of such a pipeline appears below.
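The following is one possible shape for that pipeline, not the council's actual code: the embedding model (the open sentence-transformers library with a small open model) and the in-memory numpy "vector database" are illustrative choices, and the documents are invented.

# Minimal RAG sketch: index documents as vectors, retrieve the closest
# match for a query, and use it as context for an LLM prompt.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small open embedding model

# One-off back-end step: index documents as vectors.
documents = [
    "Memo: proposed bill on e-bike battery safety standards.",
    "Summary: local law requiring composting in city buildings.",
    "Memo: idea for expanding curbside recycling pickup.",
]
doc_vectors = model.encode(documents)

# Query time: embed the prompt and retrieve the nearest document.
query = "battery fire regulations for electric bicycles"
q_vec = model.encode([query])[0]
distances = np.linalg.norm(doc_vectors - q_vec, axis=1)
best = int(np.argmin(distances))

# The retrieved text is prepended to the LLM prompt as context,
# constraining the answer to citable source material.
prompt = f"Answer using only this context:\n{documents[best]}\n\nQ: {query}"
print(prompt)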
Constraining the model this way significantly reduces hallucinations and, since it's pulling the documents it responds with from the vector database, it can cite sources. These, says Moussawi, provide guardrails for your model and give the end user a way to ensure the output is legitimate, because sources are being cited.

And that's exactly what Moussawi's team did, and his message, while he awaits delivery of the council data science team's first GPUs, is: what are you waiting for?

Read more about AI and Saudi Arabia
Storage technology explained: Vector databases at the core of AI: We look at the use of vector data in AI and how vector databases work, plus vector embedding, the challenges for storage of vector data and the key suppliers of vector database products.
Saudi Arabia calls for humanitarian AI after tightening screws on rights protesters: Oppressive state wants global digital identity system at the heart of all AI, to make it trustworthy and prevent it being used for unauthorised surveillance.
-
Opinion: Saudi plans to be an IT superpower, but challenges lie ahead
www.computerweekly.com

Saudi Arabia is spending big on IT, and on artificial intelligence (AI) in particular, in an effort to diversify from its historic, massive dependency on oil production and revenues. One huge advantage is that it has the cash, with $15bn of planned investment in AI announced at Leap 2025 near Riyadh last week.

The event was wall-to-wall with speakers who wove a vision of Saudi Arabia as an IT-driven superpower, a regional datacentre hub and a hive of digital innovation and transformation, all encapsulated in the kingdom's Vision 2030 plans. Speaker after speaker conjured persuasive impressions of Saudi Arabia's actual and potential potency in IT terms, but getting there will require a critical mass of tech company presence. That will also mean attracting talent and nurturing its own through education. It also requires infrastructure, physical and digital.

Those needs are all recognised, as is the desire to transition from an economy dominated by the state sector. So, how far along that road is the kingdom, and can it overcome the challenges it faces?

As is often the case at events like this, it feels like there's a huge disparity between the visions of enthusiastic speakers and life beyond the doors of the event halls. And as a long-time visitor to and observer of the region, with enough Arabic to get around independently, it appears there are some key challenges in the path of Saudi Arabia's ambitious plans to be a developed digital economy. In one sense, that's to be expected, as the country is nearer the start of its journey than the end. But I was left wondering: was the show itself a microcosm that highlighted some of the obstacles?

Take getting to the conference. The local taxi app, Careem, advertised 100% off for Leap attendees. It didn't work. When I arrived at the event via my first ride, I couldn't pay via the app, except by charging a wallet in UAE currency, which didn't seem practical or wise given that wasn't the country I was in. And so I couldn't get the discount, because I had to pay using cash dollars (always a useful fallback).

Meanwhile, the show lacked a taxi lane to take attendees to the right part of the site. Instead, I was dumped in a vast car park a long way from the main exhibition area, which covered a site of several square kilometres. Uber drivers seemed to be persona non grata on-site, as one I went with quickly switched the placard in his window to Careem as he approached the venue.

It was also impossible to hail an Uber to the site, as the designated pickup spot seemed locked down by drivers set on chiselling the highest possible fare out of attendees. Drivers were understandably desperate to recoup the outlay inherent in having driven 60km from Riyadh and hanging around all day. On one such occasion, I had to threaten to call the police when a taxi driver suddenly announced a doubling of the fare after a near two-hour journey from the event to Riyadh. A photo of his meter taken at 7:30pm showed it had been running since 10:30 that morning.

Catching taxis often involved drama. I even had the interesting experience of hailing an Uber, the app telling me it had arrived, and it then heading to my destination without me in it. I wouldn't have taken so many taxis, but in Riyadh, if there was a main road in the way (think eight lanes of high-volume, high-speed traffic), it was the only way to get across.

Read more about Saudi Arabia
Saudi puts $15bn into AI as experts debate next steps.
The kingdom's Leap 2025 tech show is the backdrop for huge investment, plus debate over the future of artificial intelligence as a productivity tool, but one which can also potentially undermine human society.
AI at Leap 2025: Huge potential, but a threat to the fabric of society? Thought leaders in artificial intelligence gathered at Saudi Arabia's Leap 2025 tech show to set out the next steps for enterprise AI and agentic AI, but also AI's potential danger to human society.

Inside the event, speakers were scattered across numerous stages over a vast area. The show app's map was tiny, physical maps were non-existent and signage was only useful if you were already near where you wanted to go.

There were many hundreds of staff. Their tabards said "crowd control", and they seemed mostly there to make sure the huge numbers of attendees walked on the right side of the walkways between stands, or to scan badges. Asking directions was overwhelmingly fruitless, as crowd control staff rarely knew where anything was. I was reminded of the issue of underemployment, a phenomenon present historically among Arab armies and civil bureaucracies, where huge numbers of people do only basic work or none at all, and lack training and the initiative that can result from it.

Meanwhile, if consulting the app map was needed, the Wi-Fi failed at just the wrong time. I missed numerous sessions because navigating the event was so difficult. We'll leave aside the queues for male toilets that stretched 25 people long into the main halls.

What's all this got to do with achieving digital transformation goals? Well, as a UK-based journalist, I probably fit the description of skilled foreign worker, perhaps not of a dissimilar level to the kind of tech staff Saudi-based employers may want to attract. Talent retention involves making sure those people's lives can run smoothly. Sure, you can overcome the daily hassles of life in Saudi by throwing money at them, providing drivers and fixers, and so on, but that's not removing an obstacle, it's working around it.

Meanwhile, many of the mundane issues of daily life I got a taste of are the result of the stratification and inequality in Saudi society, which has documented challenges in the number and condition of foreign workers, as well as in overcoming historic deficits in its education system. Again, sure, there are plenty of well-educated Saudis who speak flawless English, but that route is not open to all members of society, and that's a restriction on the talent pool that developed countries don't face, or face to a far lesser extent. That's compounded by the domination of the public sector, in which influence and family mean progression is often a case of who you know or are related to.

Having said all of that, all the Saudis I met were warm, generous and hospitable to a level that's totally disarming to a northern European. And like people everywhere, given the right chance, I'm sure they will rise to the challenges in front of them. The question is, how painlessly can Saudi state and society navigate the change required for that to happen?
-
A path to better data engineering
www.computerweekly.com

Today's data landscape presents unprecedented challenges for organisations, due to the need for businesses to process thousands of documents in numerous data formats. These, as Bogdan Raduta, head of research at FlowX.ai, points out, can range from PDFs and spreadsheets to images and multimedia, all of which need to be brought together and processed into meaningful information.

Each data source has its own data model and requirements and, unless they can be brought together in a meaningful way, organisations end up dealing with data silos. This can mean users are forced to move between one application and another, cutting and pasting information from different systems to get the insights needed to drive informed decision-making.

However, traditional data engineering approaches struggle with the complexity of pulling in data in different formats. "While conventional ETL [extract, transform and load] data pipelines excel at processing structured data, they falter when confronting the ambiguity and variability of real-world information," says Raduta. What this means is that rule-based systems become brittle and expensive to maintain as the variety of data sources grows. In his experience, even modern integration platforms designed for application programming interface (API)-driven workflows struggle with the semantic understanding required to process natural language content effectively.

With all of the hype surrounding artificial intelligence (AI) and data, the tech industry really should be able to handle this level of data heterogeneity. But Jesse Anderson, managing director of Big Data Institute, argues that there is a lack of understanding of the job roles and skills needed for data science.

One misconception, according to Anderson, is that data scientists have traditionally been mistaken for people who create models and do all of the engineering work required. But he says: "If you ever want to hear how something data-related can't be done, just go to the 'no' team for data warehousing, and you'll be told: no, it can't be done." This perception doesn't bode well for the industry, he says, because the data projects don't go anywhere.

Anderson believes part of the confusion comes from two quite different definitions of the data engineering role. One definition describes a structured query language (SQL)-focused person. This, he says, is someone who can pull information from different data sources by writing queries using SQL. The other is a software engineer with specialised knowledge in creating data systems. Such individuals, says Anderson, can write code as well as SQL queries. More importantly, they can create complex systems for data, where a SQL-focused person is totally reliant on less complex systems, often low-code or no-code tools.

The ability to write code is a key part of a data engineer who is a software engineer, he says. As complicated requirements come from the business and system design, Anderson says these data engineers have the skills needed to create the complex systems that meet them. However, if it were easy to create the right data engineering team in the first place, everyone would have done it. "Some profound organisational and technical changes are necessary," says Anderson.
"You'll have to convince your C-level to fund the team, convince HR that you'll have to pay them well, and convince the business that working with a competent data engineering team can solve their data problems."

In his experience, getting on the right path for data engineering takes a concerted effort, which means it does not evolve organically as teams take on different projects.

Recalling a recent problem with data access, Justin Pront, senior director of product at TetraScience, says: "When a major pharmaceutical company recently tried to use AI to analyse a year of bioprocessing data, they hit a wall familiar to every data engineer: their data was technically accessible but practically unusable."

Pront says the company's instrument readings sat in proprietary formats, while critical metadata resided in disconnected systems. What this meant, he says, is that simple questions, such as enquiring about the conditions for a particular experiment, required manual detective work across multiple databases.

"This scenario highlights a truth I've observed repeatedly: scientific data represents the ultimate stress test for enterprise data architectures. While most organisations grapple with data silos, scientific data pushes these challenges to their absolute limits," he says. For instance, scientific data analysis relies on multi-dimensional numerical sets, which Pront says come from a dizzying array of sensitive instruments, unstructured notes written by bench scientists, inconsistent key-value pairs and workflows so complex that the shortest ones total 40 steps.

For Pront, there are three key principles from scientific data engineering that any organisation looking to improve data engineering needs to have a grip on. These are the shift from file-centric to data-centric architectures, the importance of preserving context from source through transformation, and the need for unified data access patterns that serve immediate and future analysis needs. According to Pront, the challenges faced by data engineers in life sciences offer valuable lessons that could benefit any data-intensive enterprise. "Preserving context, ensuring data integrity and enabling diverse analytical workflows apply far beyond scientific domains and use cases," he says.

Discussing the shift to a data-centric architecture, he adds: "Like many business users, scientists traditionally view files as their primary data container. However, files segment information into limited-access silos and strip away crucial context. While this works for the individual scientist analysing their assay results to get data into their electronic lab notebook (ELN) or laboratory information management system (LIMS), it makes any aggregate or exploratory analysis, or AI and ML [machine learning] engineering, time- and labour-intensive."

Pront believes modern data engineering should focus on the information, preserving the relationships and metadata that make data valuable. For Pront, this means using platforms that capture and maintain data lineage, quality metrics and usage context. In terms of data integrity, he says: "Even minor data alterations in scientific work, such as omitting a trailing zero in a decimal reading, can lead to misinterpretation or invalid conclusions." A minimal sketch of this principle in code follows.
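As one hedged illustration of preserving original values (the field names and types are invented for this sketch, not drawn from any TetraScience product): keep the raw reading exactly as acquired, and derive typed views from it without ever mutating the original record.

# Immutable acquisition sketch: the raw reading is stored verbatim as a
# string, so "1.20" can never silently become "1.2" in the record itself.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)          # frozen: the record cannot be mutated
class RawReading:
    instrument_id: str
    raw_value: str               # exactly as the instrument emitted it
    acquired_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def as_float(reading: RawReading) -> float:
    # A derived view for analysis; the original record is untouched,
    # so the same raw data can be re-processed repeatably later.
    return float(reading.raw_value)

r = RawReading("bioreactor-7", "1.20")
print(r.raw_value, as_float(r))  # '1.20' is preserved; 1.2 is derived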
This integrity requirement drives the need for immutable data acquisition and repeatable processing pipelines that preserve original values while enabling different data views. In regulated industries such as healthcare, pharmaceuticals and financial services, data integrity from acquisition, at a file or source system, through data transformation and analysis is non-negotiable.

Read more about data engineering
Forrester: why digitisation needs strong data engineering skills: How do enterprises become adaptive? If you can't measure it responsively, you can't manage it as an adaptive enterprise.
The install and rise of the citizen data engineer: Businesses are increasingly relying on these often self-trained individuals to alleviate some of the workload of their data engineering teams.

Looking at data access for scientists, Pront says there is a tension between immediate accessibility and future utility. This is clearly a situation many organisations face. Scientists want, and need, seamless access to data in their preferred analysis tools, so they end up with generalised desktop-based tooling such as spreadsheets or localised visualisation software. "That's how we end up with more silos," he says.

However, as Pront notes, they also use cloud-based datasets colocated with their analysis tools to ensure the same quick analysis, while the entire enterprise benefits from having the data prepped and ready for advanced applications, AI training and, where needed, regulatory submissions. He says data lakehouses built on open storage formats such as Delta and Iceberg have emerged in response to these needs, offering unified governance and flexible access patterns.

Returning to the challenge of making sense of all the different types of data an organisation needs to process, as Raduta from FlowX.ai has noted, ETL falls far short of what businesses now need. One promising area of AI the tech sector has developed is large language models (LLMs), and Raduta says LLMs offer a fundamentally different approach to data engineering. Rather than relying on the deterministic transformation rules inherent in ETL tools, he says: "LLMs can understand context and extract meaning from unstructured content, effectively turning any document into a queryable data source."

For Raduta, this means LLMs offer an entirely new architecture for data processing. At its foundation lies an intelligent ingestion layer that can handle diverse input sources. But unlike traditional ETL systems, Raduta says the intelligent ingestion layer not only extracts information from data sources, it has the ability to understand what all the different data sources it ingests are actually saying.

There is unlikely to be a single approach to data engineering. TetraScience's Pront urges IT leaders to consider data engineering as a practice that evolves over time. And as Big Data Institute's Anderson points out, evolving data engineering combines programming skills with traditional data science skills, which means IT leaders will need to convince the board and their HR people that attracting the right data engineering skills will mean paying a premium for staff.
-
Kenyan AI workers form Data Labelers Association
www.computerweekly.com

A group of Kenyan data workers whose labour provides the backbone of modern artificial intelligence systems has set up the Data Labelers Association to improve their working conditions and raise awareness about the challenges they face.
By Sebastian Klovig Skelton, Data & ethics editor
Published: 14 Feb 2025 17:59

Artificial intelligence (AI) workers in Kenya have launched the Data Labelers Association (DLA) to fight for fair pay, mental health support and better overall working conditions. Employed to train and maintain the AI systems of major technology companies, the data labellers and annotators say they formed the DLA to challenge the systemic injustices they face in the workplace, with 339 members joining the organisation in its first week.

While the popular perception of AI revolves around the idea of an autodidactic machine that can act and learn with complete autonomy, the reality is that the technology requires a significant amount of human labour to complete even the most basic functions. Otherwise known as ghost, micro or click work, this labour is used to train and assure AI algorithms by disseminating the discrete tasks that make up the AI development pipeline to a globally distributed pool of workers.

Despite Kenya becoming a major hub for AI-related labour, the DLA said data workers are massively underpaid, often earning just cents for tasks that take hours to complete, and still face frequent pay disputes over withheld wages that are never resolved. Screenshots shared by the DLA show that, in the worst instances, data workers have been paid nothing for around 20 hours of work.

"The workers power all these technological advancements, but they're paid peanuts and not even recognised," said DLA president Joan Kinyua, adding that while the labellers work on everything from self-driving cars to robot vacuum cleaners, many of the products their largely hidden labour powers are not even available in Kenya.

Given the job specs of a lot of data work, which requires workers to be graduates with high-speed internet connections and quality machines, DLA vice-president Ephantus Kanyugi said workers are being forced to make big upfront investments in their education and equipment, only to be paid a few cents per task.
He added that employers in the sector are disincentivised to pay more, or even to follow through on paying people for work they've done, because the large surplus labour pool means that when people inevitably get frustrated and leave, there is already someone in the pipeline to replace them.

DLA secretary Michael Geoffrey Abuyabo Asia added that weak labour laws in Kenya are being deliberately exploited by tech companies looking to cheaply outsource their data annotation work. "A contract is supposed to be agreed within the confines of the law, but they know the law is not there, so it becomes a loophole they're utilising," he said.

DLA members added that the lack of formal employment contracts with clear and consistent terms throughout the sector also leads to a lack of longer-term job security, as work can change unpredictably when, for example, jobs are randomly taken offline, and allows sudden account deactivations or dismissals to take place without warning or recourse. On the contracts, Kinyua added there is no consistency: while some are indecipherable due to the legal jargon, others will cover just a few days, and in some instances there is no contract at all.

The lack of security is heightened by the fact the workers do not have access to healthcare, pensions or labour unions. The workers said all of this combines to create highly variable workloads and income, which makes it difficult for them to plan for the future.

On top of the day-to-day precarity faced by data workers, the DLA said many also have to deal with content moderation trauma, as a result of having to consistently interact with disturbing and graphic images, as well as retribution from companies when they raise issues about their working conditions. "Any time we raise our voice, especially the taskers who are on the lowest level, we are dismissed automatically," said one DLA member, who added that contracts were no help in these instances because they would typically only specify a start and end date without any other information, and that she herself was dismissed for speaking up on behalf of others.

To help alleviate these issues, the DLA will focus on getting workers mental health support, giving them legal assistance to deal with pay or employment disputes, providing them with professional development opportunities, and running advocacy campaigns to highlight common problems faced by data labellers. The DLA will also seek to put in place collective bargaining agreements with the data companies that sit between workers and the large tech firms whose models and algorithms they are ultimately training.

As part of its efforts to push for better working conditions in the sector, the association is already working with the African Content Moderators Union and Turkopticon (which largely works with data labellers on Amazon's Mechanical Turk platform), as well as the Distributed AI Research (DAIR) Institute. The organisation added it is already in touch with Kenyan politicians and is communicating with the Ministry of ICT to help lawmakers better understand the nature of their work and how conditions for platform workers can be improved.

Read more about data workers
Amazon Mechanical Turk workers suspended without explanation: A likely glitch in Amazon Payments resulted in hundreds of Mechanical Turk workers being suspended from the platform without proper explanation or pay.
US lawmakers write to AI firms
about gruelling work conditions: Lawmakers wrote to nine tech companies, including Amazon, Google and Microsoft, about the working conditions of those they employ to train and maintain their artificial intelligence systems.
AI disempowers logistics workers while intensifying their work: Conversations on the algorithmic management of work largely revolve around unproven claims about productivity gains or job losses; less attention is paid to how AI and automation negatively affect low-paid workers.
-
Gartner: CISOs struggling to balance security, business objectives
www.computerweekly.com

Around the world, security leaders say they are struggling to balance the need to appropriately secure their data with the need to maximise efficient use of that data to hit their business objectives, according to a study by analysts at Gartner, which found that only 14% of cyber leaders were managing both.

Gartner's poll of 318 senior security leaders, conducted in the summer of 2024, found 35% were confident they could secure data assets and 21% were confident they could use data to achieve their business goals. The ability to do both was beyond six in seven.

Nathan Parks, senior specialist for research at Gartner, said this was clearly something that needed to be addressed. "With only 14% of SRM leaders able to secure their data while supporting business goals, many organisations can face increased vulnerability to cyber threats, regulatory penalties and operational inefficiencies, ultimately risking their competitive edge and stakeholder trust," he said.

In light of its findings, Gartner has developed a five-point checklist for security leaders (security and risk management, or SRM, leaders in its parlance) to better align business needs with stringent data security requirements, and to achieve both effective data protection and business enablement:
CISOs should try to ease governance-related friction for the business by co-creating data security policies and standards with input and feedback from end users across the business;
They should try to align data security-related governance efforts by partnering better with the business's other internal functions to identify areas of overlap and potential synergy;
They should clearly identify and delineate any non-negotiable cyber security requirements that the business must absolutely meet when handling previously unknown or unexpected data security risks;
On generative artificial intelligence (GenAI) and decision-making related to it, they should take care to define appropriate, high-level guardrails that enable stakeholders to experiment within set parameters;
And finally, they should collaborate with the business's data and analytics teams to secure board-level buy-in on data security levels.

Gartner's final point, on building more effective working relationships with senior leadership whose core work is not invested in cyber security, is a perennial thorn in the side of many security leaders, who frequently lament diverging attitudes. This was highlighted in a recent study published by Cisco-owned security analytics and observability specialist Splunk, which polled chief information security officers (CISOs) in 10 countries, including the UK and US.
Splunk found that CISOs were increasingly participating in boardrooms, but highlighted big gaps between their priorities and those of other board members. For example, said Splunk, when it came to innovating with emerging tech such as GenAI, 52% of CISOs spoke of this as a priority, compared with 33% of other board members; on upskilling or reskilling cyber employees, 51% of CISOs thought this was a priority, compared with 27% of board members; and on contributing to revenue growth initiatives, 36% of CISOs said they prioritised this, compared with 24% of board members.

Though the full report is more nuanced than these statistics might suggest, the study also showed that only 29% of CISOs thought they were getting the budget they needed to work effectively, while 41% of board members felt security budgets were absolutely fine.

Read more about CISO attitudes and trends
The healthcare CISO role involves more fiduciary responsibility and cyber security accountability than in years past.
Elastic CISO Mandy Andress argues that security leaders should be seeking to build closer ties with their organisational legal teams.
Those who get the role of a CISO may have overcome some professional hurdles, but are they ready to face what comes as part of the job? And who do they ask for advice? We look at the mentoring dilemma.
-
Government launches consultation on plan to streamline business through e-invoicing
www.computerweekly.com

Government announces 12-week consultation on electronic invoicing as part of its plan for change.
By Karl Flinders, Chief reporter and senior editor EMEA
Published: 14 Feb 2025 12:00

The government has asked businesses for comment on a UK approach to electronic invoicing (e-invoicing) as part of its plan to grow the economy. A 12-week consultation on e-invoicing in business asks firms and other stakeholders for feedback on topics including different models of e-invoicing, whether a mandated or voluntary approach is best, and whether e-invoicing should be complemented by real-time digital reporting.

This is part of the government's plan for change, which has kickstarting economic growth as one of its five missions. Both HMRC and the Department for Business and Trade (DBT) are behind the plans, which could improve tax collection and business efficiency.

According to the government announcement, the consultation will gather views on standardising e-invoicing and how to increase its adoption across UK businesses and the public sector. It will also look at different e-invoicing models, with evidence sought from businesses whether they currently use e-invoicing or not.

The government said the use of e-invoicing technology could help businesses get their tax right first time, reduce invoicing and data errors, improve the accuracy of VAT returns, help close the tax gap, and save time and money. It added that e-invoicing can speed up business-to-business payments, which improves cash flow and reduces paperwork.

DBT minister Gareth Thomas said small businesses are at the heart of the economy and vital to the country's growth. "The potential of digitising taxes, speeding up payments and streamlining administrative tasks will provide real benefits to the economy, supporting smaller firms and boosting growth," he said. "This is why we want to make sure e-invoicing works for SMEs [small and medium-sized enterprises], because cash flow can make all the difference between staying afloat or going under."

The government cited success stories where e-invoicing has sped up payments for business. It said an unnamed NHS trust has used e-invoicing to cut the time it takes to get invoices ready for processing from 10 days to 24 hours, with queries from suppliers reduced by 15%. It also said that in Australia, government agencies are settling e-invoices in five days, rather than the 20 days taken with traditional invoices. And it highlighted research from UK accounting software firm Sage, which found that e-invoicing streamlines routine tasks, including data entry and tax filing, resulting in 3% productivity gains.

The consultation, Promoting electronic invoicing across UK businesses and the public sector, is open until 7 May.

"E-invoicing simplifies processes, reduces errors and helps businesses to get paid faster," said James Murray, exchequer secretary to the Treasury.
"By cutting paperwork and freeing up valuable time and money, it will help improve firms' productivity and their ability to grow and succeed," he added. "As part of the prime minister's plan for change, we have begun our work to transform the UK's tax system into one that is focused on helping businesses and the economy to grow."

The government said about 130 countries already have, or are in the process of implementing, e-invoicing structures and standards, which cover what data invoices should include and in what format.
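For illustration only, structured invoice data of the broad kind such standards describe might look like the following sketch. The field names are invented and do not follow any specific scheme (real standards, such as the European Peppol network's formats, define far richer, standardised fields); the point is that machine-readable data, rather than a PDF, is what lets systems validate and settle invoices automatically.

# Invented, minimal illustration of structured e-invoice data.
import json
from datetime import date

invoice = {
    "invoice_number": "INV-2025-0042",
    "issue_date": date(2025, 2, 14).isoformat(),
    "supplier": {"name": "Example Supplies Ltd", "vat_number": "GB123456789"},
    "buyer": {"name": "Example Council"},
    "lines": [
        {"description": "Office chairs", "quantity": 10, "unit_price": 89.50},
    ],
    "vat_rate": 0.20,
}

# Totals are computed from line items, so arithmetic errors that creep
# into manually keyed invoices cannot occur.
net = sum(l["quantity"] * l["unit_price"] for l in invoice["lines"])
invoice["total_due"] = round(net * (1 + invoice["vat_rate"]), 2)

print(json.dumps(invoice, indent=2))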