Computer Weekly
Computer Weekly is the leading technology magazine and website for IT professionals in the UK, Europe and Asia-Pacific.
Recent updates
  • Kenyan AI workers form Data Labelers Association
    www.computerweekly.com
    News | Kenyan AI workers form Data Labelers Association
    A group of Kenyan data workers, whose labour provides the backbone of modern artificial intelligence systems, has set up the Data Labelers Association to improve their working conditions and raise awareness of the challenges they face.
    By Sebastian Klovig Skelton, Data & ethics editor | Published: 14 Feb 2025 17:59

    Artificial intelligence (AI) workers in Kenya have launched the Data Labelers Association (DLA) to fight for fair pay, mental health support and better overall working conditions.

    Employed to train and maintain the AI systems of major technology companies, the data labellers and annotators say they formed the DLA to challenge the systemic injustices they face in the workplace, with 339 members joining the organisation in its first week.

    While the popular perception of AI revolves around the idea of an autodidactic machine that can act and learn with complete autonomy, the reality is that the technology requires a significant amount of human labour to complete even the most basic functions. Otherwise known as ghost, micro or click work, this labour is used to train and assure AI algorithms by disseminating the discrete tasks that make up the AI development pipeline to a globally distributed pool of workers.

    Despite Kenya becoming a major hub for AI-related labour, the DLA said data workers are massively underpaid, often earning just cents for tasks that take hours to complete, and still face frequent disputes over withheld wages that are never resolved. Screenshots shared by the DLA show that, in the worst instances, data workers have been paid nothing for around 20 hours of work.

    "The workers power all these technological advancements, but they're paid peanuts and not even recognised," said DLA president Joan Kinyua, adding that while the labellers work on everything from self-driving cars to robot vacuum cleaners, many of the products their largely hidden labour powers are not even available in Kenya.

    Given the job specs of a lot of data work, which requires workers to be graduates and to have high-speed internet connections and quality machines, DLA vice-president Ephantus Kanyugi said workers are being forced to make big upfront investments in their education and equipment, only to be paid a few cents per task.

    He added that employers in the sector are disincentivised to pay more, or even to follow through on paying people for the work they've done, because the large surplus labour pool means that when people inevitably get frustrated and leave, there is already someone in the pipeline to replace them.

    DLA secretary Michael Geoffrey Abuyabo Asia added that weak labour laws in Kenya are being deliberately exploited by tech companies looking to cheaply outsource their data annotation work. "A contract is supposed to be agreed within the confines of the law, but they know the law is not there, so it becomes a loophole they're utilising," he said.

    DLA members added that the lack of formal employment contracts with clear and consistent terms throughout the sector also leads to a lack of longer-term job security, as work can change unpredictably when, for example, jobs are randomly taken offline, and allows sudden account deactivations or dismissals to take place without warning or recourse.

    On the contracts, Kinyua added that there is no consistency: some are indecipherable due to the legal jargon, others cover just a few days, and in some instances there is no contract at all. The lack of security is heightened by the fact that the workers do not have access to healthcare, pensions or labour unions.

    The workers said all of this combines to create highly variable workloads and income, which makes it difficult for them to plan for the future. On top of the day-to-day precarity faced by data workers, the DLA said many also have to deal with content moderation trauma, as a result of having to consistently interact with disturbing and graphic images, as well as retribution from companies when they raise issues about their working conditions.

    "Any time we raise our voice, especially the taskers who are on the lowest level, we are dismissed automatically," said one DLA member, who added that contracts were no help in these instances because they would typically only specify a start and end date without any other information, and that she herself was dismissed for speaking up on behalf of others.

    To help alleviate these issues, the DLA will focus on getting workers mental health support, giving them legal assistance to deal with pay or employment disputes, providing them with professional development opportunities, and running advocacy campaigns to highlight common problems faced by data labellers. The DLA will also seek to put in place collective bargaining agreements with the data companies that sit between workers and the large tech firms whose models and algorithms they are ultimately training.

    As part of its efforts to push for better working conditions in the sector, the association is already working with the African Content Moderators Union and Turkopticon, which largely works with data labellers on Amazon's Mechanical Turk platform, as well as the Distributed AI Research (DAIR) Institute. The organisation added that it is already in touch with Kenyan politicians and is communicating with the Ministry of ICT to help lawmakers better understand the nature of this work and how conditions for platform workers can be improved.

    Read more about data workers:
    • Amazon Mechanical Turk workers suspended without explanation: A likely glitch in Amazon Payments resulted in hundreds of Mechanical Turk workers being suspended from the platform without proper explanation or pay.
    • US lawmakers write to AI firms about gruelling work conditions: Lawmakers wrote to nine tech companies, including Amazon, Google and Microsoft, about the working conditions of those they employ to train and maintain their artificial intelligence systems.
    • AI disempowers logistics workers while intensifying their work: Conversations on the algorithmic management of work largely revolve around unproven claims about productivity gains or job losses; less attention is paid to how AI and automation negatively affect low-paid workers.
  • Gartner: CISOs struggling to balance security, business objectives
    www.computerweekly.com
    Around the world, security leaders say they are struggling to balance the need to appropriately secure their data against the need to use that data efficiently to hit their business objectives, according to a study by analysts at Gartner, who found that only 14% of cyber leaders were managing to do both.

    The analysts' poll of 318 senior security leaders, conducted in the summer of 2024, found that 35% were confident they could secure data assets, and 21% were confident they could use data to achieve their business goals. The ability to do both was beyond six in seven.

    Nathan Parks, senior specialist for research at Gartner, said this was clearly something that needed to be addressed. "With only 14% of SRM leaders able to secure their data while supporting business goals, many organisations can face increased vulnerability to cyber threats, regulatory penalties and operational inefficiencies, ultimately risking their competitive edge and stakeholder trust," he said.

    In light of its findings, Gartner has developed a five-point checklist for security leaders (security and risk management leaders, in its parlance) to better align their business needs with stringent data security requirements, and to achieve both effective data protection and business enablement goals:

    • CISOs should try to ease governance-related friction for the business by co-creating data security policies and standards with input and feedback from end users across the business;
    • They should align data security governance efforts by partnering with the business's other internal functions to identify areas of overlap and potential synergy;
    • They should clearly identify and delineate any non-negotiable cyber security requirements the business must absolutely meet when handling previously unknown or unexpected data security risks;
    • On generative artificial intelligence (GenAI) and related decision-making, they should take care to define appropriate, high-level guardrails that enable stakeholders to experiment within set parameters;
    • Finally, they should collaborate with the business's data and analytics teams to secure board-level buy-in on data security.

    Gartner's final point, on building more effective working relationships with senior leaders whose core work is not invested in cyber security, is a perennial thorn in the side of many security leaders, who frequently lament diverging attitudes.

    This was highlighted in a recent study published by Cisco-owned security analytics and observability specialist Splunk, which polled chief information security officers (CISOs) in 10 countries, including the UK and US. Splunk found that CISOs were increasingly participating in boardrooms, but highlighted big gaps between their priorities and those of other board members.

    For example, said Splunk, when it came to innovating with emerging tech such as GenAI, 52% of CISOs spoke of this as a priority, compared with 33% of other board members; on upskilling or reskilling cyber employees, 51% of CISOs thought this was a priority, compared with 27% of board members; and on contributing to revenue growth initiatives, 36% of CISOs said they prioritised this, compared with 24% of board members.

    Though the full report is more nuanced than these statistics might suggest, the study also showed that only 29% of CISOs thought they were getting the budget they needed to work effectively, while 41% of board members felt security budgets were absolutely fine.

    Read more about CISO attitudes and trends:
    • The healthcare CISO role involves more fiduciary responsibility and cyber security accountability than in years past.
    • Elastic CISO Mandy Andress argues that security leaders should be seeking to build closer ties with their organisational legal teams.
    • Those who get the role of CISO may have overcome some professional hurdles, but are they ready to face what comes as part of the job? And who do they ask for advice? We look at the mentoring dilemma.
  • Government launches consultation on plan to streamline business through e-invoicing
    www.computerweekly.com
    News | Government launches consultation on plan to streamline business through e-invoicing
    The government announces a 12-week consultation on electronic invoicing as part of its plan for change.
    By Karl Flinders, Chief reporter and senior editor EMEA | Published: 14 Feb 2025 12:00

    The government has asked businesses for comment on a UK approach to electronic invoicing (e-invoicing), which is part of its plan to grow the economy.

    A 12-week consultation on e-invoicing in business asked firms and other stakeholders for feedback on topics including different models of e-invoicing, whether a mandated or voluntary approach is best, and whether e-invoicing should be complemented by real-time digital reporting.

    This is part of the government's plan for change, which has kickstarting economic growth as one of its five missions. Both HMRC and the Department for Business and Trade (DBT) are behind the plans, which could improve tax collection and business efficiency.

    According to the government announcement, the consultation will gather views on standardising e-invoicing and how to increase its adoption across UK businesses and the public sector. It will also look at different e-invoicing models, with evidence sought from businesses whether they currently use e-invoicing or not.

    The government said the use of e-invoicing technology could help businesses get their tax right first time, reduce invoicing and data errors, improve the accuracy of VAT returns, help close the tax gap, and save time and money. It added that e-invoicing can speed up business-to-business payments, which improves cash flow and reduces paperwork.

    DBT minister Gareth Thomas said small businesses are at the heart of the economy and vital to the country's growth. "The potential of digitising taxes, speeding up payments and streamlining administrative tasks will provide real benefits to the economy, supporting smaller firms and boosting growth," he said. "This is why we want to make sure e-invoicing works for SMEs [small and medium-sized enterprises], because cash flow can make all the difference between staying afloat or going under."

    The government cited success stories where e-invoicing has speeded up payments for business. It said an unnamed NHS trust has used e-invoicing to reduce the time it takes to get invoices ready for processing from 10 days to 24 hours, with queries from suppliers reduced by 15%. It also said that in Australia, government agencies are settling e-invoices in five days, rather than the 20 days taken with traditional invoices.

    It also highlighted research from UK accounting software firm Sage, which found that e-invoicing streamlines routine tasks including data entry and tax filing, resulting in 3% productivity gains.

    The consultation, Promoting electronic invoicing across UK businesses and the public sector, is open until 7 May.

    "E-invoicing simplifies processes, reduces errors and helps businesses to get paid faster," said James Murray, exchequer secretary to the Treasury. "By cutting paperwork and freeing up valuable time and money, it will help improve firms' productivity and their ability to grow and succeed. As part of the prime minister's plan for change, we have begun our work to transform the UK's tax system into one that is focused on helping businesses and the economy to grow."

    The government said about 130 countries already have, or are in the process of implementing, e-invoicing structures and standards, which cover what data invoices should include and its format.
  • Government renames AI Safety Institute and teams up with Anthropic
    www.computerweekly.com
    News | Government renames AI Safety Institute and teams up with Anthropic
    Addressing the Munich Security Conference, UK government technology secretary Peter Kyle announces a change to the name of the AI Safety Institute and a tie-up with AI company Anthropic.
    By Brian McKenna, Enterprise Applications Editor | Published: 14 Feb 2025 9:52

    Peter Kyle, secretary of state for science, innovation and technology, will use the Munich Security Conference as a platform to rename the UK's AI Safety Institute as the AI Security Institute.

    According to a statement from the Department for Science, Innovation and Technology, the new name reflects "[the AI Security Institute's] focus on serious AI risks with security implications, such as how the technology can be used to develop chemical and biological weapons, how it can be used to carry out cyber attacks, and enable crimes such as fraud and child sexual abuse".

    The AI Security Institute will not, the government said, focus on bias or freedom of speech, but on advancing understanding of the most serious risks posed by AI technology. The department said safeguarding Britain's national security and protecting citizens from crime will become founding principles of the UK's approach to the responsible development of artificial intelligence.

    Kyle will set out his vision for a revitalised AI Security Institute in Munich, just days after the conclusion of the AI Action Summit in Paris, where the UK and the US refused to sign an agreement on inclusive and sustainable artificial intelligence (AI).

    He will also, according to the statement, take the wraps off a new agreement struck between the UK and AI company Anthropic. According to the statement: "This partnership is the work of the UK's new Sovereign AI unit, and will see both sides working closely together to realise the technology's opportunities, with a continued focus on the responsible development and deployment of AI systems."

    The UK will put in place further agreements with leading AI companies as a key pillar of the government's Plan for Change.

    Kyle said: "The changes I'm announcing today represent the logical next step in how we approach responsible AI development, helping us to unleash AI and grow the economy as part of our Plan for Change. The work of the AI Security Institute won't change, but this renewed focus will ensure our citizens, and those of our allies, are protected from those who would look to use AI against our institutions, democratic values and way of life.

    "The main job of any government is ensuring its citizens are safe and protected, and I'm confident the expertise our AI Security Institute will be able to bring to bear will ensure the UK is in a stronger position than ever to tackle the threat of those who would look to use this technology against us."

    The AI Security Institute will work with the Defence Science and Technology Laboratory, the Ministry of Defence's science and technology organisation, to assess the risks posed by what the department called frontier AI. It will also work with the Laboratory for AI Security Research (LASR) and the national security community, including building on the expertise of the National Cyber Security Centre.

    The AI Security Institute will launch a new criminal misuse team, which will work jointly with the Home Office to conduct research on a range of crime and security issues. One area of focus will be tackling the use of AI to make child sexual abuse images, with the new team exploring methods to help prevent abusers from harnessing AI to commit crime. This will support previously announced work that makes it illegal to own AI tools which have been optimised to make images of child sexual abuse.

    The chair of the AI Security Institute, Ian Hogarth, said: "The institute's focus from the start has been on security, and we've built a team of scientists focused on evaluating serious risks to the public. Our new criminal misuse team and deepening partnership with the national security community mark the next stage of tackling those risks."

    Dario Amodei, CEO and co-founder of Anthropic, added: "AI has the potential to transform how governments serve their citizens. We look forward to exploring how Anthropic's AI assistant Claude could help UK government agencies enhance public services, with the goal of discovering new ways to make vital information and services more efficient and accessible to UK residents. We will continue to work closely with the UK AI Security Institute to research and evaluate AI capabilities in order to ensure secure deployment."

    Read more about the UK government and AI security:
    • UK government commits an initial pot of £4m to fund research into AI risks, which will increase to £8.5m as the scheme progresses.
    • UK government launches AI assurance platform for enterprises.
    • UK government launches first national artificial intelligence strategy.
  • Top cryptography experts join calls for UK to drop plans to snoop on Apple's encrypted data
    www.computerweekly.com
    Over a hundred cyber security experts, companies and civil society groups have signed a letter calling for the home secretary, Yvette Cooper, to drop demands for Apple to create a back door that would allow the UK government access to encrypted communications and data stored on Apple's iCloud service.

    The letter follows disclosures this week that the Home Office has issued a secret order to Apple, requiring the company to give the UK access to all encrypted material stored by any Apple user anywhere in the world on its cloud servers.

    The Home Office's intervention has raised alarm bells among members of Congress in the US, who have raised concerns that the move will weaken the security and privacy of ordinary American citizens, as well as government officials and government agencies that use Apple computers and iPhones for official business.

    Apple introduced its Advanced Data Protection for iCloud (ADP) as an optional security feature in December 2022. It allows users to extend Apple's end-to-end encryption from messaging to personal data, including photos, notes and iCloud backups, offering, according to Apple, invaluable protection for users' private information from threats to data security.

    Risk to UK data sharing with EU
    Robin Wilton, senior director of the Internet Society, one of the signatories to the letter, said the Home Office's plans could threaten Britain's data protection adequacy status with the EU, potentially disrupting the exchange of data between companies in the UK and the EU. "The UK government has insisted not only on accessing Apple's data, but insisted on access to it even after it reaches the United States. That raises questions whether the UK can retain its adequacy under GDPR," he told Computer Weekly.

    In an open letter prepared by the Global Encryption Coalition, a network of civil society groups, businesses and trade associations, cyber security experts warn that the UK's move to create a back door into people's personal data jeopardises the security and privacy of millions of people, undermines the UK tech sector and sets a dangerous precedent for global cyber security.

    The letter has been signed by prominent cyber security experts, including cryptographer Phil Zimmermann, inventor of the email encryption software PGP; Ronald Rivest, one of the inventors of the RSA encryption algorithm; cyber security author Bruce Schneier; and David R. Jefferson, former supercomputer scientist at the US Lawrence Livermore National Laboratory. It remains open for further signatures until 20 February.

    UK tech industry will suffer reputational damage
    The letter warns that the UK government's move to secretly undermine the security of Apple's encrypted storage creates the risk that Apple and other technology companies may pull their services out of the UK, just as the government is stressing the role of tech companies in boosting economic growth.

    "For some global companies, they may choose to leave the UK market rather than face the global reputational risks that breaching the security of their products would entail. UK companies will also suffer reputational damage, as foreign investors and consumers will consider whether their products are riddled with secret UK government-mandated security vulnerabilities," it warns.

    Leaks to the Washington Post revealed that the Home Office issued a Technical Capability Notice (TCN), under Section 253 of the Investigatory Powers Act 2016, requiring Apple to provide access to encrypted data stored by Apple users anywhere in the world on its iCloud service. If the move succeeds, it would mean the world's second-largest provider of mobile devices would be built on top of a system security flaw, putting all of its users' security and privacy at risk, not just in the UK but globally.

    Risk to UK national security
    The letter warns that government moves against encryption threaten to undermine the UK's national security. "For national security professionals and government employees, access to end-to-end encrypted services allows them to safeguard their personal life. Ensuring the security and privacy of government officials is vital for helping prevent extortion or coercion attempts, which could lead to greater national security damage," it says.

    According to the letter, the consensus among cyber security experts is that there is no way to provide government access to end-to-end encrypted data without breaking end-to-end encryption. It cites Ciaran Martin, former director and founder of the UK government's National Cyber Security Centre, part of GCHQ, who wrote in a 2021 paper that "E2EE [end-to-end encryption] must expand, legally unfettered, for the betterment of our digital homeland".

    Undermining the confidentiality of cloud services would have a harmful impact on the people at greatest risk, including families, survivors of domestic violence and LGBTQ+ individuals, the letter argues. "For these and other groups, the confidentiality guaranteed by end-to-end encryption can be critical in preventing harassment and physical violence," it says.

    International human rights bodies have recognised the importance of end-to-end encryption in enabling people to communicate and express their views safely and securely, the signatories argue. The European Court of Human Rights has confirmed the importance of anonymity in promoting the free flow of ideas and information, including protecting people from reprisals for their views. In a landmark case in February 2024, the ECHR found that an order issued by Russia to the messaging app Telegram, requiring it to disclose technical information including encryption keys, breached human rights law.

    "To ensure the national and economic security of the United Kingdom, the Home Office must end its technical capability notice forcing Apple to break its end-to-end encryption," the letter states.

    Human rights groups that have signed the letter include Article 19, Access Now, Digital Rights Ireland, Privacy International and Big Brother Watch. It has also been signed by prominent British academics, including Richard Clayton of the University of Cambridge, visiting professor Ian Brown, and Peter Sommer of Birmingham City University.

    Read more about the UK's secret order against Apple:
    • A hitherto unknown British organisation, which even the government may have forgotten about, is about to be drawn into a global technical and financial battle, facing threats from Apple to pull out of the UK.
    • Tech companies brace after UK demands back door access to Apple cloud.
  • UK accused of political foreign cyberattack on US after serving secret snooping order on Apple
    www.computerweekly.com
    News | UK accused of political foreign cyberattack on US after serving secret snooping order on Apple
    US administration asked to kick the UK out of the 65-year-old UK-USA Five Eyes intelligence sharing agreement after a secret order to access the encrypted data of Apple users.
    By Duncan Campbell, 2QQ Ltd, Sussex University | Published: 13 Feb 2025 17:54

    An unprecedented letter from the US Congress, released today, accuses the UK of a foreign cyberattack waged through political means. The claim refers to a secret Home Office demand last month, reported by Computer Weekly, that Apple break the security protecting its Advanced Data Protection cloud security system to let British spies into anyone's secure files.

    In a letter to the recently appointed US director of national intelligence (DNI), Tulsi Gabbard, Senator Ron Wyden of Oregon and Representative Andy Biggs of Arizona bluntly ask the administration to kick the UK out of the 65-year-old UK-USA signals intelligence sharing agreement, commonly known as Five Eyes, if it does not now withdraw the demand to Apple. "If the U.K. does not immediately reverse this dangerous effort, we urge you to reevaluate U.S.-U.K. cybersecurity arrangements and programs as well as U.S. intelligence sharing with the U.K.," the new DNI is advised.

    Politically, on other issues, the signatories are on opposite sides of US politics. Wyden is a liberal Democrat who has campaigned on healthcare and the environment; Biggs is a loud Trump supporter and a noted organiser of the MAGA squad. Wyden serves on the Senate Intelligence and Finance Committees; Biggs chairs the House Judiciary Subcommittee on Crime and Federal Government Surveillance. Their unified complaint against British tactics and conduct is potentially a unique event in the turbulent political period since Donald Trump's accession.

    Damage to information sharing with US
    The letter was also copied to the incoming British ambassador, Peter Mandelson. The British Embassy, the Home Office and the DNI had not made any official comment on the letter at the time of writing.

    The representatives have asked the DNI to tell Congress whether the administration accepts British claims that it can impose gag orders on demands to American companies to provide user data, or to make technical changes to their systems and software. They also demand to know whether the Home Office warned the US government about the January notice before it was revealed in the press.

    The British move against Apple also threatens to prejudice recent valuable gains in co-operative information sharing. It took four years for the US and Britain to agree a Data Access Agreement in 2022 that does allow Apple to provide data and files from UK iCloud accounts, provided that the user has not turned on advanced security. This arrangement was authorised under the CLOUD Act (Clarifying Lawful Overseas Use of Data) and was, according to the Department of Justice, "the first agreement of its kind, allowing each country's investigators to gain better access to vital data to combat serious crime in a way that is consistent with privacy and civil liberties standards".

    The data flows both ways, allowing US agents automatic access to British-controlled data. "Under the Data Access Agreement, service providers in one country may respond to qualifying, lawful orders for electronic data issued by the other country, without fear of running afoul of restrictions on cross-border disclosures," the DoJ noted.

    Home Office "greedy for everything"
    According to UK academic and industry sources, the recent better level of access to some iCloud data may have caused the Home Office to get impatient and "greedy for everything", and to proceed without legally required technical caution. According to reliable industry sources, the recent notice was not first scrutinised by the statutory Technical Advisory Panel (TAP), which includes vetted outside cryptosecurity and computer science experts. If this is correct, then the UK judicial commissioner who authorised the notice to Apple and the home secretary may both have been misled, requiring the procedure for issuing the notice to be reviewed.

    The representatives reminded DNI Gabbard that at her confirmation hearing she stated that "backdoors lead down a dangerous path that can undermine Americans' Fourth Amendment rights and civil liberties", warning later that compulsory mechanisms to bypass encryption or privacy technologies "undermine user security, privacy, and trust" and pose "significant risks of exploitation by malicious actors".

    "We urge you to put those words into action by giving the U.K. an ultimatum," their letter concludes. "Back down from this dangerous attack on U.S. cybersecurity or face serious consequences."

    Beijing could exploit UK "backdoor"
    American cryptographers and cryptosecurity experts back the demand, and have warned that Beijing would quickly exploit the British order to allow access to encrypted data. "The U.S. should pass laws that forbid U.S. companies from installing encryption backdoors at the request of foreign countries," according to Matt Green, a leading cryptographer and professor of computer science at Johns Hopkins University. "This would put companies like Apple in a bind. But it would be a good bind!"
  • EU looks to ramp up sovereign tech as Trump trade war begins
    www.computerweekly.com
As the European Union (EU) prepares to respond to Trump's 25% tariff on aluminium and steel, a widely reported paper from Goldman Sachs has suggested that applying pressure on US technology providers is among the levers the EU could use to negotiate with the US.

Under the Digital Markets Act, the EU's competition authority has numerous ongoing investigations into US big tech, which some experts believe could be deployed as levers to negotiate a more favourable trade agreement with the US. At the same time, the EU has recognised its shortcomings as regards its heavy reliance on US tech, and has ambitions to develop sovereign AI capabilities analogous to Trump's Project Stargate initiative.

Former Google CEO Eric Schmidt told the BBC that there should be government oversight of private tech companies developing AI models, but he warned that over-regulation would stifle innovation. Schmidt also saw US export controls on semiconductors as a way to prevent adversaries getting hold of the technology to develop powerful AI systems.

Given the US's ongoing trade war with its allies, policymakers are wary of how far Trump will go, and whether the US would impose export controls to curb the UK's and Europe's ambitions to develop sovereign cloud and sovereign AI capabilities. In retaliation, would the EU impose tariffs on US tech?

At ING, global head of macro Carsten Brzeski and senior economist for Germany and global trade Inga Fechner recently co-wrote an article discussing the trade war, noting that while Europe will try to prepare for a possible trade war with the US, trade wars are not won by the trade-surplus country: it is always the surplus countries that have more to lose.
"Therefore, Europe might want to consider another route: the strengthening of the domestic economy. Think of reducing dependency on the US by increasing domestic military industries, including reducing too many technological standards of weapons systems and pooling of defence purchases, and deregulation of the tech sector, including significant investments," they said.

At the end of January, the European Commission presented the Competitiveness Compass, which sets out an industrial strategy for new tech in Europe. The Competitiveness Compass points to a Europe where future technologies, services and clean products are invented, manufactured and put on the market, while being the first continent to become climate neutral.

The Competitiveness Compass is based on findings from the Draghi report, by Mario Draghi, the former European Central Bank president. To reignite the EU innovation engine, the European Commission said: "We want to create a habitat for young innovative startups, promote industrial leadership in high-growth sectors based on deep technologies, and promote the diffusion of technologies across established companies and SMEs."

The strategy will involve so-called AI gigafactories and will apply AI initiatives to drive development and industrial adoption of AI in key sectors. There is also a dedicated EU startup and scaleup strategy to address the obstacles preventing new companies from emerging and scaling up.

The EU is proposing a 28th legal regime to simplify applicable rules, including relevant aspects of corporate law, insolvency, labour and tax law, and to reduce the costs of failure. It claims this will make it possible for innovative companies to benefit from one single set of rules wherever they invest and operate in the single market.

This week, European Commission president Ursula von der Leyen launched InvestAI, an initiative to mobilise €200bn for investment in AI, including a new European fund of €20bn for AI gigafactories.
She described the public-private partnership as akin to "CERN for AI", which would enable scientists and companies of all sizes to develop the advanced very large models needed to make Europe an AI continent.

The EU's InvestAI fund, she said, is supporting future AI gigafactories across the EU, each with around 100,000 "last-generation" AI chips. There will inevitably be questions over what exactly is meant by last-generation AI chips, but previous generations of AI acceleration hardware are cheaper, and DeepSeek has shown that it is not always necessary to run state-of-the-art technology to achieve results comparable to those of US AI firms such as OpenAI, which do have access to these chips.

Read more about tech sector challenges

Forrester: AI and cyber security drive up IT spending: Despite artificial intelligence and cyber security increasing investment, technical debt remains a significant drain on IT budgets.

Nvidia investigation signals widening of US and China chip war: The Biden administration has expanded sanctions to prevent China extending its AI capabilities; now, China is going after Nvidia.
  • Baltic skills programme to help reduce European skills gap via Africa
    www.computerweekly.com
News

Baltic skills programme to help reduce European skills gap via Africa

Lithuanian project sets out to connect professionals in Africa with European businesses in need of skills

By Karl Flinders, Chief reporter and senior editor EMEA

Published: 13 Feb 2025 12:41

IT professionals in Africa are being connected to tech businesses in the Baltic region as part of a European Commission-funded project.

Through a focus on people with skills, the Digital Explorers programme aims to address skills shortages in the Baltic tech sector and increase business and government engagement between the Baltic nations and African countries.

At the Turing College data science school in Lithuania's capital, Vilnius, the programme has already remotely trained 90 junior to mid-level data analysts from Africa. These trainees will then travel to and work within the Baltic region, particularly within its rich tech startup sector. It is hoped the project will create a model for the wider EU region to follow.

Žilvinas Švedkauskas, managing director at Lithuanian think tank OSMOS, which is behind the project, said it creates unexpected country partnerships, such as between Lithuania and Nigeria, countries that are very different.

"We built the project around people, digital explorers and their digital journeys. We create connections that set the path for more business-to-business and government-to-government type of engagement between countries," Švedkauskas told Computer Weekly.

Digital Explorers began in Lithuania and has since been expanded into all three Baltic countries to also include Estonia and Latvia.
In Africa, Nigeria and Kenya are part of the project, which also includes Armenia.

"We are piloting first and foremost, seeing how these sorts of arrangements could work on a global scale," said Švedkauskas.

He added that the training, which is initially remote, exposes the talent in Africa to the practices and modern interaction in the Baltics: "All the mentors are industry professionals, and the learning environment very much resembles the way teamwork is organised in tech companies here. We connect talent with the businesses that are looking to find new employees and explore new markets, as well as diversify their teams."

For the junior to mid-level data analysts, the programme is offering internships, with participants relocated to the Baltic countries for six months. Digital Explorers subsidises the scholarships and pays the travel costs. "Then it's up to the company to mentor the intern and to provide the working environment," said Švedkauskas.

After the six-month internship, trainees either return home and get support from Digital Explorers to help them reintegrate, or they may remain in the Baltics and get support integrating for the long term. Since 2019, 53 trainees have relocated to the Baltics.

Švedkauskas said the project faces challenges, which need to be overcome through advocacy and informed persuasion, adding: "The picture is not all rosy. Immigration regulations are becoming increasingly strict, making relocations and people-to-people exchange challenging, to say the least."

Mercy Kimalat, CEO of the Association of Startup and SME Enablers of Kenya (ASSEK) and a visiting ambassador in the Digital Explorers programme, said the startup ecosystem in the Baltics is impressively mature and stable, but it still needs to continue to grow and innovate.
This is where global partnerships and fresh talent and ideas could really benefit us all, offering a chance to have honest discussions about resiliency, she said.

Kimalat said that accomplishing the initiative's goals is not without its difficulties: "The entrepreneurial environments in both Kenya and the Baltics have faced many challenges over the past couple of years, from rising inflation to regional political perturbations, so it is inspiring to learn how different startup ecosystems continue to thrive and encourage growth."

Read more about tech hiring

Most businesses now have a CISO, but perceptions of what CISOs are supposed to do, and confusion over the value they offer, may be holding back harmonious relations.

Women make up a small proportion of the tech sector despite accounting for almost half of the UK workforce, but are efforts to attract more women into the sector a distraction from trying to keep the women the sector already has?

Job postings for several IT roles returned to pre-pandemic levels last year, dropping when compared with 2023.
  • Interview: Why Samsung put a UK startup centre stage
    www.computerweekly.com
Oxford Semantic Technologies is an example of what Matt Clifford, chair of the Advanced Research and Invention Agency (ARIA), was thinking about when he drafted the proposals for the government's 50-point AI opportunities action plan.

In 2017, three University of Oxford professors, Ian Horrocks, Boris Motik and Bernardo Cuenca Grau, formed Oxford Semantic Technologies to take to market a novel approach to data discovery, which uses knowledge representation and reasoning (KRR).

KRR is a branch of artificial intelligence (AI) that represents a logical and knowledge-based approach. Unlike machine learning, which finds patterns in vast datasets and draws statistical outputs, KRR aims to improve the accuracy of AI inference by making logical and explainable decisions based on data combined with expert knowledge.

The company's technology caught the attention of Samsung Electronics, and the company was acquired by Samsung in July last year. Co-founder Horrocks was among the speakers at last month's Galaxy Unpacked event, where Samsung unveiled its latest flagship smartphone, the Galaxy S25.

In a recent podcast interview with Computer Weekly, recorded after Galaxy Unpacked, Horrocks described the experience as "pretty amazing", which he says was the culmination of many years of research.

The DeepSeek effect

What is interesting about the technology developed by Oxford Semantic Technologies is that it does not require vast amounts of compute to run AI, as Horrocks explains: "One of the reasons why Samsung was so excited about our knowledge graph system is the fact that it can run on the phone. You can build it with a relatively small footprint and relatively small compute requirement."

Among the benefits of using on-device AI, as Horrocks points out, is: "You don't need to move potentially sensitive personal data off into the cloud. You can do everything on your own device, so you're in control."
"The AI on the phone can't use what it can't see, and it isn't sharing your sensitive personal data."

The idea that AI does not require vast farms of hugely expensive high-performance compute is a departure from the generally accepted approach deployed across the industry. In fact, a direct effect of China's DeepSeek is that it demonstrates to the AI world that AI can be done on the cheap.

This resulted in financial market turmoil, especially as server manufacturers and the likes of Nvidia have modelled their projected growth on an exponential rise in demand for AI servers powered by the latest and most expensive generation of graphics processing units (GPUs).

Supporting the UK AI opportunities action plan

The fact that a company spun out of Oxford University has not only been acquired by Samsung, but has had its technology embedded in the latest Samsung flagship smartphones, shows that the UK can create world-class AI.

Ian Horrocks, co-founder of Oxford Semantic Technologies, would like to see prime minister Keir Starmer provide more support for research, particularly in terms of PhD students. "I looked at the UK government's AI opportunities plan and it seems that there are some good ideas there," he says. "We should be building on our existing UK strengths in higher education, which I think is a massive opportunity for the UK."

Horrocks points out that the UK is renowned for its world-leading universities. But, he says: "One of the things that's a real challenge for us is funding PhD students."
"It seems to me this is a really cheap and effective way of attracting PhD students here, and if you train them in the UK, many of them are then willing to work and provide a massive boost to the UK economy."

That the world's leading smartphone manufacturer, Samsung, not only uses AI technology from a UK startup, but is also powering its devices using processor cores based on UK chip designer Arm's technology, demonstrates the UK's strengths in high tech.

But as Horrocks notes, fuelling such innovations requires research and development, and the ability to attract the best minds in the world to come to the UK to study and work.

When asked about DeepSeek, Horrocks says: "I still think it's very interesting how it challenges the orthodox view that generative AI [GenAI] is just all about compute power for training and inference."

In fact, anyone can download the open source distribution of DeepSeek and run it locally on a personal computer. But for many, the incredibly low cost of $0.14 per million tokens to query the cloud-based version undercuts rival large language models.

The US has banned the export of high-end chips from Nvidia to China in a bid to stifle Chinese AI research and development.

With more details of DeepSeek being revealed, what is becoming apparent is that its R1 model used inferior AI processor technology. Unlike the US AI firms, which have access to the latest Nvidia graphics processing units (GPUs), it has now transpired that DeepSeek used Huawei Ascend 910C AI chips for inference.

According to Huawei, the Ascend 910C is slightly inferior to the Nvidia H100 in performing basic learning tasks. However, it claimed that the Ascend 910C can offer lower energy costs and higher operational efficiency than its more powerful rival.
Yet even with the lower processing performance, benchmarks show that DeepSeek is just as good as OpenAI.

On-device AI

Irrespective of what political instruments are used to target DeepSeek (which, at the time of writing, is being sanctioned by a growing number of countries), its existence demonstrates that a large language model (LLM) can run on relatively low-spec hardware. It can be downloaded from an open source repository on GitHub, and there are small versions of the model that can run entirely disconnected from a network on a PC or Mac.

But Horrocks and the team at Oxford Semantic Technologies have been able to get AI to run on even smaller devices. Oxford Semantic Technologies' RDFox has a very small footprint in terms of processing requirements, as Horrocks explains: "One of the reasons why Samsung was so excited about our knowledge graph system is that RDFox can actually run directly on the phone."

Horrocks says that one of the challenges the people behind Oxford Semantic Technologies were able to tackle is how to achieve what he calls logical reasoning in a small footprint. This ultimately led to the RDFox product.

"I was working with a brilliant colleague at Oxford called Boris Motik, who had this idea of addressing this problem using a combination of modern computer architecture with some very clever, novel data structures and algorithms," he adds.

According to Horrocks, the main benefit, which RDFox shares with DeepSeek, is that potentially sensitive personal data does not need to be uploaded to the public cloud for processing. "You can do everything there on your own device, so then you're in control. You can control what the AI on the phone can see and what it can't see, and you know that it isn't sharing your sensitive personal data," he adds.

It is too early to tell how significant on-device AI will become.
It is already providing enhanced and richer internet search results, but this is just scratching the surface of what is possible once millions of people have AI on a device that they carry around everywhere. The brief feature, which on the Galaxy S25 summarises the day's activities, is something people may find useful.

"One of the things that's important," Horrocks adds, "is that it integrates across multiple devices." So, when you go to the gym or walk the dog, perhaps with a wearable device, the device's AI can learn a user's daily routine.

And, as Horrocks points out, this represents a big opportunity for Samsung.

Read more about DeepSeek

DeepSeek explained: everything you need to know: DeepSeek, a Chinese AI firm, is disrupting the industry with its low-cost, open source large language models, challenging US tech giants.

Assessing if DeepSeek is safe to use in the enterprise: The AI vendor has found popularity with its reasoning model. However, based on geopolitical tensions and safety tests, there are questions about whether enterprises should use it.
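The contrast the article draws between KRR and statistical machine learning can be made concrete with a toy example. The sketch below is not RDFox (a commercial, Datalog-based reasoner) and the facts and rule are invented for illustration; it only shows the general idea of forward-chaining logical inference over a small knowledge graph of triples, where every derived answer can be traced back to explicit facts and rules rather than statistical patterns.

```python
# Toy sketch of rule-based (Datalog-style) forward chaining over a
# knowledge graph of (subject, predicate, object) triples.
# Example data is invented; this is not Oxford Semantic's implementation.

facts = {
    ("galaxy_s25", "is_a", "smartphone"),
    ("smartphone", "subclass_of", "device"),
    ("device", "subclass_of", "product"),
}

def forward_chain(facts):
    """Apply the rule 'if X is_a C and C subclass_of D, then X is_a D'
    repeatedly until no new facts can be derived (a fixpoint)."""
    facts = set(facts)
    while True:
        derived = {
            (x, "is_a", d)
            for (x, p1, c) in facts if p1 == "is_a"
            for (c2, p2, d) in facts if p2 == "subclass_of" and c2 == c
        }
        new = derived - facts
        if not new:          # fixpoint reached: nothing left to infer
            return facts
        facts |= new

closure = forward_chain(facts)
# Each derived fact is explainable: it holds because a named rule fired
# on explicit facts -- the "logical and explainable" property the
# article attributes to KRR.
print(("galaxy_s25", "is_a", "product") in closure)  # → True
```

Because the whole computation is a small loop over an in-memory set, nothing here needs datacentre-scale compute, which is the general property (if not the engineering detail) behind running a reasoner on a phone.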
  • UK government sanctions target Russian cyber crime network Zservers
    www.computerweekly.com
The UK government has sanctioned Russian entity Zservers, as well as six individual members of the cyber group and its UK representative, XHOST.

In a Foreign, Commonwealth and Development Office statement under the names of foreign secretary David Lammy and minister of state for security Dan Jarvis, the government said Zservers provides vital infrastructure for cyber criminals as they plan and execute attacks against the UK.

The government characterises it as a component in a supply chain that supports and conceals the operations of ransomware gangs. Ransomware exponents rely on these services, it is said, to launch attacks, extort victims and store stolen data.

"Putin has built a corrupt mafia state driven by greed and ruthlessness. It is no surprise that the most unscrupulous extortionists and cyber criminals run rampant from within his borders," said Lammy.

"This government will continue to work with partners to constrain the Kremlin and the impact of Russia's lawless cyber underworld," he said.
"We must counter their actions at every opportunity to safeguard the UK's national security and deliver on our plan for change."

The plan for change involves building 1.5 million homes in England, fast-tracking planning decisions on more than 150 major economic infrastructure projects, as well as attaining an NHS standard of 92% of patients in England waiting no longer than 18 weeks for elective treatment.

As for Zservers, the government said the group advertises itself as a bulletproof hosting (BPH) provider.

Read more about Russian cyber attacks on the UK and responses

UK imposes sanctions on Conti ransomware gang leaders.

NCSC exposes Russian cyber attacks on UK political processes.

NCA-led Operation Destabilise disrupts Russian crime networks that funded the drugs and firearms trade in the UK, helped Russian oligarchs duck sanctions, and laundered money stolen from the NHS and others by ransomware gangs.

BPH providers like Zservers, said the government, protect and enable cyber criminals, offering a range of purchasable tools that mask their locations, identities and activities. Targeting these providers can disrupt hundreds or thousands of criminals simultaneously.

The UK is working alongside the US and Australia in this effort. The government cited sanctions against ransomware groups LockBit and Evil Corp as part of an ongoing campaign, which includes the National Crime Agency's (NCA) identification of Aleksandr Ryzhenkov, a prominent member of the Evil Corp cyber crime collective who also worked as an affiliate of the LockBit ransomware gang.

LockBit affiliates are known, said the government, to have used Zservers as a launch pad for targeting the UK, enabling ransomware attacks against various targets, including the non-profit sector.

"Ransomware attacks by Russian-affiliated cyber crime gangs are some of the most harmful cyber threats we face today, and the government is tackling them head on," said Jarvis.
"Denying cyber criminals the tools of their trade weakens their capacity to do serious harm to the UK."

"We have already announced new world-first proposals to deter ransomware attacks and destroy their business model. With these targeted sanctions and the full weight of our law enforcement, we are countering the threats we face to protect our national security."

The list of those sanctioned is: Zservers; XHOST Internet Solutions LP; and Zservers employees Aleksandr Bolshakov, Aleksandr Mishin, Ilya Sidorov, Dmitriy Bolshakov, Igor Odintsov and Vladimir Ananev.

Since Russia's attack on Ukraine was launched three years ago, western countries have applied economic sanctions against Russia and Russian individuals, with limited impact, including the unintended consequence of enabling Chinese spies to penetrate Russian defence research institutes, an operation dubbed Twisted Panda.

The Economist published a podcast in 2024 evidencing a consensus that the Russian economy has proved shockingly resilient to western sanctions, thanks largely to non-Nato countries giving succour to Russia. Historically, only the prevention of the so-called "war of the stray dog" between Greece and Bulgaria in 1925 can be chalked up to sanctions, even if they have some limited value.

Meanwhile, the Google Threat Intelligence Group has recently published a report detailing a systematic and growing convergence of cyber criminality with cyber warfare, mainly based in Russia and China.
  • AI Action Summit: UK and US refuse to sign inclusive AI statement
    www.computerweekly.com
The UK and US governments refused to sign a joint international declaration on inclusive and sustainable artificial intelligence (AI) as the AI Action Summit drew to a close.

A total of 61 countries, including France, China, India, Japan, Australia and Canada, have signed a Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet at the AI Action Summit in Paris, which affirmed a number of shared priorities.

These include promoting AI accessibility to reduce digital divides between rich and developing countries; ensuring AI is open, inclusive, transparent, ethical, safe, secure and trustworthy, taking into account international frameworks for all; avoiding market concentrations around the technology; reinforcing international cooperation; making AI sustainable; and encouraging deployments that positively shape labour markets.

"This summit has highlighted the importance of reinforcing the diversity of the AI ecosystem," the statement said.

It has laid out an open, multi-stakeholder and inclusive approach that will enable AI to be human rights-based, human-centric, ethical, safe, secure and trustworthy, it continued, adding that this rests on countries beginning to have multi-stakeholder dialogues and cooperation on AI governance.

"We underline the need for a global reflection integrating inter alia questions of safety, sustainable development, innovation, respect of international laws including humanitarian law and human rights law and the protection of human rights, gender equality, linguistic diversity, protection of consumers and of intellectual property rights."

While the UK and US governments have not immediately outlined the exact reasons for their refusal to sign the statement, a spokesperson for prime minister Keir Starmer has said the government would only ever sign up to initiatives that are in UK national interests.

During the summit, US vice-president JD Vance said that excessive regulation of the AI sector could kill a transformative industry,
adding that we need international regulatory regimes that foster the creation of AI technology rather than strangle it.

Aside from European regulation, he also criticised cooperation with China, as well as any regulation that threatens the interests of US companies. "We feel very strongly that AI must remain free from ideological bias and that American AI will not be co-opted into a tool for authoritarian censorship," said Vance.

"The Trump administration is troubled by reports that some foreign governments are considering tightening the screws on US tech companies with international footprints," he added. "America cannot and will not accept that, and we think it's a terrible mistake, not just for the United States of America, but for your own countries."

The AI Action Summit follows the inaugural AI Safety Summit, hosted by the UK government at Bletchley Park in November 2023, and the AI Seoul Summit, in South Korea in May 2024. While both previous summits were criticised for a lack of inclusivity, they largely focused on risks associated with the technology and placed an emphasis on improving its safety through international scientific cooperation and research.

In contrast, the AI Action Summit has seen politicians and industry figures decry the burden of AI red tape, while simultaneously committing hundreds of billions of further investment to AI-related infrastructure that aims to rapidly scale the technology.

Independent fact-checking charity Full Fact criticised the UK government's refusal to sign the statement, saying that it risks undercutting Britain's hard-won credibility as a world leader for safe, ethical and trustworthy AI.

"We need bolder government action to protect people from corrosive AI-generated misinformation that can damage public health and disrupt democracy at unprecedented speed and scale," said the charity's Andrew Dudfield.

Adam Leon Smith, an AI expert from BCS, the Chartered Institute for IT, added that when BCS surveyed technology experts last year, 88% said it was important
that the UK government takes a lead in shaping global ethical standards in AI and other high-stakes technologies.

"Whether that's through a declaration or not, the world's richest countries will ultimately need to show they can put geopolitics aside, balance AI innovation with safety, and be responsible enough to work together at this critical moment in human history," he said.

Read more about artificial intelligence

Google drops pledge not to develop AI weapons: Google has dropped an ethical pledge not to develop artificial intelligence systems that can be used in weapons or surveillance systems.

Government opens up bidding for AI growth zones: As part of its AI opportunities action plan, the government is encouraging local authorities to put in bids for AI growth zones.

DWP fairness analysis reveals bias in AI fraud detection system: Information about people's age, disability, marital status and nationality influences decisions to investigate benefit claims for fraud, but the Department for Work and Pensions says there are no immediate concerns of unfair treatment.

Given the emphasis placed on deregulation by Vance and other key political figures during the summit, some argued they are setting the stage for a race to the bottom on AI regulation.

Jeni Tennison, executive director of non-profit Connected by Data, said: "It's unsurprising that the current US administration would decline to sign a commitment to more inclusive, equitable and sustainable AI, given their version of free speech blacklists these terms."
"It's unclear if the UK government objects to any particular part of the summit statement, or is simply trying to stay on the good side of Trump and US investors."

Commenting on Vance's confused speech about excessive regulation, she added: "He reinforced the US's deregulatory, race-to-the-bottom trajectory for AI, while championing insurgent competition and worker voice: the very things regulation is needed for."

Sandra Wachter, a professor of technology and regulation at the Oxford Internet Institute, said rhetoric arguing that regulation will stifle innovation is misleading. "Regulation is trying to make AI less biased, more explainable and less harmful," she said.

"Who does really win when there is no regulation? How is your life improved if sexist AI decides that your child is not allowed to attend university? How is your life improved when opaque AI fires you and you don't even receive an explanation? How is your life improved if AI is allowed to spread misinformation on the web? How is your life improved if we use unsafe and untested AI in healthcare?"

"I think we really need to question this rhetoric," she said. "Who wins if there is no regulation? Is it eight billionaires or the other eight billion people?"

Criticising the content of the statement itself, Gaia Marcus, director of the Ada Lovelace Institute, said it fails to build on the mission of making AI safe and trustworthy, and the safety commitments of previous summits. There are no tools to ensure tech companies are held accountable for harms, she said, and there is a growing gap between public expectations of safety and government action to regulate.

However, Marcus did note that the summit has offered some alternatives for the future. "It proposes a future where AI is used sustainably through the Coalition for Sustainable AI," she said.
"And a future where the technologies, tooling and infrastructure are widely accessible for use in the public interest through the AI foundation."

Marcus added that governments must build and invest in alternatives to ensure the value and benefits of tech advances are felt by everyone, in a way that avoids paying extortionate rent to a few large companies for a generation.

Mike Bracken, a founding partner at digital transformation consultancy Public Digital, said that while the UK and US governments' refusal to sign is a visible sign of tension, it should not overshadow the actual, progressive delivery-based outcomes of the summit, such as the sustainability coalition and the public interest AI foundation, known as Current AI.

"I have attended many government-backed events which result in statements and handshakes and warm words; the ones that really matter are the ones that result in institutions, money, change and delivery," he said, adding that the real success of the summit is in how it has recast AI as a public good through these initiatives. "It's not simply seen as an extension of monopolistic technology providers," said Bracken.

He added that he's much more concerned about getting institutions ready for public AI delivery than signing statements, noting that diplomatic relations will improve as a result of working together through the other channels established during the summit.

"I don't mean to diminish the communique, but the institutional funds moved by France to rapidly embrace AI, the support of that by the European Union, the creation and backing for open source tooling, and the quite detailed approaches towards creating data sharing agreements for public bodies: they are all far more important than the communique, in my opinion," said Bracken.
  • AI Action Summit: Two major AI initiatives launched
    www.computerweekly.com
Two major artificial intelligence (AI) initiatives have been launched at the AI Action Summit in Paris, focused on promoting sustainable and public-interest applications of the technology.

The first is Current AI, a public interest foundation launched by French president Emmanuel Macron that seeks to steer the development of the technology in more socially beneficial directions.

Backed by 10 governments (Finland, France, Germany, Chile, India, Kenya, Morocco, Nigeria, Slovenia and Switzerland), as well as an assortment of philanthropic bodies and private companies (including Google and Salesforce, which are listed as core partners), the initiative aims to reshape the AI landscape by expanding access to high-quality datasets; investing in open source tooling and infrastructure to improve transparency around AI; and measuring its social and environmental impact.

Key focus areas for the initiative will include healthcare, linguistic diversity, science, and issues such as trust, safety and AI auditing.

"Current AI can change the world of AI," said Macron. "By giving access to data, infrastructures and computing power to a large number of partners, Current AI will contribute to developing our own AI ecosystems in France and Europe, to diversifying the market, and to fostering innovation throughout the world, in a fair and transparent way."

The second initiative launched is the Coalition for Environmentally Sustainable AI, which aims to bring together stakeholders across the AI value chain for dialogue and ambitious collaborative initiatives.

Consisting of 91 partners so far, including 37 tech firms, 11 countries and five international organisations, the coalition will be spearheaded by France, the UN Environment Programme (UNEP) and the International Telecommunication Union (ITU).

"I want to accelerate the use of artificial intelligence in Norway," said Norwegian minister of digitalisation Karianne Tung. "It will increase workforce availability, create new jobs, and help address societal challenges. At the same time, we know that AI requires a lot of energy. It is therefore important that countries work together to make technology more sustainable."

According to a statement on the coalition's website, its primary goal is to make sure "the momentum will be continued for future events (AI Summits, COP and other international gatherings), with other national authorities and international organisations".

"This coalition serves as a platform to identify stakeholders committed to the topic, and as a hub for giving visibility to open-ended, collaborative international initiatives pursuing the advance of science, standards and solutions for aligning AI and environmental goals," it says.

Current AI founder Martin Tisné, who was a key organiser of the AI Action Summit, said the goal of creating the public interest foundation is to create a financial vehicle to provide a "North Star" for public financing of critical efforts, such as using AI in a range of healthcare contexts.

"We have a critical window to shape the future of artificial intelligence," Tisné said in a statement. "AI has the power to transform access to jobs, healthcare and education for the better, but only if we act now. Current AI will drive a shift towards open, people-first technologies.

"We've seen the harms of unchecked tech development and the transformative potential it holds when aligned with the public interest. By supporting innovation that benefits all, we can ensure AI serves the public good."

Read more about artificial intelligence
- Google drops pledge not to develop AI weapons: Google has dropped an ethical pledge to not develop artificial intelligence systems that can be used in weapon or surveillance systems.
- AI Action Summit: UK and US refuse to sign inclusive AI statement: The UK and US governments' decisions not to sign a joint declaration have attracted strong criticism from a range of voices, especially in the context of key political figures calling for AI red tape to be cut.
- Government opens up bidding for AI growth zones: As part of its AI opportunities action plan, the government is encouraging local authorities to put in bids for AI growth zones.

The initiative is currently backed by a $400m investment from the French government, philanthropists and industry partners, and will aim to raise a total of $2.5bn over the next five years.

Speaking during the summit, United Nations secretary general António Guterres urged international collaboration on AI to ensure the technology expedites sustainable development, rather than contributing to entrenching global inequality.

"The power of AI carries immense responsibilities," he said. "Today, that power sits in the hands of a few. While some companies and some countries are racing ahead with record investments, most developing nations find themselves left out in the cold. This growing concentration of AI capabilities risks deepening geopolitical divides.

"We must prevent a world of AI haves and have-nots. We must all work together so that artificial intelligence can bridge the gap between developed and developing countries, not widen it. It must accelerate sustainable development, not entrench inequalities."

Guterres added that the launch of Current AI is an important contribution to global AI capacity building, noting that his office will soon present a report on innovative voluntary financing models and capacity-building initiatives to help all countries harness AI as a force for good.

Commenting on the turn to open source infrastructure, Mitchell Baker, chair of the Mozilla Foundation, said it was a positive move. "Just over a year ago at Bletchley Park, open source AI was framed as a risk," she said. "In Paris, we saw a major shift. There is now a growing recognition that openness isn't just compatible with AI safety and advancing public-interest AI; it's essential to it."

Baker added that it's clear "we need infrastructure beyond private, purely profit-driven AI. That means building AI that serves society and promotes true innovation, even when it doesn't fit neatly into short-term business incentives. The conversations in Paris show that we're making progress, but there's more work to do."

Mike Bracken, a founding partner at digital transformation consultancy Public Digital, described the public-interest focus of Current AI as a really strong outcome of the summit, and said it's important the body already has a budget.

"It's got a mandate, it's got funding," he said. "It's got a substantial number of funders from governments, NGOs, philanthropic organisations and individual countries, so people are putting their hands in their pockets.

"We've got loads of institutions that do ethics committees, think tanks and societies for this and that, but public institutions that act globally to create public AI assets that can change people's lives? We haven't had one of those before. That's big news."
  • Microsoft's February 2025 Patch Tuesday corrects 57 bugs, three critical
    www.computerweekly.com
Microsoft is correcting 57 vulnerabilities in its February Patch Tuesday update, two of which are being actively exploited in the wild, and three of which are critical.
By Brian McKenna, Enterprise Applications Editor
Published: 12 Feb 2025 16:00

Microsoft followed up its massive January Patch Tuesday update, which contained fixes for 159 vulnerabilities, with a more modest crop this month. This time, it released fixes for 57 new Common Vulnerabilities and Exposures (CVEs) in its update, three of which are critical.

Dustin Childs of the Zero Day Initiative described one of the vulnerabilities as unprecedented in the wild: a Windows storage elevation of privilege (EOP) vulnerability, CVE-2025-21391.

In a blog post, Childs said: "This is a type of bug we haven't seen exploited publicly. The vulnerability allows an attacker to delete targeted files. How does this lead to privilege escalation? My colleague Simon Zuckerbraun details the technique here. While we've seen similar issues in the past, this does appear to be the first time the technique has been exploited in the wild. It's also likely paired with a code execution bug to completely take over a system. Test and deploy this quickly."

In Computer Weekly's sister title SearchWindowsServer, Tom Walat picked out two new zero-day vulnerabilities that Microsoft has fixed in this Patch Tuesday, including the EOP that Childs highlighted.

The first new zero-day is a Windows Ancillary Function Driver for WinSock elevation-of-privilege vulnerability (CVE-2025-21418), rated important with a CVSS (Common Vulnerability Scoring System) score of 7.8. "This bug affects all currently supported Windows desktop and server systems," he wrote.

The second new zero-day is the storage EOP vulnerability (CVE-2025-21391) that Childs commented on, to which Walat added: "To exploit the vulnerability, the attacker only needs local access to the network with low privileges. If successful, the attacker can delete files on a system to cause service disruptions and possibly perform other actions, such as elevating their privileges."

Childs also picked out CVE-2025-21376, a Windows Lightweight Directory Access Protocol (LDAP) remote code execution (RCE) vulnerability. "This vulnerability allows a remote, unauthenticated attacker to run their code on an affected system simply by sending a maliciously crafted request to the target," he wrote. "Since there's no user interaction involved, that makes this bug wormable between affected LDAP servers. Microsoft lists this as exploitation likely, so even though this may be unlikely, I would treat this as an impending exploitation. Test and deploy the patch quickly."

In the CVE notes to this critical vulnerability, which has a CVSS rating of 8.1, Microsoft stated: "An unauthenticated attacker could send a specially crafted request to a vulnerable LDAP server. Successful exploitation could result in a buffer overflow which could be leveraged to achieve remote code execution."

There are also several Microsoft Excel bug fixes in this update, including CVE-2025-21387, an RCE vulnerability. "This is one of several Excel fixes where the Preview Pane is an attack vector, which is confusing as Microsoft also notes that user interaction is required," said Childs. "They also note that multiple patches are required to address this vulnerability fully. This likely can be exploited either by opening a malicious Excel file or previewing a malicious attachment in Outlook. Either way, make sure you get all the needed patches tested and deployed."

This vulnerability is one of six Excel flaws that Microsoft corrected this month, in what proved to be a relatively light Patch Tuesday.

Read more about Patch Tuesday
- February 2025: Microsoft plugs two zero-days for February Patch Tuesday.
- January 2025: The largest Patch Tuesday of the 2020s so far brings fixes for more than 150 CVEs ranging widely in their scope and severity, including eight zero-day flaws.
- December 2024: Microsoft has fixed over 70 CVEs in its final Patch Tuesday update of the year, and defenders should prioritise a zero-day in the Common Log File System Driver, and another impactful flaw in the Lightweight Directory Access Protocol.
- November 2024: High-profile vulnerabilities in NTLM, Windows Task Scheduler, Active Directory Certificate Services and Microsoft Exchange Server should be prioritised from November's Patch Tuesday update.
- October 2024: Stand-out vulnerabilities in Microsoft's latest Patch Tuesday drop include problems in Microsoft Management Console and the Windows MSHTML Platform.
- September 2024: Four critical remote code execution bugs in Windows and three critical elevation-of-privilege vulnerabilities will keep admins busy.
- August 2024: Microsoft patches six actively exploited zero-days among over 100 issues during its regular monthly update.
- July 2024: Microsoft has fixed almost 140 vulnerabilities in its latest monthly update, with a Hyper-V zero-day singled out for urgent attention.
- June 2024: An RCE vulnerability in a Microsoft messaging feature and a third-party flaw in a DNS authentication protocol are the most pressing issues to address in Microsoft's latest Patch Tuesday update.
- May 2024: A critical SharePoint vulnerability warrants attention this month, but it is another flaw, seemingly linked to the infamous Qakbot malware, that is drawing attention.
- April 2024: Support for the Windows Server 2008 OS ended in 2020, but four years on there's a live exploit of a security flaw that impacts all Windows users.
- March 2024: Two critical vulnerabilities in Windows Hyper-V stand out on an otherwise unremarkable Patch Tuesday.
- February 2024: Two security feature bypasses impacting Microsoft SmartScreen are on the February Patch Tuesday docket, among more than 70 issues.
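The prioritisation advice running through the article (actively exploited zero-days first, then high-severity bugs such as the LDAP RCE) can be sketched as a small triage script. This is an illustrative sketch, not Microsoft's or ZDI's official guidance: the ordering rule is a common-sense assumption, and only the 7.8 and 8.1 CVSS scores are stated in the article, so the other entries carry no score.

```python
# Illustrative triage of the February 2025 Patch Tuesday CVEs discussed above.
# Scores and exploitation flags come from the article where given; the
# prioritisation rule (exploited first, then CVSS descending) is an assumption.

cves = [
    # (CVE ID, CVSS score or None if not stated in the article, actively exploited?)
    ("CVE-2025-21418", 7.8, True),    # WinSock EOP zero-day
    ("CVE-2025-21391", None, True),   # Windows storage EOP zero-day
    ("CVE-2025-21376", 8.1, False),   # LDAP RCE, "exploitation likely"
    ("CVE-2025-21387", None, False),  # Excel RCE via Preview Pane
]

def triage(entries):
    """Order CVEs for patching: exploited bugs first, then by CVSS (unknown scores last)."""
    return sorted(entries, key=lambda e: (not e[2], -(e[1] or 0.0)))

for cve_id, score, exploited in triage(cves):
    label = "exploited in the wild" if exploited else "not yet exploited"
    print(f"{cve_id}: CVSS {score if score is not None else 'n/a'} ({label})")
```

Run as-is, the script lists the two exploited zero-days ahead of the critical-but-unexploited LDAP and Excel flaws, matching the order of urgency the commentators describe.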
  • Forrester: AI and cyber security drive up IT spending
    www.computerweekly.com
Despite rising investment in artificial intelligence and cyber security, technical debt remains a significant drain on IT budgets.
By Cliff Saran, Managing Editor
Published: 12 Feb 2025 16:00

Market analysis from Forrester has forecast that organisations in Europe are set to increase IT spending by 5% during 2025.

According to the analyst's Global tech forecast, 2024-2029, global tech spending will reach $4.9tn in 2025, with accelerated investment in artificial intelligence (AI), cyber security and cloud infrastructure. Generative AI (GenAI), cyber security and cloud services are poised to drive growth of 5.6% in 2025, up from 4.6% in 2024.

Europe is set to spend less than the US and Asia-Pacific as a proportion of gross domestic product (GDP). The research from Forrester shows that the US accounted for 41% of global tech spend and 46% of AI software spend in 2024. Almost 70% of the top 24 companies by market capitalisation that saw the fastest growth from 2015 to 2023 come from the US, and more than half of those are media and information companies.

Forrester's forecast shows that Asia-Pacific tech spend will grow 5.6% in 2025. The analyst predicts that the Asia-Pacific region will see real GDP growth that far exceeds the global average, led by countries including India, the Philippines, Vietnam and Indonesia. Government initiatives in China and India, and increased investment in GenAI and semiconductors in Japan and South Korea, will help drive tech spend. India will see the fastest growth, with tech spend expected to increase by 9.6% in 2025.

According to Forrester, software and IT services combined will account for 66% of global technology spend in 2025, fuelled by increased investment in cyber security services and the modernisation of legacy systems. Software alone will grow at a rate of 10.5%, and is expected to capture 60% of global tech spend growth by 2029, making it the fastest-growing tech sector.

"Over the next five years, technology investments will reshape industries at an unprecedented pace," said Michael O'Grady, principal forecast analyst at Forrester.

In a blog post, O'Grady said the analyst's forecast suggests software spending by enterprises and governments will reach 1.7% of global GDP by 2029, nearly doubling its worth since 2016. "GenAI will revolutionise sectors such as financial services, media and retail, enhancing customer experiences with more personalised and human-like virtual assistants and customer service solutions," he said.

Read more IT strategy stories
- Forrester: IT budget growth pushed towards cross-functional teams: Analyst firm Forrester has forecast a slight increase in IT budgets for 2025, but technical debt is one of the areas likely to feel the squeeze.
- How to optimise SAM budget in today's heterogeneous environments: Hidden software costs and ecosystem complexity make managing a mixed software estate harder than ever. Here's how to optimise your spending.

Among the big growth areas, according to Forrester's analysts, will be IT consulting and system integration services, which account for 19% of global IT spending. O'Grady said tech outsourcing and hardware maintenance make up 15% of global IT spending in the latest forecast. "Driven by infrastructure as a service, outsourcing services growth outpaces that of consulting services," he said.

According to O'Grady, companies that focus their investments on cyber security and AI will not only strengthen their competitive edge, but also achieve sustainable growth. However, he warned that they need to balance rapid tech investments with ongoing efforts to manage legacy systems and reduce technical debt. "Legacy systems still capture two-thirds of global tech spending," said O'Grady. "With the half-life of tech skills at less than five years, skills renewal of the tech workforce is vital."
  • AI at Leap 2025: Huge potential but a threat to the fabric of society?
    www.computerweekly.com
At Saudi Arabia's Leap 2025 event in February, the kingdom announced huge investment in artificial intelligence (AI), while the conference element of the show majored heavily on key questions about AI. These ranged from the next steps, such as getting business value from enterprise AI, to challenges in developing agentic AI, as well as the more distant future in robotics and causal AI.

Also addressed was the need to focus on how AI will change society, and how to ensure it is a force for good rather than one that undermines social coherence.

Yaser Al-Onaizan, CEO of the National Center for AI at the Saudi Data and AI Authority (SDAIA), focused on agentic AI as the next step, as in AI that works on our behalf.

"The large language models [LLMs] understand how language is constructed, the sequences that people generate," he said. "But the promise of AI is that it will be in everything we do and touch every day. But it needs to be invisible. It cannot be in your face; it should be listening to you, understanding you and doing things based on your opinion, without you even asking sometimes.

"So, for example, you can interact with a model and, instead of just giving you information about flights, it can go on and reserve flights or make a hotel reservation for you."

But, said Al-Onaizan, the challenge is for AI to work on humans' behalf and to get things right, to understand common sense, so that decisions made autonomously fit with what's practical for those on whose behalf it works.

Meanwhile, Lamia Youseff, founder of Jazz Computing, is an industry thought leader in AI and cloud, with a CV that includes Google, Microsoft, Facebook and Apple, and academic research institutions including Stanford, MIT and UCSB. She says we can see AI in several phases and inflection points:

- Enterprise AI: big data led the way by gathering huge amounts of data together for analysis. Enterprise AI introduces optimisations and has brought a tsunami of new products and services.
- Agentic AI: the next step for the next two years will bring agents that work on our behalf, taking commands, conversing with LLMs, breaking commands into steps and taking actions.
- Robotics and humanoids: these will require great innovation in communications and machine understanding of human language to combine LLMs and robotics, such as in driverless cars, and, critically, with the ability to interact in a 3D world.
- Causal AI: where AI can predict incredibly complex real-world events, such as stock market fluctuations.

Elsewhere, speakers focused on how to gain business value from AI. They included Aidan Gomez, CEO of Canadian company Cohere, which specialises in the use of LLMs in enterprises. His focus is on making LLMs useful for enterprises by helping build an application stack that makes use of them.

"You have to be technical, you need to be a developer, to be able to build something on top of this model to create value on the other side," he said.

Challenges include being able to lower barriers so AI can integrate with internal systems, and security as AI moves out of the proof-of-concept phase and touches the most sensitive customer data.

"The biggest thing the enterprise world needs is good solutions that can plug in and go," he said. "The barrier is for enterprises to adopt AI securely, which means completely private deployments. Then we can start to shift the work that humans are doing onto these models. To succeed, they need to be able to use the systems that humans today use to get their job done and integrate generative AI with the internal software and systems."

Gomez added: "When people were just testing out the technology, the security piece wasn't so important, because they weren't putting mission-critical data into those systems. Now, we're moving out of the proof-of-concept phase and we're going into production, and these models are touching the most sensitive customer data, so security is front of mind."

He, too, pointed to agentic AI as the next stage, and to the challenges that need to be solved. The first is the use of reasoning models and the second is learning from experience.

"Just think about what reasoning is," said Gomez. "What happens now is you can ask the model what's 1+1, or you can ask it to prove Fermat's last theorem, and the model would spend the same amount of time answering both of those questions, which makes no sense.

"With reasoning, you can spend different amounts of energy on different difficulties and problems. So now we can approach things dramatically more efficiently, but more effectively. That's a major unlock for agentic AI. You want agents to be able to think through problems and really reason about it," he added.

"The second thing is learning from experience. So, with a human, when you tell them, 'You did something wrong. Here's how to fix it', they remember that and they learn forever not to make that same mistake. When models have that capability to learn from experience with the user, it will unlock the ability to teach your own model just by interacting with it."

Finally, Lambert Hogenhout, chief of data, analytics and emerging technologies at the United Nations, warned of the dangers of AI to human society if we are passive in our approach to it. In other words, AI has the capacity to undermine human agency and even be a force that can work against us if uncontrolled. In his view, the key risks posed by AI include threats to:

- Autonomy: AI makes you more competent, almost perfect, but means you are thereafter forced to interact with AI to be the person it makes you.
- Identity: AI even has the capability to build a replica of someone's personality if it has enough information about the person, taking identity theft to a new level.
- Purpose: if AI replaces many jobs, what is left for humans, and how do we make sure we focus on what humans are good at, such as cooperation and creativity?
- Happiness and connection to society: it is very important for humans to feel happy and connected to society. If that is undermined, there will be problems.

"AI is giving us lots of ways to improve our business and our private life," said Hogenhout. "But in the long term, say 20 years, I think nobody predicted what the world is going to look like 20 years from now.

"Some people have a very positive view, where the robots do the work and we can spend our days playing golf or watching interesting movies. But the dystopian view says we will lose our purpose in life, and for humans it is going to become quite miserable, except for a few AI billionaires."

Hogenhout pointed to how smartphones have changed everything, and how AI is likely to do the same. First, he spoke about the issue of autonomy.

"I've always wanted to be funnier when I respond to messages from my friends. I've wanted to be more eloquent when I write emails to my boss. And you know, I can, with the help of AI. But it means I rely on AI for every communication, and when everybody does that, when everybody is perfect, can you afford not to use it, to be the only non-augmented human? We're going to be forced to augment ourselves with AI."

And on identity, Hogenhout talked about how AI has the potential to comprehensively clone a human being's personality.

"What if somebody takes everything I've ever written (emails, posts, everything) and it knows how to respond like me? But there are actually companies doing that already.

"There's an app called Hereafter that's meant for elderly people. Your grandfather, for example, can be interviewed and information added about his life. Then, once grandpa dies, on your phone there is grandpa's voice and you can ask him things like what's happening with the Super Bowl or a football match, and it responds exactly like grandpa would have."

That forces the question, said Hogenhout, of what distinguishes us if it is so easy to replicate identity down to the level of voice and opinions.

Finally, Hogenhout looked at purpose. "The meaning in our lives comes from the work we do, but it's already clear a lot of jobs are going to be replaced. It's true, we do dumb stuff a lot of the time, but we have incredible skills and abilities," he said.

"What makes us quite unique as a species is our creativity. There's also this sense of not accepting reality as it is. But we're not just innovators. I think we're very good at cooperating. There's a reason that we're all together here at this conference, because we want to learn from each other. These connections are very important in society.

"It's important for us to take decisions and to feel fulfilment. We want to make sure AI increases living connections, that we are not eliminated, that it makes a good society. A society where a number of people are excluded is not going to work. It will create problems," added Hogenhout.

His conclusion is that AI will change us in big ways, but we need to ensure we act intentionally. Otherwise, the potential to lock large swathes of humanity out of a core whose lives are augmented by AI will lead to a broken society, from which problems will result.

"At the moment, I think we're too passive. We're just waiting for the next amazing AI system to come out. We're not thinking ahead, thinking about how this is going to affect our lives."

Read more about the future of AI and agentic AI
- Salesforce's agentic AI platform to transform business automation: The CRM giant's Agentforce lets organisations build and deploy autonomous agents to automate business processes through advanced learning and data integration.
- AI is meant to free up time, and yet, somehow, it's stealing it: Realising the liberatory potential of artificial intelligence requires a culture shift that places people before profits and productivity.
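The agentic pattern the speakers describe (an agent that takes a command, breaks it into steps via a planner, then executes each step with a tool, such as booking flights and hotels) can be sketched in a few lines. This is an illustrative toy under stated assumptions: the planner and tools are hard-coded stand-ins where a real system would call an LLM and live booking APIs.

```python
# Minimal, illustrative agent loop for the "agentic AI" pattern described
# above: take a command, break it into steps, execute each step with a tool.
# The planner and tools are stand-ins; no real LLM or external API is called.

def plan(command: str) -> list[str]:
    """Stand-in for an LLM planner that decomposes a command into step names."""
    if "trip" in command:
        return ["search_flights", "book_flight", "book_hotel"]
    return ["answer"]

# Tool registry: each named step maps to a callable that performs the action.
TOOLS = {
    "search_flights": lambda: "found 3 flights",
    "book_flight": lambda: "flight booked",
    "book_hotel": lambda: "hotel booked",
    "answer": lambda: "answered directly",
}

def run_agent(command: str) -> list[str]:
    """Execute each planned step in order and collect the results."""
    return [TOOLS[step]() for step in plan(command)]

print(run_agent("plan a trip to Paris"))
# → ['found 3 flights', 'flight booked', 'hotel booked']
```

The same loop structure underlies Al-Onaizan's flight-booking example: the value is not in any single model call, but in decomposing an intent into actions the system can carry out on the user's behalf.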
  • CCRC reviewing 17 Post Office convictions with potential Capture software involvement
    www.computerweekly.com
  • Cisco: We will get better on AI power consumption
    www.computerweekly.com
At Cisco Live EMEA in Amsterdam, the networking giant launched a number of artificial intelligence (AI) and datacentre infrastructure products, and opened up about some of the tricky issues around supporting the intense energy demands that AI workloads place on current systems, which have been a subject of discussion and concern since ChatGPT went mainstream at the end of 2022.

Last year, National Grid CEO John Pettigrew said AI would be a significant factor in an anticipated sixfold increase in datacentre power requirements between now and the mid-2030s, and a more recently published RAND research report estimated that AI-driven compute would add 68GW (gigawatts) of power demand worldwide by 2027 (almost as much electricity as is consumed by California alone) and over 325GW by 2030.

Tom Gillis, senior vice-president and general manager of Cisco's security, datacentre, internet and cloud infrastructure group, spoke of an approaching inflection point, given the astonishing power requirements of building datacentres fit for the AI era.

And in conversation with Computer Weekly on the fringes of the show, Cisco executive vice-president and chief product officer Jeetu Patel said that while the firm rightly acknowledges the criticality of sustainability, it believes it is helping keep things moving in the right direction on this issue.

"The larger sustainability issue we should keep in mind is that our products are getting more and more efficient from a power perspective," said Patel.

However, he continued, if the use case that naturally arises from this improving efficiency is that Cisco users feel comfortable running more and more AI workloads across its infrastructure, the consequent rise in power consumption will quickly wipe out the gains made from the more efficient, probably liquid-cooled, electronics.

"The way I think about this is that you have to get AI to a point where intelligent outcomes come about from AI because it is smart enough, and that's the race right now. Once we get there, AI will be able to help us solve a lot of power problems and sustainability problems," said Patel.

Asked whether the second Trump administration's environmental policies, which include an increased commitment to exploiting the US's fossil fuel reserves and the country's withdrawal from the Paris Agreement that aims to limit global heating to 1.5°C, would impact Cisco's sustainability goals, Patel said it was too early to tell how things would pan out over the next four years.

However, he said, it is clear that the tech industry today finds itself in both a compute- and energy-constrained environment, particularly in regard to the appetite for power that AI workloads have.

"We will see some major breakthroughs on that front that reduce the cost of compute, and therefore reduce the cost of power consumption," he said. "For example, DeepSeek. DeepSeek might have actually been very good for the environment, because what it did was teach the models to train at a lower cost and identify techniques that can actually be more efficient. This then will allow you to get many, many more models built at a fraction of the cost of what you could do," he suggested.

Read more about AI and sustainability
- The soaring enterprise demand for AI services is causing all sorts of problems for datacentre operators and their sustainability goals, which will need addressing in 2025.
- Europe's datacentre operators need to get their power report cards in order to comply with the European Commission's drive for greener EU datacentres.
- While improvements in energy efficiency have kept electricity consumption in datacentres in check, according to the International Energy Agency, to reach net zero, emissions must halve by 2030.
  • Entrepreneurship with a conscience - the new mandate for tech leaders
    www.computerweekly.com
In today's rapidly evolving digital landscape, entrepreneurship isn't just about disruption, innovation and profitability anymore. As technology and enterprise leaders, we face a new mandate: to align business growth with social responsibility.

Technology influences every facet of our lives, from how we communicate and work to how we shop, learn and even think. With that influence comes a profound responsibility to wield power ethically and thoughtfully. It's no longer just about chasing profits; it's about making a meaningful, lasting impact on society.

When I first started in the tech world in the mid-2000s, success was often measured by metrics such as market share, revenue growth and valuations. The primary focus was on how quickly a company could scale, outmanoeuvre competitors and secure investor confidence.

But over the years, there has been a profound shift in priorities and expectations. Stakeholders, whether they're customers, employees, investors or entire communities, are demanding more from businesses. They expect companies to address critical issues such as climate change, data privacy, diversity and inclusion, and digital accessibility. And this isn't just a moral obligation anymore; it's becoming an essential component of long-term business success.

From my experience across various industries, and as a member of the Forbes Technology Council, I can confidently say that true leadership isn't just about scaling businesses; it's about scaling impact. C-suite technologists and entrepreneurs need to recognise that our innovations affect real people and communities in profound ways.
Success today is defined by how well we integrate purpose into profit, ensuring that our technological advancements contribute positively to society while driving sustainable growth.

Two companies stand out to me as prime examples of how social responsibility and profitability can coexist harmoniously.

While not a tech company, outdoor clothing firm Patagonia sets the gold standard for corporate responsibility. Its unwavering dedication to environmental sustainability, using recycled materials in its products and donating a significant portion of profits to environmental causes, proves that businesses can thrive while prioritising the planet. Patagonia's success demonstrates that consumers are willing to support companies that align with their values.

"The pace of change places an even greater responsibility on leaders to ensure that technologies do not inadvertently harm the communities we aim to serve," says Dax Grant.

Under Satya Nadella's visionary leadership, Microsoft has prioritised accessibility, ensuring products and services cater to people with disabilities. Its "AI for good" initiative exemplifies how technology can be leveraged to address global challenges, from healthcare and education to environmental conservation and humanitarian efforts. Microsoft's commitment to inclusivity and ethical innovation has not only enhanced its brand reputation, but driven financial performance.

These companies illustrate that prioritising social good doesn't just enhance brand reputation; it creates resilient, future-proof organisations capable of navigating the complexities of the modern business landscape.

Conversely, ignoring social responsibility can have dire consequences for companies. In the tech industry, the speed of innovation often outpaces regulation.
This rapid pace places an even greater responsibility on us as leaders to self-regulate and ensure that our technologies do not inadvertently harm the very communities we aim to serve.

So, how can technology leaders and entrepreneurs integrate social responsibility into their business models without compromising profitability? Based on my experiences and observations, here are some practical strategies to achieve this balance.

Define your purpose early: From day one, be clear about your mission beyond making money. What societal issue are you addressing? How does your product or service improve lives or contribute to the greater good? Embedding purpose early ensures it becomes a guiding principle as you scale. When your mission is deeply ingrained in your company's DNA, it influences every decision, from product development to marketing and customer engagement.

Build a diverse and inclusive team: Diverse teams foster innovation, creativity and resilience. They bring different perspectives and experiences to the table, helping identify blind spots and avoid groupthink. It's not just about diverse hiring practices, but about ensuring representation in leadership, decision-making and company culture.

Prioritise transparency and accountability: Be open and honest about business practices, from supply chain ethics to data handling and environmental impact. Transparency builds trust with stakeholders and creates a culture of accountability within the organisation. Regularly communicate your progress on social and environmental goals.

Align profit with positive impact: Seek business models that tie financial success to positive societal outcomes. For example, consider donating a portion of profits to charitable causes, investing in sustainable practices, or designing technology that increases accessibility for underserved communities.
Social enterprises, for instance, are businesses that prioritise social impact while remaining financially viable.

Measure what matters: Go beyond traditional key performance indicators and track metrics related to social impact. These could include carbon footprint reduction, diversity and inclusion statistics, community engagement, and employee wellbeing. Measuring and reporting on these metrics not only holds your company accountable, but also demonstrates your commitment to stakeholders.

Foster a culture of responsibility: Encourage employees to consider the broader impact of their work. Implement volunteer programmes, establish ethical coding guidelines, and set sustainability goals to cultivate a culture where responsibility is valued and celebrated. Recognise and reward employees who contribute to social and environmental initiatives, reinforcing the importance of these values.

As technology continues to shape our world, the role of the tech entrepreneur is evolving. The next generation of leaders won't just drive innovation; they'll do so with a deep sense of responsibility and purpose. Technological entrepreneurial leadership today requires balancing ambition with empathy, recognising that every business decision ripples out into society.

The future of entrepreneurship lies at the intersection of profit and purpose. As the tech industry continues to influence every facet of our lives, those of us who embrace this dual mandate won't just survive; we'll thrive. We'll leave a lasting, positive impact on the world, creating a legacy of innovation, integrity and social responsibility.
The time to act is now, and the responsibility is ours to lead with conscience, compassion and conviction.

Read more about ethical leadership in tech
Digital Ethics Summit 2024: recognising AI's socio-technical nature. At trade association TechUK's eighth annual Digital Ethics Summit, public officials, industry figures and civil society groups met to discuss the ethical challenges associated with the proliferation of technology.
Swedish CIO contributes best practices for ethical use of artificial intelligence. IT leaders are scrambling to keep up with AI technology, but many are losing sight of its ethical impact and what CIOs need to do to ensure responsible use.
Is diversity suffering because of budget management? Many are considering quitting their jobs over the next year because of rising workloads and falling team sizes. Are firms misallocating budgets and causing retention issues?
  • Google: Cyber crime meshes with cyber warfare as states enlist gangs
    www.computerweekly.com
A report from the Google Threat Intelligence Group depicts China, Russia, Iran and North Korea as a bloc using cyber criminal gangs to attack the national security of western countries.

By Brian McKenna, Enterprise Applications Editor
Published: 12 Feb 2025 0:01

Cyber crime has evolved to become a threat to the security of western states, according to a threat intelligence report from Google, published on the eve of the 2025 Munich Security Conference.

This coming weekend marks the 61st edition of the Atlanticist conference, which was inaugurated in 1963 to facilitate collaboration between West Germany and the US, as well as other Nato countries.

The Google Threat Intelligence Group's report, Cyber crime: A multifaceted national security threat, says western policymakers should be taking cyber criminality just as seriously as operations conducted by nation states.

Ben Read, a senior manager at the group, said: "The vast cyber criminal ecosystem has acted as an accelerant for state-sponsored hacking, providing malware, vulnerabilities, and in some cases full-spectrum operations to states. These capabilities can be cheaper and more deniable than those developed directly by a state. These threats have been looked at as distinct for too long, but the reality is that combating cyber crime will help defend against state-backed attacks."

The report looks at how nation states hostile to the North Atlantic countries, such as Russia, China, Iran and North Korea, are increasingly co-opting cyber criminal groups to further their geopolitical and economic ambitions. It also looks at the deep societal impact of cyber crime, from economic destabilisation to its toll on critical infrastructure, including healthcare.

Healthcare's share of posts on data leak sites has doubled over the past three years, according to the report.

One example it gives is how, in March 2024, the Russian Anonymous Marketplace (RAMP) forum actor "badbone", who has been associated with the INC ransomware gang, sought illicit access to Dutch and French medical, government and educational organisations, stating that they were willing to pay 2-5% more for hospitals, particularly those with emergency services.

The report sheds light on how what it calls the "Big Four" (Russia, China, Iran and North Korea) have used cyber crime, including ransomware, to enable espionage.

It states that Russia has mobilised its cyber criminals to spy and mount disruptive operations in support of the war in Ukraine. It says GRU-linked APT44 (aka Sandworm), a unit of Russian military intelligence, has employed malware available from cyber crime communities to conduct espionage and disruptive operations in Ukraine.

Another example the report gives is UNC2589, a threat cluster whose activity has been publicly attributed to the Russian General Staff Main Intelligence Directorate (GRU)'s 161st Specialist Training Center (Unit 29155). This, says the report, has conducted full-spectrum cyber operations, including destructive attacks, against Ukraine.

And Russian group CIGAR (aka RomCom), which has focused on cyber crime, has conducted espionage operations against the Ukrainian government since 2022, according to the report.

The report's authors say CIGAR's expansion from cyber crime into espionage activity, likely supporting Russian state objectives, began in October 2022, when it conducted a phishing campaign targeting Ukrainian military-related entities. CIGAR continued, says the report, to conduct intrusion activity targeting primarily Ukraine and Europe through 2023 and 2024, including campaigns leveraging zero-days in Microsoft Word, Firefox and Windows.

The report says China augments its spying operations by using advanced persistent threat groups such as APT41 to mix ransomware deployment with intelligence collection.
Deliberately mixing ransomware activities with espionage intrusions supports the Chinese government's public efforts to confound attribution by conflating cyber espionage activity and ransomware operations.

APT41 is said to work from China and is most likely a contractor for the Ministry of State Security. In addition to state-sponsored espionage campaigns against a wide array of industries, APT41 is said to have a long history of conducting financially motivated operations. The group's cyber crime activity has mostly focused on the video game sector, including ransomware deployment.

The report also suggests that Iran's economic difficulties could be behind ransomware and hack-and-leak operations by cyber criminals.

It highlights what it characterises as a North Korean regime policy of stealing cryptocurrency to fund missile development and nuclear programmes, as well as everyday operational costs.

It contends that the effects of cyber crime extend beyond stolen money or data breaches. "These erode public trust, destabilise essential services, and, in the most severe cases, cost lives," say the authors. They maintain that the growing convergence of cyber crime and state-sponsored hacking requires robust action on a par with the threat posed by nation-state adversaries.

The report's authors argue: "The collaborative nature of cyber crime means that a disrupted group will be quickly replaced by others offering the same service. Achieving broader success will require collaboration between countries and public and private sectors on systemic solutions such as increasing education and resilience efforts."

Sandra Joyce, vice-president of the Google Threat Intelligence Group, said: "Cyber crime has unquestionably become a critical national security threat to countries around the world. The marketplace at the centre of the cyber crime ecosystem has made every actor easily replaceable and the whole problem resilient to disruption. Unfortunately, many of our actions have amounted to temporary inconveniences for these criminals, but we can't treat this like a nuisance and we will have to work harder to make meaningful impacts."

The group advocates that governments elevate cyber crime to a national security priority and emulate private sector best security practices, noting that ransomware and other forms of cyber crime predominantly exploit insecure, often legacy technology architectures.

Read more about cyber crime and cyber warfare
What is cyber warfare?
Microsoft's Digital defense report 2024 notes that Russia outsourced some cyber espionage operations against Ukraine to otherwise independent cyber crime gangs.
Microsoft, OpenAI warn nation-state hackers are abusing large language models.
  • Fujitsu public sector boss says supplier has advantage in HMRC bid despite Post Office scandal
    www.computerweekly.com
Fujitsu is targeting a lucrative contract renewal with HMRC worth over £200m despite reports that the government department planned to replace the scandal-tainted IT supplier, leaked details of an internal meeting have revealed.

During the meeting, staff at the Japanese supplier's UK operation were told by the company's public sector lead that reports it was being replaced on the HMRC Trader Support Services (TSS) contract were inaccurate.

HMRC recently extended its TSS contract with Fujitsu for £67m. According to reports, the extension is for one year, until Fujitsu is replaced.

The TSS is a free support service for businesses moving goods between Great Britain and Northern Ireland. Fujitsu's head of UK public sector, Dave Riley, confidently told staff at an internal meeting that the supplier intended to bid for the TSS contract and would be bidding to win. There is a great deal of public anger at the government for continuing to award contracts to Fujitsu despite its involvement in the Post Office scandal.

In September 2024, HMRC announced a competitive procurement exercise for the next phase of TSS, which will run from 2026. According to sources, Fujitsu staff were told that reports the supplier was being replaced on the TSS project were inaccurate. The contract was worth £241m when it was originally signed in 2020, and Riley is said to have emphasised Fujitsu's advantage as the incumbent supplier.

The broadcast of ITV's dramatisation of the Post Office scandal in January 2024 drew attention to Fujitsu's role in the scandal, after which the supplier promised not to bid for new government contracts as an olive branch. However, it has controversially continued to win contracts worth hundreds of millions of pounds of taxpayer money.

Describing reports that HMRC was planning to replace Fujitsu on the contract, Riley told Fujitsu staff: "The quote in the article that HMRC allegedly made is a slight misquote. So, what they have said is they have promised to run a competitive tender to replace the current TSS contract, and Fujitsu have been invited to partake in that.

"Government are aware that we are going to bid for this work, and I think we have a unique point of view, given that we're the current incumbent, in how we approach the next generation."

Computer Weekly asked HMRC whether Fujitsu could bid for the next phase of the TSS contract, but it had not responded by the time this article was published.

Sources told Computer Weekly that HMRC is also due to sign off another contract with Fujitsu as part of an arrangement known internally as North Star. The contract, worth hundreds of millions of pounds with no competitive tender, includes hardware and cloud procurement.

When Computer Weekly asked HMRC for details of the North Star deal, a spokesperson at the government department said: "We follow government procurement rules when awarding contracts and, once contracts are approved, we publish details on Contracts Finder irrespective of the award route or supplier."

According to government figures on spending, which take into account all deals worth over £25,000, HMRC spent over £240m with Fujitsu in 2024. This could be much higher, and even exceed £500m in 2025, according to sources, with a potential TSS renewal, an extension to the Computer Environment for Self Assessment (CESA) service and the North Star deal.

Fujitsu had not responded to a request for comment about the North Star deal when this article was published.

Separately, hundreds of Fujitsu staff working on the HMRC contract went on strike at the end of last month in a dispute over pay.
Staff employed by HMRC, but doing similar jobs to Fujitsu colleagues working alongside them as part of an outsourcing deal, received a much larger pay rise, according to the union representing the Fujitsu employees.

In January last year, Fujitsu's head of Europe, Paul Patterson, promised to pause bidding for government work until after the completion of the statutory public inquiry into the Post Office scandal.

During questioning by MPs at a business and trade select committee hearing in January, Patterson acknowledged Fujitsu's part in the scandal, telling MPs and victims: "We were involved from the start; we did have bugs and errors in the system, and we did help the Post Office in their prosecutions of subpostmasters. For that, we are truly sorry."

But the bidding pause, described as "hollow" by former MP, now peer, Kevan Jones, did not include deals with existing customers in the public sector, of which there are many. Last March, Computer Weekly revealed leaked internal communications that showed Fujitsu was still targeting about £1.3bn worth of UK government contracts over 12 months. Further leaked documents revealed that Fujitsu instructed staff how to get around its self-imposed ban.

Read more: 2024 was a year Fujitsu would like to forget.
  • MPs demand bank bosses come clean over IT outages following Barclays crash
    www.computerweekly.com
Treasury committee wants banks to provide details of how IT failures have affected their businesses over the past two years.

By Karl Flinders, Chief reporter and senior editor EMEA
Published: 11 Feb 2025 16:30

MPs have written to bosses at the UK's biggest banks to shed light on the impact of IT failures on their businesses, after a Barclays outage caused chaos on payday last month.

The recent three-day outage at Barclays Bank has heightened concerns over the stability of the banking sector, and the Treasury Committee has written to bosses at nine banks and building societies requesting information about IT outages.

CEOs at Barclays, Santander, NatWest, Danske Bank UK, Nationwide Building Society, Allied Irish Bank, HSBC, Bank of Ireland and Lloyds Banking Group were asked for information on the scale and impact of IT failures over the past two years. They have until 26 February to respond.

Barclays suffered a three-day outage from 31 January to 2 February. This began on payday and clashed with HMRC's self-assessment deadline.

The committee asked Barclays 10 questions, while the other eight banks were asked the same four questions. The questions for Barclays included what caused the latest outage and how it affected customers, as well as how the bank intends to prevent such a failure happening again.

Banks don't like to talk about IT failures, and while Barclays was quick to rule out a cyber security issue, it would not give details about the cause.

The other eight bank bosses were asked to provide an overview of the number of instances and the total amount of time services have been unavailable to customers due to IT failure over the past two years, how many customers have been affected, the amount of compensation that has been paid to their customers, and a description of the reasons for the failures.
You can read the letters to the bank CEOs here.

"When a bank's IT system goes down, it can be a real problem for our constituents, who were relying on accessing certain services so they can buy food or pay bills," said Treasury Committee chair Meg Hillier MP. "For it to happen at a major bank such as Barclays at such a crucial time of year is either bad luck or bad planning. Either way, it's important to learn what has happened and what will be done about it."

She said the closure of high street branches in favour of online banking means bank crashes hit customers harder. "The rapidly declining number of high street bank branches makes the impact of IT outages even more painful; that's why I've decided to write to some of our biggest banks and building societies," said Hillier.

One source in the IT sector who has worked at Barclays in the past said: "It is quite a cautious firm, and it won't want to say anything that could have ramifications later if they say something wrong. I am sure they're going to have to report this to the regulators, and they might get called in front of government.

"The Barclays outage sounds like somebody has probably changed or tweaked something and that's caused the problem, which has obviously affected multiple systems," they added. "My guess would be it's something shared, which could be a software component, or it could be some infrastructure."
  • AI Action Summit: European AI investment ramps up
    www.computerweekly.com
A number of private companies and European governments have announced large-scale investments in artificial intelligence during the two-day AI Action Summit in Paris.

By Sebastian Klovig Skelton, Data & ethics editor
Published: 11 Feb 2025 14:45

European governments and private companies have committed around €200bn to artificial intelligence (AI)-related investments, including datacentres and gigafactories, over the course of the AI Action Summit in Paris.

It follows the inaugural AI Safety Summit hosted by the UK government at Bletchley Park in November 2023, and the AI Seoul Summit in South Korea in May 2024.

During the first day of the Paris summit, dozens of major corporations and startups, led by venture capital firm General Catalyst, launched the EU AI Champions Initiative, which will invest €150bn in European AI over the next five years.

It has already called for simplified AI regulation in Europe, greater investment in infrastructure, and a public campaign to improve people's understanding and trust of the technology.

"By seizing the moment, working with greater intention and embracing deep collaboration, Europe can seize a generational opportunity by leading in applied AI, integrating it into our industrial base to boost productivity, resilience and economic sovereignty," said Jeannette zu Fürstenberg, managing director and head of Europe at General Catalyst.

Backers of the initiative include Deutsche Bank, German defence startup Helsing, French AI developer Mistral, and Swedish music-streaming giant Spotify.

Speaking on the first day of the summit, European Commission president Ursula von der Leyen announced the bloc's own plans. "We aim to mobilise a total of €200bn for AI investments in Europe," she said. "This unique public-private partnership, akin to a Cern for AI, will enable all our scientists and companies, not just the biggest, to develop the most advanced, very large models needed to make Europe an AI continent."

Read more about AI investment
Saudi puts $15bn into AI as experts debate next steps: The kingdom's Leap 2025 tech show is the backdrop for huge investment, plus debate over the future of artificial intelligence as a productivity tool that could also potentially undermine human society.
UK government unveils AI-fuelled industrial strategy: Labour plans to implement the 50 recommendations set out by entrepreneur Matt Clifford to boost the use of AI in the UK.
Elon Musk distances himself from Trump's Stargate AI mission: Just a few days into the Donald Trump presidency, and there appears to be a disagreement brewing around funding of OpenAI and the Stargate Project.

In total, the combined investment from private companies and the EU makes this the largest public-private AI investment in the world.

In the run-up to the summit, French president Emmanuel Macron announced the country would attract €109bn worth of private investment in datacentres and AI projects in the coming years, including up to €50bn from the United Arab Emirates for a datacentre, and a further €20bn investment in AI infrastructure from Canada's Brookfield Corporation. Further investments are expected from French companies Iliad SA, Orange SA and Thales SA.

On the second day of the summit, the UK government announced its own investment in AI research for cancer and drug discovery, worth £82.6m.

The UK government also announced it would expand its involvement in the European High-Performance Computing (EuroHPC) Joint Undertaking by committing a further £7.8m to fund the participation of UK researchers and businesses.
It claimed the investment would enable British AI and computing researchers to work unobstructed with their European peers.

"The focus of this summit has been on how we can put AI to work in the public interest, and today's announcements are living proof of how the UK is leading that charge through our plan for change," said digital secretary Peter Kyle. "We've already set out a bold new blueprint for AI which will help to spark a decade of national renewal, and key to that plan is supporting our expert researchers and businesses with the support they need to drive forward their game-changing innovations.

"Today, we open new avenues for them to do exactly that, building bridges with our international partners so the entire global community can share in the boundless opportunities of AI-powered progress, and backing new innovative companies applying AI to tackle real-world challenges."

As part of its AI opportunities action plan, the UK government is also encouraging local authorities to put in bids for AI growth zones, which it has claimed will boost local and regional economic growth opportunities, particularly in deindustrialised areas of the country, through the construction of datacentres and other key infrastructure.
  • AI Action Summit: global leaders decry AI red tape
    www.computerweekly.com
    Key European politicians gathered at the AI Action Summit have committed to cutting red tape to ensure artificial intelligence (AI) is able to flourish throughout the continent, signalling closer alignment with the USs light-touch approach to regulation.The Paris Summit follows the inauguralAI Safety Summit hosted by the UK government at Bletchley Parkin November 2023, and the second AI Seoul Summit inSouth Korea in May 2024, both of which largely focused on risks associated with the technology and placed an emphasis on improving its safety through international scientific cooperation and research.However, there are concerns from some civil society groups and AI practitioners there has been a shift away from this focus on safety during the latest AI Summit, as politicians and industry figures are now seemingly prioritising speed and innovation over safety and regulation. US vice-president JD Vance, for example, told the summit on 11 February: Excessive regulation of the AI sector could kill a transformative industry ... 
we need international regulatory regimes that foster the creation of AI technology rather than strangle it, and we need our European friends in particular to look to this new frontier with optimism rather than trepidation.Vances comments follow US president Donald Trump revoking an Executive Order on 20 January 2025 signed by predecessor Joe Biden that required AI developers to share safety test results with the US government for systems that posed risks to national security, the economy or public health, which prompted concerns at the time about regulatory divergence between the US, Europe and China.Vance added that while a light-touch approach does not mean throwing all safety concerns out the window, focus matters, and we must focus now on the opportunity to catch lightning in a bottle.Adopting a more aligned, light-touch regulatory approach was also encouraged by industry figures, on the basis it would boost productivity and innovation.During a speech delivered on the first day of the AI Summit, Google CEO Sundar Pichai said it was important for different regulatory regimes to be aligned: AI cant flourish if there is a fragmented regulatory environment, with different rules across different countries and regions.He added that while history will look back on today as the beginning of a golden age of innovation, positive outcomes cannot be guaranteed: European competitiveness depends on productivity, so driving adoption is key The biggest risk could be missing out.Pichai also called for governments to invest more in AI innovation ecosystems, highlighting rapid adoption of the technology throughout France: How do we create more of these pockets in more places?Similar sentiments were shared by OpenAI CEO Sam Altman in an op-ed for Le Monde published ahead of the summit, who encouraged European politicians to focus on innovation over regualtion: If we want growth, jobs and progress, we must allow innovators to innovate, builders to build and developers to develop.He 
added: "In Europe, much of the conversation has focused on what former European Central Bank president Mario Draghi has called a European innovation gap with the United States and China that poses an existential challenge to the EU's future."

Both French president Emmanuel Macron and European Union (EU) digital chief Henna Virkkunen strongly indicated that the bloc would simplify its rules and implement them in a business-friendly way to help AI on the continent scale.

"It's very clear we have to resynchronise with the rest of the world," said Macron, adding that the French government will adopt a "Notre Dame strategy", referring to how the cathedral was rebuilt within five years of the 2019 fire: "We showed the rest of the world that when we commit to a clear timeline, we can deliver … The Notre-Dame approach will be adopted for datacentres, for authorisation to go to the market, for AI and attractiveness."

Virkkunen added: "I agree with industries on the fact that now, we also have to look at our rules, that we have too much overlapping regulation … We will cut red tape and the administrative burden from our industries."

Following the announcement that France is set to invest around €109bn in datacentres and AI-related projects over the next few years, Macron declared that "France is back in the AI race".

European Commission president Ursula von der Leyen, however, dismissed the idea that Europe had been left behind in any way: "The AI race is far from over. Truth is, we are only at the beginning. The frontier is constantly moving, and global leadership is still up for grabs," she said, adding that Europe's own distinctive approach should focus on collaborative, open source solutions.

She ended her speech by announcing an additional €200bn for EU AI investment, €20bn of which she indicated would be used on gigafactories to help train very large models: "We provide the infrastructure for large computational power."
"Researchers, entrepreneurs and investors will be able to join forces," she said.

However, she added: "At the same time, I know that we have to make it easier, and we have to cut red tape and we will."

While it is hoped that world leaders attending the summit will sign a joint, non-binding declaration, a draft of which highlights the importance of inclusive, sustainable approaches to AI, as well as the risks of market concentration around the technology, it has been reported that the US and UK are unlikely to sign.

Some are concerned that the rhetoric coming out of the summit indicates a worrying shift in the global AI landscape.

Kasia Borowska, managing director and co-founder of Brainpool AI, a global network of 500 AI and machine learning (ML) experts that builds custom AI tools for businesses, said that Vance's speech in particular means governments are prioritising innovation over regulation, adding that there are serious questions around the safety of AI's further development.

"If we rush to win the AI arms race without establishing robust control mechanisms for existing AI technologies, we will be ill-prepared to manage AGI [artificial general intelligence]," she said. "Regardless of who achieves AGI first, a race-to-the-top approach that prioritises speed over safety could lead to disastrous consequences for everyone. We must implement proper safeguards now, before we reach AGI, when it may be too late."

Chris Williams, a partner at global law firm Clyde & Co, added that while there remains enormous hype around what AI can actually achieve, the focus has clearly shifted away from balancing AI safety and innovation: "The safety-first narrative around AI, which was once prevalent among those now in government, has clearly given way to a focus on doing what is necessary to foster innovation, and a good example of this is the UK, which aims to become an AI superpower."
"No matter the jurisdiction, whether it be the UK or US, the need to create legislative safeguards is being viewed as a nice-to-have rather than an essential cornerstone of developing AI in a way that is safe, responsible and ethical," he said. "At this stage, the regulatory response might need to be more fluid and less prescriptive to avoid stifling innovation, but it would likely need to include a long-term view of gradually stepping up checks and balances as AI becomes more advanced."

Commenting on the draft declaration, Gaia Marcus, director of the Ada Lovelace Institute, said that governments must refocus on the technology's safety, which dominated the previous two international AI summits.

"Based on the initial draft, we are concerned that the scaffolding provided by the official summit declaration is not strong enough," she said, adding that while it highlights widespread consensus on key structural risks, such as AI market concentration and sustainability challenges, it fails to build on the mission of making AI safe and trustworthy, and the safety commitments of previous summits. "There are no tools to ensure tech companies are held accountable for harms. And there is a growing gap between public expectations of safety and government action to regulate."

"There will be no greater barrier to the transformative potential of AI than a failure in public confidence," she said.
"Like-minded countries that recognise the costs of unaddressed risks must find other forums to continue building the safety agenda," she added.

Read more about artificial intelligence:
Google drops pledge not to develop AI weapons: Google has dropped an ethical pledge to not develop artificial intelligence systems that can be used in weapons or surveillance systems.
Elon Musk capitalises on DeepSeek confusion to bid for OpenAI: The market disruption resulting from DeepSeek has reset artificial intelligence, and now Elon Musk and a consortium of investors want to grab OpenAI.
Government opens up bidding for AI growth zones: As part of its AI opportunities action plan, the government is encouraging local authorities to put in bids for AI growth zones.
  • New componentry extends NetApp ASA and E-series block storage
    www.computerweekly.com
One-time king of the filers adds anti-ransomware to its more recent block storage families, while also adding an extra FAS array, all on the back of upgraded components.
By Antony Adshead, Storage Editor. Published: 11 Feb 2025 13:00

NetApp has added new models to its all-flash ASA block storage family, inserted a new FAS hybrid flash filer (the FAS50), introduced new E-series HPC-oriented SANs (the EF300C and EF600C), and expanded ransomware detection and recovery guarantees to the block storage range.

The new ASA A20, A30 and A50 are largely the result of component upgrades in CPU, PCIe and memory. They occupy the entry level and mid-range of the ASA block storage family, complementing the existing A70, A90 and A1K while replacing the A250 and A400.

The A20 scales from 15TB (terabytes) to 734TB raw capacity, with 3.2PB (petabytes) possible with data reduction and up to 19PB possible when clustered. Those figures are 68TB to 1.8PB raw for the A50, with up to 48PB effective capacity in a cluster.

NetApp's ASA (All-SAN Array) series is its flash block storage offering in hardware form for on-premises deployments, with the ability to add cloud capacity. ASA is effectively NetApp's AFF all-flash array family with NAS functionality turned off.

All the new A-series arrays are 2U in form factor. "There's something of an overlap in terms of capacities between the A50 and the existing A70, but the latter comes in a 4U form factor to allow for greater connectivity options," said NetApp chief technologist Grant Caley. "The [existing] higher-spec arrays take cards that provide more I/O options. On the lower-spec modular systems, it is integrated."

According to NetApp, the A20 is 72% quicker than the existing A150, while the A30 and A50 come in at 109% and 171% more rapid than the A250 and A400 they supersede.
The A150 and A250 were introduced in May 2023, and had capacities of 0.5PB and 1.1PB respectively.

The upgrade to the ASA products comes a little after NetApp did similarly for its AFF all-flash, file storage-oriented products. That delay is just down to the time it takes, said Caley: "There are only so many engineers in the day. And AFF is the platform that's popular with customers."

Elsewhere, NetApp has made similar component upgrades to its E-series models, with the new EF300C and EF600C. The E-series arrived when NetApp bought Engenio in 2011. The arrays run the SANtricity OS and were spinning disk-only, with flash added later as NetApp adapted them for its first foray into flash storage in 2013.

They're still hybrid flash, which means they can also have HDDs, but the new models can now use high-density QLC flash drives in 30TB and 60TB capacities.

"I call it simple SAN," said Caley. "There are just snapshots and replication in a very high-density array with massive throughput for HPC-type use cases. And we're seeing something of an upsurge in HDD popularity as flash costs have risen, mostly for secondary and lower-performance workloads."

There is also a new entrant in the FAS range. This is the successor to the long-established NetApp filer range and is still hybrid flash. The new FAS50 slots in between the FAS2820 and the FAS70, and offers between 100TB and 10.6PB of raw capacity in a high-availability system and up to 127PB in a cluster.

Meanwhile, later this year, NetApp will release OnTap Autonomous Ransomware Protection with artificial intelligence (ARP/AI) for block storage. This includes a repackaging for block storage of NetApp's ransomware guarantees as the Ransomware Detection Program, in which the company promises to help the customer recover, initially free of charge, if its ransomware detection fails to spot an intrusion.

NetApp is confident it won't come to this, and points to its ransomware detection and recovery functionality.
This comes in the form of machine learning-based anomaly detection that spots unusual patterns involving data encryption, as well as anomalous user activity. As with previous iterations, when the AI spots anomalous activity, it triggers an immediate snapshot to which the customer can recover.

Finally, in BlueXP, NetApp's hybrid cloud storage management platform, customers will now also be able to simulate ransomware attacks and their recovery process.

Read more about NetApp storage:
NetApp boosts AFF, StorageGrid and E-series hardware with 60TB drives: Bigger drives, new CPUs and controller hardware add to upgrades to NetApp arrays that started in September, while the supplier undergoes a transition to data management as a key focus.
NetApp maintains push to data management for AI: From data storage to intelligent data infrastructure, that's the plan from NetApp, which has announced data curation for artificial intelligence as well as additions to its ASA and FAS storage arrays.
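The raw-versus-effective capacity figures quoted for the new ASA models imply a data-reduction ratio in the region of 4:1. The arithmetic can be sketched as follows; the figures are taken from the article, but the ratio is derived rather than a NetApp specification, and real-world reduction varies by workload:

```python
# Data-reduction ratio implied by NetApp's quoted capacity figures
# (illustrative only; derived from the article, not a vendor spec).

def implied_ratio(raw_tb: float, effective_tb: float) -> float:
    """Reduction ratio implied by a raw/effective capacity pair."""
    return effective_tb / raw_tb

# ASA A20: 734TB raw, 3.2PB (3,200TB) effective with data reduction.
ratio = implied_ratio(734, 3200)
print(f"Implied reduction ratio: {ratio:.1f}:1")  # ~4.4:1
```

The same calculation on the A50's figures (1.8PB raw per HA pair scaling to 48PB effective in a cluster) is not directly comparable, since clustering multiplies controllers as well as capacity.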
  • F1's Red Bull charges 1Password to protect its 2025 season
    www.computerweekly.com
Identity and access management (IAM) tech supplier 1Password has signed a multi-year deal to become exclusive cyber security partner at Oracle Red Bull Racing, its first foray into the IT-forward world of Formula 1 motor racing.

The tie-up will see Red Bull implement 1Password's Extended Access Management (XAM) services to strengthen its security posture and safeguard its data across devices, applications, 24 races and its Milton Keynes headquarters.

As is traditional, 1Password branding will also be incorporated into Red Bull's team assets amid a wider sponsor shake-up that has seen it part ways with crypto firm ByBit and a number of Mexico-based organisations linked to former driver Sergio Perez.

Its logos will appear on the halo driver protection devices, steering wheels and chassis of 2025's soon-to-be-unveiled Honda-powered RB21 car, driven by World Drivers' Champion Max Verstappen of the Netherlands and rookie partner Liam Lawson of New Zealand.

The supplier will also work alongside Red Bull's Pepe Jeans Academy Programme, which supports F1 Academy, a programme set up to address F1's lack of female talent and inspire girls to become future drivers, engineers and leaders. It will additionally be supporting 18-year-old Brit Alisha Palmowski, who will be driving with the Campos squad this season.

"Everyone at Oracle Red Bull Racing is excited to welcome 1Password to the Red Bull family as our exclusive cyber security partner," said Red Bull team principal and CEO Christian Horner. "As the start of the 2025 Formula 1 season approaches, it is critical that our entire organisation has secure, trusted access to critical information so we can continue to make confident, data-driven decisions trackside and back at the factory in Milton Keynes."
"1Password investing and partnering in the Red Bull Racing Pepe Jeans Academy Programme shows further proof of the spirit and depth of our collaboration. We look forward to pushing the limits and innovating with 1Password, including debuting a team-first steering wheel branding display that will feature when Max and Liam first hit the track," said Horner.

David Faugno, co-CEO of 1Password, added: "Partnering with a world champion like Oracle Red Bull Racing is an incredible opportunity. As a dominant force in Formula 1, their success relies on engineering excellence, innovation and seamless, secure access to critical information anywhere.

"At 1Password, we believe security should empower productivity and integrate effortlessly into the way people work. That's why we protect every sign-in, every point of access and every piece of critical information, so the team can stay focused on what they do best: winning."

The focus of the relationship is 1Password's XAM tool, which is designed to ensure trusted access across end-user organisations to safeguard critical data.

In Red Bull's case, beyond protecting its drivers and team staffers and their devices as they travel around the world, 1Password will centre on secured access to applications, helping gain visibility and a centralised view of the team's app estate, including unsanctioned, bring-your-own tools, with universal sign-on to allow authorised users to access managed and out-of-scope software.
This will help the team safeguard its proprietary innovation by ensuring that only trusted and authorised users can access relevant data from secure devices.

On the software development side, 1Password will help Red Bull automate secrets management and detect unencrypted SSH keys, while integrating fully into its existing software development workflows.

More broadly, the partners say the deal will help improve efficiency and productivity across Red Bull's widely distributed global teams, providing them with secure access to tools and systems in a hybridised environment.

The 2025 Formula 1 season begins in Melbourne, Australia, on the weekend of 14 to 16 March 2025, with pre-season testing taking place in Sakhir, Bahrain, at the end of February.

Read more about IT in Formula 1:
Learn how the technical teams behind Formula One are using Salesforce's tools to enhance fan activation and engagement at 24 races across the world, and how they are bringing AI into play with Agentforce capabilities.
A multi-region campaign will teach pre-teen children cyber security basics with a little help from Formula 1 star Alex Albon.
We speak to Formula One's lead cloud architect, Ryan Kirk, and AWS about a partnership that strives to deliver data for greater fan engagement.
  • Saudi puts $15bn into AI as experts debate next steps
    www.computerweekly.com
The kingdom's Leap 2025 tech show is the backdrop for huge investment, plus debate over the future of artificial intelligence as a productivity tool, but one which could also potentially undermine human society.
By Antony Adshead, Storage Editor. Published: 11 Feb 2025 10:37

The past two years of artificial intelligence (AI) have been like science fiction, but now it needs to move to the next stage, namely agentic AI that can work for us while being completely unobtrusive. Meanwhile, AI threatens to undermine our autonomy, even providing the ability to clone our identities and personalities.

Those were some of the views put forward at this week's Leap 2025 tech show in Riyadh, Saudi Arabia, at which the kingdom announced almost $15bn worth of planned investment in AI.

The announcement came in the context of Saudi Arabia's multi-year plan, Vision 2030, which seeks to diversify the country away from its historic heavy dependence on oil production revenues.

Saudi Arabia is placing big bets on IT, datacentre capability and AI in particular, with the latter a massive focus of the show and an agenda packed with discussions around applications of AI and its next steps, particularly agentic AI.

Projects announced in AI included a $1.5bn agreement between AI infrastructure provider Groq and Aramco Digital, a subsidiary of long-established state oil company Aramco, to expand AI-powered inference infrastructure and cloud computing.
Also, Saudi state-owned manufacturing conglomerate ALAT and Lenovo committed $2bn to establish an advanced manufacturing and technology centre, taking in semiconductors, that integrates AI and robotics.

Others included Google's new AI-driven digital infrastructure and the launch of a computing cluster, and Qualcomm announcing availability of its ALLAM Arabic large language model (LLM) on Qualcomm AI Cloud.

The kingdom also revealed that $42.4bn had been invested in technology-related infrastructure since 2022. These investments included Databricks investing $300m in integrated platform as a service (PaaS) for developers in AI tools; SambaNova committing $140m to build advanced AI infrastructure; Salesforce investing $500m to develop Hyperforce and enhance cloud capabilities for regional customers; and Tencent Cloud allocating $150m to establish the Middle East's first AI-powered cloud region.

With a couple of years of generative AI behind us, most notably in the form of ChatGPT, a big theme at the event was the next stages in AI development and discussion of its longer-term effects on humanity.

On the former, agentic AI is touted as one of the next big things. Here, the challenge is to move beyond AI as something we seek out and use, to something that works as a barely noticed adjunct to human activities and operates on our behalf.

That was a view expounded by Yaser Al-Onaizan, CEO of the National Center for AI at the Saudi Data and AI Authority (SDAIA).

"The promise of AI is that it will be in everything that we do and we touch every day," said Al-Onaizan. "It needs to be invisible. It cannot be in your face; it should be listening to you, understanding you and doing things based on your opinion."

Al-Onaizan said the new generation of models is about doing something on your behalf: "For example, instead of just giving you information about flights, it can go on and reserve flights, or make a hotel reservation for you."

But some speakers warned of the threats posed by AI.
Among these was Lambert Hogenhout, chief of data, analytics and emerging technologies at the United Nations. He said AI brings huge potential for vastly multiplied productivity and for large numbers of people to work in better ways, but warned that it also brings threats to human autonomy, via fraud and the undermining of identity, purpose and connection to society.

He said: "We want to make sure AI increases living connections, that we are not eliminated. That it makes a good society. A society where a number of people are excluded is not going to work. It will create problems."

Elsewhere, attendees focused on how to gain business value from AI. They included Aidan Gomez, CEO of Canadian company Cohere, which specialises in the use of LLMs in enterprises.

He said: "A generative model is kind of like a CPU; it's a general piece of technology. You could deploy it inside any vertical for any purpose, like a CPU. But, in and of itself, just owning a CPU isn't valuable. It's what you build with it that is valuable.

"So, for that piece, you do have to be technical. You need to be a developer, to be able to build something on top of this model to create value on the other side."

Read more about IT in Saudi Arabia:
Saudi Arabia calls for humanitarian AI after tightening screws on rights protesters: Oppressive state wants global digital identity system at the heart of all AI, to make it trustworthy and prevent it being used for unauthorised surveillance.
CW Middle East: Can Saudi Arabia build the Silicon Valley of the Middle East? Also, read how demand for skilled IT professionals is increasing rapidly in the Middle East, which is facing a talent crunch.
Meanwhile, Gulf Cooperation Council smart city initiatives are gathering momentum.
  • Elon Musk capitalises on DeepSeek confusion to bid for OpenAI
    www.computerweekly.com
The market disruption resulting from DeepSeek has reset artificial intelligence, and now Elon Musk and a consortium of investors want to grab OpenAI.
By Cliff Saran, Managing Editor. Published: 11 Feb 2025 12:00

After being fired by the company he co-founded, hired by Microsoft and then returning as OpenAI chief, Sam Altman is again facing a crisis. His ongoing stand-off with the world's richest man, Elon Musk, has taken another turn after Musk and a group of like-minded investors announced they would be putting in a bid of $97bn to acquire OpenAI.

Responding to the bid, Altman tweeted: "No thank you, but we will buy Twitter for $9.74bn if you want."

Last year, Musk filed a complaint in San Francisco Superior Court alleging that Altman and OpenAI president Greg Brockman breached the founding agreement underpinning the creation of OpenAI.

The company was originally founded as a not-for-profit organisation, but to purchase the compute capacity it needed, it reorganised to create a for-profit business.

The company received $10bn of support from Microsoft, which included access to the Microsoft cloud for running OpenAI large language models (LLMs) such as ChatGPT.

In January, the Trump administration announced Project Stargate. Supported by tech giants including Oracle, it is run as a new company that intends to invest $500bn over the next four years in building AI infrastructure for OpenAI in the US.

In a blog posted on Monday covering how humanity will need to adapt to the era of artificial general intelligence, where machines are able to tackle cognitive tasks equivalent to humans, Altman affirmed his commitment to the Microsoft partnership, writing: "We do not intend to alter or interpret the definitions and processes that define our relationship with Microsoft."
"We fully expect to be partnered with Microsoft for the long term."

Read more LLM stories:
DeepSeek explained: Everything you need to know: DeepSeek, a Chinese AI firm, is disrupting the industry with its low-cost, open source large language models, challenging US tech giants.
OpenAI o3 explained: Everything you need to know: OpenAI o3 is the successor to the o1 reasoning model, and the second release from OpenAI's reasoning model branch.

However, OpenAI's position as the leader in LLM development has been put in jeopardy following the release of China's DeepSeek AI models, which massively undercut what OpenAI charges.

While lawmakers are trying to curb DeepSeek, due to user data potentially being shared with China, the DeepSeek models are open source, which means they can run privately and on any public cloud infrastructure. In fact, Amazon Web Services, Azure and Google Cloud Platform all offer the DeepSeek R1 model.

Costs vary depending on the graphics processing unit required, but the cheapest way to run the LLM is via application programming interfaces (APIs) that connect directly to DeepSeek's own cloud version of its LLM.

While OpenAI charges $2.50 per million input tokens for its GPT-4o model, directly connecting to DeepSeek through an API is priced at $0.14 per million input tokens in situations where the AI engine is able to draw on previously cached information; non-cached inputs are priced at $0.55 per million tokens.

Arguably, the existence of a model that can be run far more cheaply than OpenAI's may have some investors spooked. Certainly, the stock market and Nvidia's share price dropped sharply after DeepSeek's announcement. The fact that Musk has put in a bid for OpenAI just weeks after the availability of the new DeepSeek LLM may well be the tech billionaire's attempt to capitalise on the market confusion and potential readjustment, as people begin to understand there is more than one way to do AI.

There are questions over Musk's xAI business and its Grok LLM.
The company recently closed a Series C funding round of $6bn, but now Musk and a band of investors are looking to acquire OpenAI. Clearly, should Musk be successful in his bid, he would be at the centre of the US AI strategy and the $500bn Stargate initiative.
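The per-token price gap quoted in the article can be made concrete with a little arithmetic. The sketch below uses the list prices cited above; prices change frequently, so treat the figures as illustrative rather than current:

```python
# Illustrative input-token cost comparison, using the list prices
# quoted in the article (all subject to change by the vendors).
PRICES_PER_M_TOKENS = {
    "openai_gpt4o": 2.50,       # $ per million input tokens
    "deepseek_cached": 0.14,    # cache hit on DeepSeek's API
    "deepseek_uncached": 0.55,  # cache miss
}

def cost_usd(model: str, input_tokens: int) -> float:
    """Input-token cost in USD for a given token count."""
    return PRICES_PER_M_TOKENS[model] * input_tokens / 1_000_000

# A hypothetical 10-million-input-token monthly workload:
for model in PRICES_PER_M_TOKENS:
    print(f"{model}: ${cost_usd(model, 10_000_000):.2f}")
# openai_gpt4o: $25.00
# deepseek_cached: $1.40
# deepseek_uncached: $5.50
```

Even in the worst (uncached) case, the quoted DeepSeek rate works out at roughly a fifth of the GPT-4o input price, which is the cost differential driving the market reaction the article describes.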
  • Technology is changing and so should the civil service
    www.computerweekly.com
The Prime Minister's call for the complete rewiring of the British state has put the onus on the civil service to match the demands placed upon it by rapid technological advances, most notably the rise of generative artificial intelligence (AI).

The question is not if or when AI will change how policy is made, but how policy makers can use it to improve outcomes for citizens. The impact will be extensive but not total. There are some parts of the policy making process where, for now, the role of the policy maker is relatively unaffected, such as officials using their judgement to navigate the competing interests and idiosyncrasies of Whitehall to get things done.

But in other areas, the effect will be more apparent and immediate. Tools like Redbox can dramatically reduce the time it takes for a minister to learn about a new topic: as well as commissioning an official, they can ask a large language model (LLM). This challenges the traditional ways officials manage the flow of information to ministers.

LLMs will also change the intellectual process by which policy is constructed. In particular, they are increasingly useful, and so increasingly being used, to synthesise existing evidence and suggest a policy intervention to achieve a goal.

Policy work across Whitehall is already being usefully augmented by LLMs, the most common form of generative AI. The tools available include:
Redbox, which can summarise the policy recommendations in submissions and other policy documents, and has more than 1,000 users across the Cabinet Office and the Department for Science, Innovation and Technology.
Consult, which the government says summarises and groups responses to public consultations a thousand times faster than human analysts.
Similar tools are used by governments abroad, for example in Singapore.

A live demonstration of Redbox at the 2024 civil service Policy Festival showed it analysing a document outlining problems with the operation of the National Grid and summarising ideas from an Ofgem report on how to improve it.

While LLMs are advancing quickly and some of their current shortcomings might only be temporary, there remain limits to what they can do.

They can synthesise a wide range of sophisticated information, but their subsequent output can be wrong, occasionally wildly so, a failure known as hallucination. LLM outputs might also contain biases for which officials need to correct, including unfair assumptions about certain demographic groups.

Because LLMs are trained on available written information, their outputs can lack the nuance and context human experience can provide. Designing new policy to increase, say, the efficiency with which hospitals are run requires advanced knowledge about healthcare policy, of the sort LLMs are increasingly capable of summarising.

But it also requires insider insight into the way hospitals actually work: vital context like which parts of the system are currently being gamed and how, and an understanding of how doctors, nurses and administrative staff will respond to any changes.

LLMs also tend to provide standard answers, struggling to capture information at the cutting edge of a field and to provide novel ideas. Unless stretched by the user, they are unlikely to suggest more radical answers, and this has consequences, particularly in fast-moving areas of policy. Ironically, AI policy is one such area.

Finally, over-credulously incorporating LLM outputs into the policy making process can be dangerous. Evidence, whether scientific, social or other, rarely points in one direction, and an LLM summarising evidence might implicitly elevate some political principles over others.
If done badly, a policy maker incorporating that output into advice to a minister risks building assumptions into their recommendations which run contrary to that minister's political views.

These are all good reasons for caution. But the potential benefits of using LLMs are large. In an AI-augmented policy making process, the policy maker's key role will be to introduce the knowledge that an LLM cannot.

Policy makers' added value will likely manifest in two main ways. The first is in using their expertise to edit and shape LLM first drafts, including checking for and correcting hallucinations and untoward biases. This is not that dissimilar to what the best policy makers currently do; humans, too, get things wrong or expose biases through their work.

The second is by layering policy makers' ideas on top of LLM outputs, sometimes being prepared to push them in a more radical direction. This could involve an interactive process, in which an LLM is asked to provide feedback on ideas produced by a policy maker. The time freed up by using LLMs to perform traditionally time-intensive tasks could give policy makers the opportunity to gather and deploy new types of information which can help craft better policy.

Particularly important will be the kind of hyper-specific or real-time insider insights which LLMs struggle to capture, and which could be acquired in new and creative ways: spending time immersed on the frontline, building a professional network which can give real-time reactions to new developments, or something different entirely.

However, integrating LLMs into government might make it harder for policy makers to acquire important skills. If domain expertise and insider insights are the things for which policy makers are increasingly valued, they must possess the commensurate skills.

But this presents something of a paradox: LLM adoption might not only make domain expertise even more important to possess, but also harder to acquire.
It is precisely the activities that LLMs are so efficient at performing - gathering and synthesising existing evidence, and using it as the basis for policy solutions - that policy makers have tended to use to acquire their first building blocks of expertise.

This also has consequences for policy makers' ability to gather insider insights. It is all very well freeing up time for policy makers to collect information in new ways, but if they do not have a baseline level of expertise they will find it hard to know where to look for it and how to interpret it.

This leaves the civil service with two options. The first is to preserve some basic tasks for more junior officials so they can build the domain expertise needed to intelligently use LLMs. The second is to reinvent the way policy makers acquire expertise, reducing reliance on the now AI-augmented traditional methods. For example, the type of official who is currently a junior policy maker could instead be deployed to the frontline, giving them personal experience of the operation of the state which they can use in a more conventional policy role in Whitehall once they get more senior.

Perhaps the best approach would be for the civil service to start by ringfencing tasks, but actively commission test-and-learn projects to explore more imaginative approaches, and scale those where they work. This could take place alongside implementing more traditional solutions. For example, the civil service has a problem with excess turnover, and officials who move between policy areas less frequently would find it easier to develop expertise.

Policy making is among the most important and hardest jobs the civil service does, and improving how it is done is a substantial prize.
A policy making process which blends human expertise with LLMs will not just be more efficient, but more insightful and connected to citizens' concerns. Channelling the adoption of LLMs in the most productive way possible, maximising the benefits while mitigating the risks, is crucial for the civil service to get right. Just letting change happen should not be an option - it must be proactively shaped.

Jordan Urban is a senior researcher at the Institute for Government.

Read more about AI in government

The UK government's AI plan covers all the bases but needs a dose of pragmatism - With the launch of its AI Opportunities Action Plan, few people can complain that the UK government is not taking the potential of artificial intelligence seriously.

Can UK government achieve ambition to become AI powerhouse? - The artificial intelligence opportunities action plan has been largely well received, but there are plenty of questions about how it will be achieved.

Major obstacles facing Labour's AI opportunity action plan - Skills, data held in legacy tech and a lack of leadership are among the areas discussed during a recent Public Accounts Committee session.
  • A two-horse race? Competition concerns cloud AWS and Microsoft
    www.computerweekly.com
    CW+ Premium Content / Computer Weekly - 11 February 2025

    A two-horse race? Competition concerns cloud AWS and Microsoft

    In this week's Computer Weekly, Microsoft and AWS don't like it, but the UK competition watchdog says their hold on the cloud market is cause for concern. We talk to AutoTrader's CEO about how to become a digital business. And we go behind the scenes at Zoom to see how AI will revolutionise the former lockdown success story. Read the issue now.

    Features in this issue:

    Digging into the CMA's provisional take on AWS and Microsoft's hold on UK cloud market, by Caroline Donnelly - Amazon Web Services and Microsoft have not taken kindly to the Competition and Markets Authority's suggestion that their dominant hold on the UK cloud market requires a targeted intervention.

    Interview: Digital tech fuels AutoTrader's drive into the future, by Karl Flinders - Led by a technology enthusiast, AutoTrader is on a digital journey that began when it decided to take a different route in 2007.
  • Apple: British techies to advise on devastating UK global crypto power grab
    www.computerweekly.com
    An obscure British government committee is to be asked this month to advise Home Secretary Yvette Cooper whether to go ahead with government demands that Apple provide British agents with a secret backdoor into the company's iCloud Advanced Data Protection system, enabling British spies to secretly copy and read users' private data.

The government committee, called the Technical Advisory Board (TAB), is charged with reviewing secret legal orders given to internet communications companies to arrange surveillance of their users, and to copy their emails and files, or to monitor their calls and videos. Enquiries by Computer Weekly this week revealed, astonishingly, that the Home Office had failed to renew the contracts for TAB members.

According to a leak to the Washington Post, previously reported here, the Home Office issued a Technical Capability Notice to Apple in January, ordering them to remove electronic protection to allow access to data that is otherwise unavailable due to encryption. The company has 28 days to ask the Home Secretary to review the order. After getting a review request, Cooper is legally obliged to ask the advisory board to consider the financial consequences for Apple if they comply.

Requiring them to destroy the integrity and security of their safest worldwide data storage system would be "devastating for the UK's reputation as a centre for secure digital innovation", according to EU and security consultant Professor Ian Brown. It would also be "breathtakingly naive and dangerous", after the recent revelations of China using similar backdoors in the US telecoms system to run rampant through Americans' calls and phone data.

The UK's Technical Advisory Board is legally supposed to represent the interests of "persons on whom obligations may be imposed". But Apple is not and has never been represented on the TAB.
Nor are Google or Meta or any other US and European companies offering similar capabilities to Apple, and who could be threatened with similar secret orders.

Apple "would never build a backdoor", the company said in a 2024 statement. If faced with legal force, the company warned, they would publicly withdraw critical security features from the UK market, depriving UK users of these protections. This Apple statement was published in opposition to multiple changes to Britain's 2016 Investigatory Powers Act (IPA) then being considered in the UK Parliament. Industry regulation specialists expect that, if the Home Office persists, Apple would have to withdraw from the UK. The consequence for the UK government's growth policies could be immense.

Throwing down its challenge last year, Apple told Parliament that the laws Britain wanted would effectively empower the Home Office "to become the global regulator for every technology company around the world with a single affiliate (whether located in the United Kingdom or not) that provides telecommunications services in the United Kingdom. There is no reason why the UK should have the authority to decide for citizens of the world whether they can avail themselves of the proven security benefits that flow from end-to-end encryption."

Querying whether the British government had any actual power to control U.S. companies, the memo noted that the IPA "purports to apply extraterritorially, permitting the Home Office to assert that it may impose secret requirements on providers located in other countries and that apply to their users globally" (emphasis added). Apple's conduct to date has flouted the UK's claims to have legal rights to impose secrecy overseas.
According to the government website, the Technical Advisory Board has an independent chair, two other independent named members, six industry representatives and an unknown number of civil servants and intelligence agency employees from organisations such as GCHQ and the National Crime Agency.

The independent chair of the board is Jonathan W Hoyle, a former civil servant and deputy director of the GCHQ signals intelligence agency. At the same time as taking up repeated contracts as the chair of TAB since 2015, Mr Hoyle moved from GCHQ to become European vice-president of Lockheed Martin, the major supplier of signals intelligence and surveillance equipment to the British and American governments. A second independent member of TAB, Mr Alan Burnett, has been product manager for the same period at Roke Manor Research Ltd, another major British supplier of signals intelligence and surveillance equipment to GCHQ. In 2011, Mr Burnett and Roke Manor boasted of being the first to build Aquila - "the most advanced lawful intercept and cyber probe", working at 100 GHz and enabling GCHQ and other intelligence agencies to inspect "100 per cent of content 100 per cent of the time".

Six industry representatives are also listed, none of whom appear to have training or experience that would assist them to advise the Home Secretary on financial consequences. Four represent British communications providers (Sky, Vodafone, VirginMediaO2 and the GSM Association).

Enquiries by Computer Weekly have revealed that the Home Office has not been paying close attention to supporting or managing the Board membership. According to a 2022 government press release, the contract for the chair expired in August, and the contracts for all but two listed TAB members expired last month. Asked if the contracts had recently been renewed by the Home Secretary, a press officer initially claimed that TAB was a non-departmental government committee.
She then referred our enquiry to a Home Office email address for the Board, listed on the government website. The Home Office position then changed after two members of TAB told Computer Weekly that they were not aware that their contracts had expired. TAB member Neil Brown of Decoded Legal called the Home Office and was told that his contract was to be renewed for a further term. "I am grateful to you for pointing that out," he added. Mr Brown further said that he was not able to comment on whether TAB had seen the draft Technical Capability Notice to Apple, nor if the Home Office had yet officially asked TAB to conduct a review.

Any British backdoor imposed on Apple users would have to subvert and defeat Apple's complex security systems. These were upgraded in December 2022, the company's security manual explains. When the user turns on Advanced Data Protection, their trusted device initiates the removal of service keys from Apple data centres. "This deletion is immediate, permanent and irrevocable. After the keys are deleted, Apple can no longer access any of the data protected by the user's service keys." User data is then protected with the new key, which is controlled solely by the user's trusted devices, and is "never available to Apple".

To work, the UK Technical Capability Notice will have to explain how Apple could create a way for Britain to steal targeted users' keys from selected Apple devices on demand. The methods normally used attack so-called end points (individual or many devices) rather than weaken the encryption system itself, as is sometimes supposed.

If US lawmakers now require that Apple reveal the specific demands the UK wants to make of the corporation, it will be possible for US technical experts to see if any realistic or possible method is explained. Or they may confirm that the Home Office has been promoting magical and impossible thinking, as most cybersecurity experts have warned repeatedly for over 30 years.
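The property Apple's manual describes - the server holds only ciphertext, while the decryption keys live solely on the user's trusted devices - can be sketched in a few lines. This is a toy illustration of client-side key control (a one-time pad, not Apple's actual hardware-backed AES scheme, and not suitable as real cryptography): once the client alone holds the key, nothing the server stores or hands over can reveal the plaintext.

```python
import secrets

# Toy end-to-end model: the client generates the key and encrypts locally;
# the "server" only ever receives ciphertext. Deleting server-side keys,
# as Advanced Data Protection does, leaves the provider in this position:
# it can store and return data, but cannot read it.

def client_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    # One-time-pad keystream, generated and kept on the client only.
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def client_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    # XOR with the same keystream recovers the plaintext.
    return bytes(c ^ k for c, k in zip(ciphertext, key))

# The server stores only `uploaded`; without `key` it is opaque noise.
key, uploaded = client_encrypt(b"private photo bytes")
assert client_decrypt(key, uploaded) == b"private photo bytes"
```

This is also why, as the article notes, realistic attacks target the end points rather than the mathematics: the only place the key exists is on the user's device.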
"There is no realistic way to leave a door open for good guys and democracies that have rigorous checks and balances but not for cybercriminals or authoritarian states," according to Cameron Kerry of the Brookings Institution. "No amount of magical thinking can undo the contradiction between promoting strong encryption as a defense against the barrage of identity theft, espionage, and other cybercrimes while opening up new vulnerabilities," Kerry added. "Backdoors undermine not only security, but also the competitive position of US companies."

Were their wishes to be granted, the Home Office would have to go through many further stages of getting specific legal and technical approval to obtain crypto keys, either against individuals (targeted warrants), against large numbers of Apple users (bulk warrants), or against specific groups or classes (thematic warrants). They would have to serve equipment interference warrants to enable necessary updates and tampered apps to be sent to targeted Apple devices located in the UK. Such updates and apps would be official malware. This would mean that to follow Home Office wishes, regarded by academic and industry experts as fantasy, Apple would also have to disable its own security and malware protections on target devices, while also preventing users from noticing that their shields were down.

The Home Office is not permitted to go ahead until both the TAB has reported back and a Judicial Commissioner has re-approved the Notice.

Even if some TAB members now warn the Home Secretary not to proceed, they may be ignored. The government's only possible next step then will be a court case in London against Apple - which would be impossible to keep secret, as Apple has made clear. If a case is brought, a judge could impose a fine, or be asked to apply an injunction with, perhaps, a large and growing penalty for non-compliance.
But Apple could, and likely would, appeal repeatedly in British courts, including the Investigatory Powers Tribunal, and to the European Court of Human Rights. As the possible legal actions in British law are against corporate persons (Apple Inc and any named subsidiaries), no one could be arrested unless the British government attempted, and kept secret, a further series of nightmare proceedings against decision takers in the United States to bring them to trial, if need be by extradition. If the British government instead asks for large financial penalties, it might be found in breach of trade agreements by international bodies. The Home Office were warned in 2015 by Apple and others that the purported extraterritorial application of the Act was unenforceable. Were they to seek to deport and jail Apple CEO Tim Cook for disobedience to a secret British order, they would face further and very public derision.

The Technical Advisory Board will be aware of, and should have to consider, the recent revelation that Chinese government hackers, known as Salt Typhoon, were able to get into and exploit US law enforcement access backdoors into telephone and communications providers to spy on US citizens and agencies.

In the last resort, Apple have said they would withdraw the security of ADP from UK users. If still faced with absurdly large financial penalties, they could withdraw entirely rather than pay or face seizure.

Apple Inc and the Home Office have to date both declined to comment officially or attributably on the Notice.

The Home Office appears to be faced with a fiasco of its own making. According to Eric Kind, an expert in surveillance technology and privacy rights, who was hired by the Investigatory Powers Commissioner's Office to help set up the new law in 2016, "the way this stops is the way it always has beforehand - which is that government decide to drop it for fear of too much spilling into the public during the court battle".