AI News
AI News provides news, analysis, and opinion on the latest artificial intelligence breakthroughs.
Recent Updates
  • Study claims OpenAI trains AI models on copyrighted data
    www.artificialintelligence-news.com
    A new study from the AI Disclosures Project has raised questions about the data OpenAI uses to train its large language models (LLMs). The research indicates that OpenAI's GPT-4o model demonstrates strong recognition of paywalled and copyrighted content from O'Reilly Media books.

    The AI Disclosures Project, led by technologist Tim O'Reilly and economist Ilan Strauss, aims to address the potentially harmful societal impacts of AI's commercialisation by advocating for improved corporate and technological transparency. The project's working paper highlights the lack of disclosure in AI, drawing parallels with financial disclosure standards and their role in fostering robust securities markets.

    The study used a legally-obtained dataset of 34 copyrighted O'Reilly Media books to investigate whether LLMs from OpenAI were trained on copyrighted data without consent. The researchers applied the DE-COP membership inference attack method to determine whether the models could differentiate between human-authored O'Reilly texts and paraphrased LLM versions.

    Key findings from the report include:

    - GPT-4o shows strong recognition of paywalled O'Reilly book content, with an AUROC score of 82%. In contrast, OpenAI's earlier model, GPT-3.5 Turbo, does not show the same level of recognition (AUROC score just above 50%).
    - GPT-4o exhibits stronger recognition of non-public O'Reilly book content than of publicly accessible samples (82% vs 64% AUROC, respectively).
    - GPT-3.5 Turbo shows greater relative recognition of publicly accessible O'Reilly book samples than of non-public ones (64% vs 54% AUROC).
    - GPT-4o Mini, a smaller model, showed no knowledge of public or non-public O'Reilly Media content when tested (AUROC approximately 50%).

    The researchers suggest that access violations may have occurred via the LibGen database, as all of the O'Reilly books tested were found there. They also acknowledge that newer LLMs have an improved ability to distinguish between human-authored and machine-generated language, which does not reduce the method's ability to classify data.

    The study highlights the potential for temporal bias in the results, due to language changes over time. To account for this, the researchers tested two models (GPT-4o and GPT-4o Mini) trained on data from the same period.

    The report notes that while the evidence is specific to OpenAI and O'Reilly Media books, it likely reflects a systemic issue around the use of copyrighted data. It argues that uncompensated use of training data could lead to a decline in the internet's content quality and diversity as revenue streams for professional content creation diminish.

    The AI Disclosures Project emphasises the need for stronger accountability in AI companies' model pre-training processes. It suggests that liability provisions that incentivise improved corporate transparency in disclosing data provenance may be an important step towards facilitating commercial markets for training data licensing and remuneration.

    The EU AI Act's disclosure requirements could help trigger a positive disclosure-standards cycle if properly specified and enforced. Ensuring that IP holders know when their work has been used in model training is seen as a crucial step towards establishing AI markets for content creator data.

    Despite evidence that AI companies may be obtaining data illegally for model training, a market is emerging in which AI model developers pay for content through licensing deals. Companies like Defined.ai facilitate the purchasing of training data, obtaining consent from data providers and stripping out personally identifiable information.

    The report concludes that, using 34 proprietary O'Reilly Media books, the study provides empirical evidence that OpenAI likely trained GPT-4o on non-public, copyrighted data.

    (Image by Sergei Tokmakov)

    Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.
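To make the AUROC figures above concrete, here is a minimal sketch (not the DE-COP implementation itself) of how an AUROC score summarises a membership inference test: it is the probability that a randomly chosen "member" text (one seen in training) receives a higher recognition score than a randomly chosen "non-member" text. The scores below are hypothetical values invented for illustration.

```python
# AUROC via its rank interpretation: probability that a random member text
# outscores a random non-member text (ties count as half a win).
# 0.5 = chance level; 1.0 = perfect separation.

def auroc(member_scores, nonmember_scores):
    wins = 0.0
    for m in member_scores:
        for n in nonmember_scores:
            if m > n:
                wins += 1.0
            elif m == n:
                wins += 0.5
    return wins / (len(member_scores) * len(nonmember_scores))

# Hypothetical recognition scores, e.g. how often a model prefers the
# verbatim passage over paraphrases in a multiple-choice probe.
members = [0.9, 0.8, 0.85, 0.6]      # passages from paywalled books
nonmembers = [0.4, 0.5, 0.55, 0.7]   # passages the model never saw

print(auroc(members, nonmembers))    # → 0.9375
```

On this reading, GPT-4o's reported 82% AUROC on paywalled O'Reilly content means its scores rank member passages above non-member ones 82% of the time, well clear of the ~50% chance level reported for GPT-3.5 Turbo.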
  • Tony Blair Institute AI copyright report sparks backlash
    www.artificialintelligence-news.com
    The Tony Blair Institute (TBI) has released a report calling for the UK to lead in navigating the complex intersection of arts and AI.

    According to the report, titled "Rebooting Copyright: How the UK Can Be a Global Leader in the Arts and AI", the global race for cultural and technological leadership is still up for grabs, and the UK has a golden opportunity to take the lead.

    The report emphasises that countries that embrace change and harness the power of artificial intelligence in creative ways will set the technical, aesthetic, and regulatory standards for others to follow. Highlighting that we are in the midst of another revolution in media and communication, it notes that AI is disrupting how textual, visual, and audio content is created, distributed, and experienced, much like the printing press, gramophone, and camera did before it.

    AI will usher in a new era of interactive and bespoke works, as well as a counter-revolution that celebrates everything AI can never be, the report states. However, far from signalling the end of human creativity, the TBI suggests AI will open up new ways of being original.

    The AI revolution's impact isn't limited to the creative industries; it's being felt across all areas of society. Scientists are using AI to accelerate discoveries, healthcare providers are employing it to analyse X-ray images, and emergency services utilise it to locate houses damaged by earthquakes. The report stresses that these cross-industry advancements are just the beginning, with future AI systems set to become increasingly capable, fuelled by advances in computing power, data, model architectures, and access to talent.

    The UK government has expressed its ambition to be a global leader in AI through its AI Opportunities Action Plan, announced by Prime Minister Keir Starmer on 13 January 2025. For its part, the TBI welcomes the government's ambition, stating that if properly designed and deployed, AI can make human lives healthier, safer, and more prosperous.

    However, the rapid spread of AI across sectors raises urgent policy questions, particularly concerning the data used for AI training. The application of UK copyright law to the training of AI models is currently contested, with the debate often framed as a zero-sum game between AI developers and rights holders. The TBI argues that this framing misrepresents both the challenge and the opportunity, and that bold policy solutions are needed to provide all parties with legal clarity and unlock investments that spur innovation, job creation, and economic growth.

    According to the TBI, AI presents opportunities for creators, noting its use in fields from podcasting to filmmaking. The report draws parallels with past technological innovations, such as the printing press and the internet, which were initially met with resistance but ultimately led to societal adaptation, with human ingenuity prevailing. The TBI proposes that the solution lies not in clinging to outdated copyright laws but in allowing them to co-evolve with technological change to remain effective in the age of AI.

    The UK government has proposed a text and data mining exception with an opt-out option for rights holders. While the TBI views this as a good starting point for balancing stakeholder interests, it acknowledges the significant implementation and enforcement challenges that come with it, spanning legal, technical, and geopolitical dimensions. In the report, the TBI assesses the merits of the government's proposal and outlines a holistic policy framework to make it work in practice.

    The report includes recommendations and examines novel forms of art that will emerge from AI. It also delves into the disagreement between rights holders and developers on copyright, the wider implications of copyright policy, and the serious hurdles the UK's text and data mining proposal faces. Furthermore, the TBI explores the challenges of governing an opt-out policy, implementation problems with opt-outs, making opt-outs useful and accessible, and tackling the diffusion problem. AI summaries and the identity problems they present are also addressed, along with defensive tools as a partial solution and approaches to solving licensing problems. The report also seeks to clarify standards on human creativity, address digital watermarking, and discuss the uncertainty around generative AI's impact on the industry. It proposes establishing a Centre for AI and the Creative Industries, and discusses the risk of judicial review, the benefits of a remuneration scheme, and the advantages of a targeted levy on ISPs to fund the Centre.

    However, the report has faced strong criticism. Ed Newton-Rex, CEO of Fairly Trained, raised several concerns on Bluesky, including:

    - The report repeats the claim that existing UK copyright law is uncertain, which Newton-Rex asserts is misleading and not the case.
    - The suggestion that an opt-out scheme would give rights holders more control over how their works are used is misleading. Newton-Rex argues that licensing is currently required by law, so moving to an opt-out system would actually decrease control, as some rights holders will inevitably miss the opt-out.
    - The report likens machine learning (ML) training to human learning, a comparison Newton-Rex finds shocking given the vastly different scalability of the two.
    - The report's claim that AI developers won't make long-term profits from training on people's work is questioned, with Newton-Rex pointing to the significant funding raised by companies like OpenAI.
    - Newton-Rex suggests the report uses strawman arguments, such as stating that generative AI may not replace all human paid activities.
    - A key criticism is that the report omits data showing how generative AI replaces demand for human creative labour.
    - Newton-Rex also criticises the report's proposed solutions, specifically the suggestion to set up an academic centre, which he notes no one has asked for. He highlights that the proposal to tax every household in the UK to fund this centre would place the financial burden on consumers rather than on the AI companies themselves, and that the revenue wouldn't even go to creators.

    Adding to these criticisms, British novelist Jonathan Coe noted that the five co-authors of this report on copyright, AI, and the arts are all from the science and technology sectors, with not one artist or creator among them.

    While the report from the Tony Blair Institute for Global Change supports the government's ambition to be an AI leader, it also raises critical policy questions, particularly around copyright law and AI training data.

    (Photo by Jez Timms)

    See also: Amazon Nova Act: A step towards smarter, web-native AI agents
  • Ant Group uses domestic chips to train AI models and cut costs
    www.artificialintelligence-news.com
    Ant Group is relying on Chinese-made semiconductors to train artificial intelligence models in order to reduce costs and lessen dependence on restricted US technology, according to people familiar with the matter.

    The Alibaba-affiliated company has used chips from domestic suppliers, including those tied to Alibaba and Huawei Technologies, to train large language models using the Mixture of Experts (MoE) method. The results were reportedly comparable to those produced with Nvidia's H800 chips, sources claim. While Ant continues to use Nvidia chips for some of its AI development, one source said the company is turning increasingly to alternatives from AMD and Chinese chip-makers for its latest models.

    The development signals Ant's deeper involvement in the growing AI race between Chinese and US tech firms, particularly as companies look for cost-effective ways to train models. The experimentation with domestic hardware reflects a broader effort among Chinese firms to work around export restrictions that block access to high-end chips like Nvidia's H800, which, although not the most advanced, is still one of the more powerful GPUs available to Chinese organisations.

    Ant has published a research paper describing its work, stating that its models in some tests performed better than those developed by Meta. Bloomberg News, which initially reported the matter, has not independently verified the company's results. If the models perform as claimed, Ant's efforts may represent a step forward in China's attempt to lower the cost of running AI applications and reduce reliance on foreign hardware.

    MoE models divide tasks into smaller data sets handled by separate components, and have gained attention among AI researchers and data scientists. The technique has been used by Google and the Hangzhou-based startup DeepSeek. The MoE concept is similar to having a team of specialists, each handling part of a task, making the process of producing models more efficient. Ant has declined to comment on its hardware sources.

    Training MoE models depends on high-performance GPUs, which can be too expensive for smaller companies to acquire or use. Ant's research focused on reducing that cost barrier; the paper's title is suffixed with a clear objective: "Scaling Models without premium GPUs" [our quotation marks].

    The direction taken by Ant, using MoE to reduce training costs, contrasts with Nvidia's approach. CEO Jensen Huang has said that demand for computing power will continue to grow, even with the introduction of more efficient models like DeepSeek's R1. His view is that companies will seek more powerful chips to drive revenue growth rather than aiming to cut costs with cheaper alternatives; Nvidia's strategy remains focused on building GPUs with more cores, transistors, and memory.

    According to the Ant Group paper, training one trillion tokens (the basic units of data AI models use to learn) cost about 6.35 million yuan (roughly $880,000) using conventional high-performance hardware. The company's optimised training method reduced that cost to around 5.1 million yuan by using lower-specification chips.

    Ant said it plans to apply the models produced in this way, Ling-Plus and Ling-Lite, to industrial AI use cases like healthcare and finance. Earlier this year, the company acquired Haodf.com, a Chinese online medical platform, to further its ambition to deploy AI-based solutions in healthcare. It also operates other AI services, including a virtual assistant app called Zhixiaobao and a financial advisory platform known as Maxiaocai.

    "If you find one point of attack to beat the world's best kung fu master, you can still say you beat them, which is why real-world application is important," said Robin Yu, chief technology officer of Beijing-based AI firm Shengshang Tech.

    Ant has made its models open source. Ling-Lite has 16.8 billion parameters (settings that help determine how a model functions), while Ling-Plus has 290 billion. For comparison, estimates suggest the closed-source GPT-4.5 has around 1.8 trillion parameters, according to MIT Technology Review.

    Despite the progress, Ant's paper noted that training models remains challenging: small adjustments to hardware or model structure during training sometimes resulted in unstable performance, including spikes in error rates.

    (Photo by Unsplash)

    See also: DeepSeek V3-0324 tops non-reasoning AI models in open-source first
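The efficiency idea behind MoE described above can be sketched in a few lines (an illustrative toy, not Ant's Ling implementation): a gating function scores the experts for each input, only the top-k experts actually run, and their outputs are combined with renormalised gate weights. This is why a 290-billion-parameter MoE model activates only a fraction of its parameters per token.

```python
# Toy Mixture-of-Experts routing: score experts, run only the top-k,
# combine their outputs. Experts here are simple scalar functions.
import math

NUM_EXPERTS = 4
TOP_K = 2

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def route(gate_logits, top_k=TOP_K):
    """Pick the top-k experts by gate probability; renormalise their weights."""
    probs = softmax(gate_logits)
    chosen = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:top_k]
    total = sum(probs[i] for i in chosen)
    return [(i, probs[i] / total) for i in chosen]

def moe_layer(x, experts, gate_logits):
    """Weighted sum of the chosen experts' outputs; the rest stay idle."""
    return sum(w * experts[i](x) for i, w in route(gate_logits))

# Toy "experts": expert k multiplies its input by (k + 1).
experts = [lambda x, k=k: (k + 1) * x for k in range(NUM_EXPERTS)]
print(moe_layer(2.0, experts, [0.1, 2.0, 0.3, 1.5]))
```

Only experts 1 and 3 execute for these gate logits; the other two cost nothing at inference time, which is the source of the training and serving savings the article describes.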
  • AI streamlines budgeting, but human oversight essential
    www.artificialintelligence-news.com
    Research conducted by Vlerick Business School has found that, in AI financial planning, the technology consistently outperforms humans when allocating budgets with strategic guidelines in place. Businesses that use AI in their budgeting processes see substantial improvements in the accuracy and efficiency of budgeting plans compared to human decision-making.

    The study's goal was to interpret AI's role in corporate budgeting by examining how well the technology performs when making financial decisions. Ultimately, it is an investigation into whether AI's financial decisions align with a company's long-term strategy, and how its decisions compare to those of human management.

    The researchers, Kristof Stouthuysen, Professor of Management Accounting and Digital Finance at Vlerick Business School, and PhD researcher Emma Willems, studied tactical and strategic budgeting approaches. Tactical budgeting refers to quick, responsive, short-term, data-driven financial decisions aimed at improving immediate performance, such as adjusting spending based on market trends. Strategic budgeting involves a more comprehensive approach focused on future planning, aligning resources with a business's vision.

    According to the research, AI is superior at tactical budgeting processes like cost management and resource allocation. However, human insight remains important to ensure accurate and strategic financial planning over the long term.

    The controlled experiment ran a management simulation in which experienced managers were asked to allocate budgets for a hypothetical automotive parts company. Stouthuysen and Willems then compared these human-made decisions to those produced by an AI algorithm using the same financial data. The results showed that AI was superior at optimising budgets when a company's strategic financial planning was clearly defined. However, AI struggled to make budgeting decisions when key performance indicators (KPIs) did not align with the company's financial goals.

    Stouthuysen and Willems' work emphasises the importance of collaboration between humans and AI: as AI continues to evolve, companies that use its strengths in tactical budgeting while maintaining human oversight in strategic planning will gain a competitive edge. The key is knowing where AI should lead and where human intuition remains indispensable.

    According to the study, AI can in principle take over from humans in tactical budgeting, providing more precise and efficient outcomes. Stouthuysen and Willems believe companies need to define their strategic priorities clearly and implement AI for tactical budgeting decisions to maximise financial performance and achieve sustainable growth.

    The findings challenge the widespread misconception that AI can completely substitute for humans in budgeting. Instead, the research emphasises the importance of a balanced approach that utilises both AI and humans, assigning tasks according to their proven abilities.

    (Image source: "Payday" by 401(K) 2013 is licensed under CC BY-SA 2.0.)
  • Kay Firth-Butterfield, formerly WEF: The future of AI, the metaverse and digital transformation
    www.artificialintelligence-news.com
    Kay Firth-Butterfield is a globally recognised leader in ethical artificial intelligence and a distinguished AI ethics speaker. As the former head of AI and Machine Learning at the World Economic Forum (WEF) and one of the foremost voices in AI governance, she has spent her career advocating for technology that enhances, rather than harms, society. We spoke to Kay about the promise and pitfalls of generative AI, the future of the Metaverse, and how organisations can prepare for a decade of unprecedented digital transformation.

    Generative AI has captured global attention, but there's still a great deal of misunderstanding around what it actually is. Could you walk us through what defines generative AI, how it works, and why it's considered such a transformative evolution of artificial intelligence?

    It's very exciting because it represents the next iteration of artificial intelligence. What generative AI allows you to do is ask questions of the world's data simply by typing a prompt. If we think back to science fiction, that's essentially what we've always dreamed of: being able to ask a computer a question and have it draw on all its knowledge to provide an answer.

    How does it do that? It predicts which word is likely to come next in a sequence. It does this by accessing enormous volumes of data; we refer to these as large language models. Essentially, the machine reads, or at least accesses, all the data available on the open web. In some cases, and this is an area of legal contention, it also accesses IP-protected and copyrighted material. We can expect a great deal of legal debate in this space.

    Once the model has ingested all this data, it begins to predict what word naturally follows another, enabling it to construct highly complex and nuanced responses. Anyone who has experimented with it knows that it can return surprisingly eloquent and insightful content through this predictive capability alone.

    Of course, sometimes it gets things wrong. In the AI community, we call this hallucination: essentially, the system fabricates information. That's a serious issue, because in order to rely on AI-generated outputs, we need to reach a point where we can trust the responses. The problem is, once a hallucination enters the data pool, it can be repeated and reinforced by the model.

    While much has been said about generative AI's technical potential, what do you see as the most meaningful societal and business benefits it offers? And what challenges must we address to ensure these advantages are equitably realised?

    AI is now accessible to everyone, and that's incredibly powerful. It's a hugely democratising tool. It means that small and medium-sized enterprises, which previously couldn't afford to leverage AI, now can.

    However, we also need to be aware that most of the world's data is created in the United States first, followed by Europe and China. There are clear challenges regarding the datasets these large language models are trained on: they're not truly using global data, but a limited subset. That has led to discussions around digital colonisation, where content generated from American and European data is projected onto the rest of the world, with an implicit expectation that others will adopt and use it. Different cultures, of course, require different responses. So, while there are countless benefits to generative AI, there are also significant challenges that we must address if we want to ensure fair and inclusive outcomes.

    The Metaverse has seen both hype and hesitation in recent years. From your perspective, what is its current trajectory, and how do you see its role evolving within business environments over the next five years?

    It's interesting. We went through a phase of huge excitement around the Metaverse, where everyone wanted to be involved. But now we've entered more of a Metaverse winter, or perhaps autumn, as it's become clear just how difficult it is to create compelling content for these immersive spaces.

    We're seeing strong use cases in industrial applications, but we're still far from achieving that Ready Player One vision where we live, shop, buy property, and fully interact in 3D virtual environments. That's largely because the level of compute power and creative resources needed to build truly immersive experiences is enormous.

    In five years' time, I think we'll start to see the Metaverse delivering on more of its promises for business. Customers may enjoy exceptional shopping experiences: entering virtual stores rather than simply browsing online, where they can feel fabrics virtually and make informed decisions in real time. We may also see remote working evolve, with employees collaborating inside the Metaverse as if they were in the same room. One study found that younger workers often lack adequate supervision when working remotely; in a Metaverse setting, you could offer genuine, interactive supervision and mentorship. It may also help foster the colleague relationships that are often missed in remote work.

    Ultimately, the Metaverse removes physical constraints and offers new ways of working and interacting, but we'll need balance. Many people may not want to spend all their time in fully immersive environments.

    Looking ahead, which emerging technologies and AI-driven trends do you anticipate will have the most profound global impact over the next decade? And how should we be preparing for their implications, both economically and ethically?

    That's a great question; it's a bit like pulling out a crystal ball. But without doubt, generative AI is one of the most significant shifts we're seeing today. As the technology becomes more refined, it will increasingly power new AI applications through natural language interactions. Natural Language Processing (NLP) is the AI term for the machine's ability to understand and interpret human language. In the near future, only elite developers will need to code manually; the rest of us will interact with machines by typing or speaking requests. These systems will not only provide answers but also write code on our behalf. It's incredibly powerful, transformative technology.

    But there are downsides. One major concern is that AI sometimes fabricates information, and as generative AI becomes more prolific, it is generating massive volumes of data around the clock. Over time, machine-generated data may outnumber human data, which could distort the digital landscape. We must ensure AI doesn't perpetuate falsehoods it has previously generated.

    Looking further ahead, this shift raises deep questions about the future of human work. If AI systems can outperform humans in many tasks without fatigue, what becomes of our role? There may be cost savings, but also the very real risk of widespread unemployment.

    AI also powers the Metaverse, so progress there is tied to improvements in AI capabilities. I'm also very excited about synthetic biology, which could see huge advancements driven by AI. There's also likely to be significant interplay between quantum computing and AI, which could bring both benefits and serious challenges. We'll see more Internet of Things (IoT) devices as well, but that introduces new issues around security and data protection.

    It's a time of extraordinary opportunity, but also serious risks. Some worry about artificial general intelligence becoming sentient, but I don't see that as likely just yet. Current models lack causal reasoning; they're still predictive tools. We would need to add something fundamentally different to reach human-level intelligence. But make no mistake: we are entering an incredibly exciting era.

    Adopting new technologies can be both an opportunity and a risk for businesses. In your view, how can organisations strike the right balance between embracing digital transformation and making strategic, informed decisions about AI adoption?

    I think it's vital to adopt the latest technologies, just as it would have been important for Kodak to see the shift coming in the photography industry. Businesses that fail even to explore digital transformation risk being left behind.

    However, a word of caution: it's easy to jump in too quickly and end up with the wrong AI solution, or the wrong systems entirely, for your business. So I would advise approaching digital transformation with careful thought. Keep your eyes open, and treat each step as a deliberate, strategic business decision.

    When you decide that you're ready to adopt AI, it's crucial to hold your suppliers to account. Ask the hard questions. Ask detailed questions. Make sure you have someone in-house, or bring in a consultant, who knows enough to help you interrogate the technology properly. As we all know, one of the greatest wastes of money in digital transformation happens when the right questions aren't asked up front. Getting it wrong can be incredibly costly, so take the time to get it right.

    (Photo by petr sidorov on Unsplash)
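Kay's description of generative AI as "predicting which word is likely to come next" can be illustrated with a toy bigram model (a deliberate simplification; real LLMs use neural networks trained on billions of tokens, not frequency counts over a tiny corpus):

```python
# Toy next-word predictor: count which word follows which in a tiny corpus,
# then generate text by repeatedly picking the most frequent continuation.
# The loop (score candidates, pick the likeliest, repeat) is the core idea
# Kay describes, at a vastly smaller scale than a real LLM.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Most frequent continuation seen in the corpus (ties: first seen)."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

def generate(start, length):
    out = [start]
    for _ in range(length):
        nxt = predict_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(predict_next("the"))   # → cat
print(generate("the", 4))    # → the cat sat on the
```

The hallucination problem she raises also shows up here in miniature: the model happily emits the statistically likeliest continuation whether or not the resulting sentence is true.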