• WWW.COMPUTERWEEKLY.COM
    Government funding to help businesses discover AI value
    The government is betting the bank on the power of artificial intelligence to fix the public sector, mend roads and boost the UK economy.
    By Cliff Saran, Managing Editor. Published: 14 Jan 2025 13:30
    The government has announced £7m of funding for 120 artificial intelligence (AI) projects to help small businesses. The funding forms part of the UK Research and Innovation (UKRI) Technology Missions Fund, with support from the Innovate UK BridgeAI programme, and builds on the AI opportunities action plan announced on Monday. The government has also announced £1m of funding to develop bespoke AI tools to support teachers and transform education.
    Labour hopes the funding for small business will enable companies to trial AI in a range of sectors, including agriculture, transport and construction. The funding, part of the government's Plan for Change, aims to give people the tools they need to harness the power of new technologies such as AI. Other projects include using AI at a bakery to predict sales and forecast how much of each product needs to be made daily. There are also plans to deploy AI tools to predict potholes before they form, which would enable roads to be repaired earlier and more cheaply. Another trial involves using an AI model to determine where mould is likely to grow in buildings, so that it can be remediated before it becomes a health and safety issue.
    Discussing the AI trials and testing, science and technology secretary Peter Kyle said: "Putting AI to work right across the economy can help businesses cut waste, move faster and be more productive. The huge range of projects receiving funding today, from farmers and bakers to those tackling potholes on our roads and mould in residential properties, demonstrates the truly limitless benefits of AI that are there for the taking."
    This latest round of government funding for AI is part of UKRI's £320m investment in technology missions, which aims to build new and existing capabilities and capacity in artificial intelligence, engineering biology, future telecommunications and quantum technologies in the years 2023 to 2025 and beyond.
    Along with the funding, Innovate UK's BridgeAI programme has been set up to give businesses access to training and scientific expertise. The Innovate UK-funded programme, which is delivered by a consortium that also includes Digital Catapult, The Alan Turing Institute, STFC Hartree Centre and BSI, provides advice and guidance to help companies develop their AI innovations.
    Esra Kasapoglu, director of AI and data economy at UKRI Innovate UK, said: "The adoption of AI in UK industry is fundamental to supporting the country's economic growth. Today's investment will enable us, through BridgeAI, to help more companies to unlock the potential of AI in their business. It will also allow further development of projects already demonstrating impact to continue their AI journey."
    In what appears to be a boost to the government's AI action plan, HSBC has reported that 2024 was a strong year for investment in AI-focused businesses. It announced that UK AI-focused venture-backed businesses raised $4.2bn in venture capital investment in 2024, a 31% increase from $3.2bn in 2023. According to HSBC, this meant that over a quarter (27%) of all venture capital was raised by AI startups in 2024.
    HSBC also reported that AI companies closed five mega rounds of over $100m, including a $1.1bn round from Wayve.
    Read more about UK government artificial intelligence:
    Can UK government achieve ambition to become AI powerhouse? The artificial intelligence opportunities action plan has been largely well received, but there are plenty of questions about how it will be achieved.
    UK government launches AI assurance platform for enterprises: The platform is designed to drive demand for the UK's artificial intelligence assurance sector and help build greater trust in the technology by helping businesses identify and mitigate a range of AI-related risks.
  • WWW.ZDNET.COM
    I used Amazon's Echo Show 21 as my smart home hub - and it's almost perfect
    Is the enormous Echo Show 21 Amazon's best smart display or its biggest missed opportunity? Here's my verdict.
  • WWW.FORBES.COM
    Building Agility Through A Culture Of Continuous Learning
    A few key strategies that organizations can employ to begin fostering a learning culture include the following.
  • Gaming community casts doubt on Elon Musk's Path of Exile 2 achievements
    A hot potato: Tech billionaire Elon Musk's recent foray into the world of hardcore gaming has raised eyebrows within the Path of Exile 2 (PoE 2) community. The Tesla and SpaceX CEO, who previously boasted about his high rankings in games like Diablo 4, streamed an hour and a half of gameplay on January 7, showcasing his level 95 hardcore character, a feat that typically requires immense skill and game knowledge. However, his playing was riddled with inconsistencies and basic errors.
    The stream began to raise eyebrows just 18 minutes in when Musk accessed his character's stash. A tab labeled "Elon's map" stood out conspicuously, as other tabs lacked similar personalized names. This oddity sparked speculation that someone else might have curated the tab's contents to facilitate easier gameplay, allowing Musk to display his high-level character without risking its demise.
    As the stream progressed, Musk's gameplay continued to falter. His character's movement appeared erratic and his clearspeed was notably slow for a player of his supposed caliber. In one glaring instance, Musk allowed his character's mana meter to remain empty for a full 10 seconds, causing a significant drop in damage output -- a rookie mistake for someone claiming top-tier status.
    Further inconsistencies emerged in Musk's item management. He inexplicably left valuable currency items like Chaos Orbs and Exalted Orbs uncollected on the ground, while picking up low-tier keystones that would be useless for a character at his level of progression.
    Musk's navigation of the game's Atlas map system also raised questions. He appeared to struggle with identifying which nodes he could run, despite his character having theoretically cleared hundreds, if not thousands, of maps to reach its current level and gear status. At one point, when selecting a map to clear, Musk's commentary was limited to noting, "There are four things here," when referring to the map's modifiers -- a simplistic observation for a supposedly experienced player.
    The stream also revealed that Musk was playing without an active loot filter, a standard tool used by high-level players to efficiently manage item drops. Instead, he was seen manually clicking and dragging items into his inventory, a tedious process that would have required sorting through thousands of dropped items to assemble his character's equipment.
    Perhaps the most damning evidence of Musk's questionable expertise came when he reviewed his character's equipment. Inspecting his weapon, Musk commented that it was "only level 62" compared to his level 95 character, demonstrating a fundamental misunderstanding of Path of Exile 2's itemization system. In the game, an item's effectiveness is determined by its item level, not its level requirement -- a basic concept that any high-ranking player would undoubtedly understand.
    Musk repeated this mistake as he examined each piece of his character's gear, stating, "My equipment's pretty low level compared to my character level, but it seems to work pretty well." This comment was particularly jarring given that his character was equipped with high-end gear that had been meticulously collected, crafted, and optimized for his build.
    Redditors got right to the point. Streamer Quin69 stated, "One hundred percent someone plays an account for him," while Asmongold added, "There's no way he played that account.
    I'm sorry, but somebody played it for him, one hundred f***ing percent."
    User InvestmentFew9366 commented, "His gear is better than a lot of full time streamers. No way it is real. Account sharing or boost of some kind." Another Redditor, frenchpatato, pointed out, "I mean he has a f'in mirror tier staff and says he needs to replace it because it's only level 62. You don't need to say anything else."
    The incident has sparked a broader discussion about the motivations behind Musk's gaming claims. As Redditor Elbjornbjorn noted, "The weird part to me is that he feels he needs to brag about being a pro gamer for some reason. Dude put a car his company designed into space on a rocket his other company designed, why the hell does he need to brag about gaming?"
  • WWW.DIGITALTRENDS.COM
    Timex is making a wearable with a sensor to track brains, not hearts
    Timex has announced a partnership with Pison, a technology company that makes a brain-tracking, neural-sensing platform for wearables, and it intends to integrate it into a new range of products coming early this year. It may also be setting the stage for a new smartwatch powered by Qualcomm's Snapdragon W5 processor.
    The Pison app (Pison)
    Understanding Pison's sensor and algorithm is essential to understanding what the future Timex products will offer. Pison's electroneurography (ENG) platform measures physiological electricity originating from your brain using a skin biosensor, which, when combined with AI-powered software algorithms, can provide insights into mental health, sleep, sports performance, and brain health. Think of it like a heart rate sensor that monitors your brain instead.
    Working with semiconductor experts STMicroelectronics to miniaturize its ENG biosensor for use in wearables, Pison has integrated it into Qualcomm's Snapdragon W5 platform, which is found in a variety of smartwatches, including the Mobvoi TicWatch Pro 5 series. Pison currently sells a wristband that incorporates its sensor and AI algorithms, which are designed to holistically and measurably improve mind-body fitness. It's aimed at people wanting to sharpen mental skills.
    Pison's device is not especially attractive, and certainly not lifestyle orientated, which limits its appeal. This is likely where the new Timex partnership comes in. Timex president Marco Zambianchi said: "We're thrilled to collaborate with Pison on integrating their groundbreaking neural sensor technology into our products. At Timex, our commitment is to deliver exceptional, innovative wearables that enhance the lives of consumers worldwide. This partnership allows us to redefine the boundaries of what's possible in the smartwatch and wearable space, offering unparalleled functionality and value to our customers."
    The Timex Metropolitan R (Andy Boxall / Digital Trends)
    What will a Timex smartwatch or band with Pison's sensor offer? Expect something quite different from a normal fitness-tracking smartwatch, as it will give an insight into your mental health and well-being, including sleep, anxiety, and fatigue levels, while also assessing alertness. It will help athletes improve cognitive ability as well. It will also monitor for neurodegenerative diseases including Parkinson's, Alzheimer's, and ALS, plus gauge the effect of impacts to the head from contact sports and for those in the military.
    An accompanying app will show data collected by the sensor, and Pison has integrated a gesture control system that builds on what we've already tried with Apple's Double Tap system to help make the wearable easier to use. Interestingly, the sensor recognizes complex gestures using the electricity generated by the brain when we tell our fingers to move.
    Timex has experimented with smartwatches and wellness products in the past, such as the Metropolitan R and the wellness-focused Teslar Watch. Pison joins companies like Alphabeats in experimenting with brain-tracking wearables that may improve sports performance and focus, or reduce stress.
    Timex intends to launch its first brain-tracking wearable with Pison technology inside in the spring, and the likely inclusion of the Snapdragon W5 platform suggests it may use Google's Wear OS software.
  • WWW.WSJ.COM
    Macquarie to Invest Up to $5 Billion in Applied Digital AI Data Centers
    The deal follows strong investor interest in businesses connected to the AI boom.
  • ARSTECHNICA.COM
    Amazon must solve hallucination problem before launching AI-enabled Alexa
    Rollout of upgraded voice assistant hit by delays.
    Madhumita Murgia and Camilla Hodgson, Financial Times. Jan 14, 2025 9:18 am
    Credit: Anadolu via Getty Images
    Amazon is gearing up to relaunch its Alexa voice-powered digital assistant as an artificial intelligence agent that can complete practical tasks, as the tech group races to resolve the challenges that have dogged the system's AI overhaul.
    The $2.4 trillion company has for the past two years sought to redesign Alexa, its conversational system embedded within 500 million consumer devices worldwide, so the software's brain is transplanted with generative AI.
    Rohit Prasad, who leads the artificial general intelligence (AGI) team at Amazon, told the Financial Times the voice assistant still needed to surmount several technical hurdles before the rollout. These include solving the problem of hallucinations or fabricated answers, its response speed or latency, and reliability. "Hallucinations have to be close to zero," said Prasad. "It's still an open problem in the industry, but we are working extremely hard on it."
    The vision of Amazon's leaders is to transform Alexa, which is currently still used for a narrow set of simple tasks such as playing music and setting alarms, into an agentic product that acts as a personalised concierge. This could include anything from suggesting restaurants to configuring the lights in the bedroom based on a person's sleep cycles.
    Alexa's redesign has been in train since the launch of OpenAI's ChatGPT, backed by Microsoft, in late 2022. While Microsoft, Google, Meta and others have quickly embedded generative AI into their computing platforms and enhanced their software services, critics have questioned whether Amazon can resolve its technical and organisational struggles in time to compete with its rivals.
    According to multiple staffers who have worked on Amazon's voice assistant teams in recent years, its effort has been beset with complications and follows years of AI research and development. Several former workers said the long wait for a rollout was largely due to the unexpected difficulties involved in switching and combining the simpler, predefined algorithms Alexa was built on with more powerful but unpredictable large language models.
    In response, Amazon said it was working hard to enable even more proactive and capable assistance of its voice assistant. It added that a technical implementation of this scale, into a live service and suite of devices used by customers around the world, was unprecedented and not as simple as overlaying an LLM onto the Alexa service.
    Prasad, the former chief architect of Alexa, said last month's release of the company's in-house Amazon Nova models, led by his AGI team, was in part motivated by the specific needs for optimum speed, cost and reliability, in order to help AI applications such as Alexa get to "that last mile, which is really hard".
    To operate as an agent, Alexa's "brain" has to be able to call hundreds of third-party software applications and services, Prasad said. "Sometimes we underestimate how many services are integrated into Alexa, and it's a massive number.
    "These applications get billions of requests a week, so when you're trying to make reliable actions happen at speed ... you have to be able to do it in a very cost-effective way," he added.
    The complexity comes from Alexa users expecting quick responses as well as extremely high levels of accuracy. Such qualities are at odds with the inherent probabilistic nature of today's generative AI, statistical software that predicts words based on speech and language patterns. Some former staff also point to struggles to preserve the assistant's original attributes, including its consistency and functionality, while imbuing it with new generative features such as creativity and free-flowing dialogue.
    Because of the more personalised, chatty nature of LLMs, the company also plans to hire experts to shape the AI's personality, voice and diction so it remains familiar to Alexa users, according to one person familiar with the matter.
    One former senior member of the Alexa team said that while LLMs were very sophisticated, they come with risks, such as producing answers that are completely invented some of the time. "At the scale that Amazon operates, that could happen large numbers of times per day," they said, damaging its brand and reputation.
    In June, Mihail Eric, a former machine learning scientist at Alexa and founding member of its conversational modelling team, said publicly that Amazon had dropped the ball on becoming the unequivocal market leader in conversational AI with Alexa. Eric said that despite having strong scientific talent and huge financial resources, the company had been riddled with technical and bureaucratic problems, suggesting data was poorly annotated and documentation was either non-existent or stale.
    According to two former employees working on Alexa-related AI, the historic technology underpinning the voice assistant had been inflexible and difficult to change quickly, weighed down by a clunky and disorganised code base and an engineering team spread too thin.
    The original Alexa software, built on top of technology acquired from British start-up Evi in 2012, was a question-answering machine that worked by searching within a defined universe of facts to find the right response, such as the day's weather or a specific song in your music library.
    The new Alexa uses a bouquet of different AI models to recognise and translate voice queries and generate responses, as well as to identify policy violations, such as picking up inappropriate responses and hallucinations. Building software to translate between the legacy systems and the new AI models has been a major obstacle in the Alexa-LLM integration. The models include Amazon's own in-house software, including the latest Nova models, as well as Claude, the AI model from start-up Anthropic, in which Amazon has invested $8 billion over the course of the past 18 months.
    "[T]he most challenging thing about AI agents is making sure they're safe, reliable and predictable," Anthropic's chief executive Dario Amodei told the FT last year. "Agent-like AI software needs to get to the point where ... people can actually have trust in the system," he added. "Once we get to that point, then we'll release these systems."
    One current employee said more steps were still needed, such as overlaying child safety filters and testing custom integrations with Alexa such as smart lights and the Ring doorbell. "The reliability is the issue, getting it to be working close to 100 percent of the time," the employee added.
    "That's why you see us ... or Apple or Google shipping slowly and incrementally."
    Numerous third parties developing skills or features for Alexa said they were unsure when the new generative AI-enabled device would be rolled out and how to create new functions for it. "We're waiting for the details and understanding," said Thomas Lindgren, co-founder of Swedish content developer Wanderword. "When we started working with them they were a lot more open ... then with time, they've changed."
    Another partner said that after an initial period of pressure that was put on developers by Amazon to start getting ready for the next generation of Alexa, things had gone quiet.
    An enduring challenge for Amazon's Alexa team, which was hit by major lay-offs in 2023, is how to make money. Figuring out how to make the assistants cheap enough to run at scale will be a major task, said Jared Roesch, co-founder of generative AI group OctoAI. Options being discussed include creating a new Alexa subscription service, or taking a cut of sales of goods and services, said a former Alexa employee.
    Prasad said Amazon's goal was to create a variety of AI models that could act as the building blocks for a variety of applications beyond Alexa. "What we are always grounded on is customers and practical AI, we are not doing science for the sake of science," Prasad said. "We are doing this ... to deliver customer value and impact, which in this era of generative AI is becoming more important than ever because customers want to see a return on investment."
    © 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.
  • WWW.INFORMATIONWEEK.COM
    Are We Ready for Artificial General Intelligence?
    The artificial intelligence evolution is well underway. AI technology is changing how we communicate, do business, manage our energy grid, and even diagnose and treat illnesses. And it is evolving more rapidly than we could have predicted. Both the companies that produce the models driving AI and the governments that are attempting to regulate this frontier environment have struggled to institute appropriate guardrails.
    In part, this is due to how poorly we understand how AI actually functions. Its decision-making is notoriously opaque and difficult to analyze. Thus, regulating its operations in a meaningful way presents a unique challenge: How do we steer a technology away from making potentially harmful decisions when we don't exactly understand how it makes its decisions in the first place?
    This is becoming an increasingly pressing problem as artificial general intelligence (AGI) and its successor, artificial superintelligence (ASI), loom on the horizon. AGI is AI equivalent to or surpassing human intelligence. ASI is AI that exceeds human intelligence entirely. Until recently, AGI was believed to be a distant possibility, if it was achievable at all. Now, an increasing number of experts believe that it may only be a matter of years until AGI systems are operational.
    As we grapple with the unintended consequences of current AI applications -- understood to be less intelligent than humans because of their typically narrow and limited functions -- we must simultaneously attempt to anticipate and obviate the potential dangers of AI that might match or outstrip our capabilities.
    AI companies are approaching the issue with varying degrees of seriousness -- sometimes leading to internal conflicts. National governments and international bodies are attempting to impose some order on the digital Wild West, with limited success. So, how ready are we for AGI? Are we ready at all?
    InformationWeek investigates these questions, with insights from Tracy Jones, associate director of digital consultancy Guidehouse's data and AI practice; May Habib, CEO and co-founder of generative AI company Writer; and Alexander De Ridder, chief technology officer of AI developer SmythOS.
    What Is AGI and How Do We Prepare Ourselves?
    The boundaries between narrow AI, which performs a specified set of functions, and true AGI, which is capable of broader cognition in the same way that humans are, remain blurry. As Miles Brundage, whose recent departure as senior advisor of OpenAI's AGI Readiness team has spurred further discussion of how to prepare for the phenomenon, says, AGI is "an overloaded phrase."
    "AGI has many definitions, but regardless of what you call it, it is the next generation of enterprise AI," Habib says. "Current AI technologies function within pre-determined parameters, but AGI can handle much more complex tasks that require a deeper, contextual understanding. In the future, AI will be capable of learning, reasoning, and adapting across any task or work domain, not just those pre-programmed or trained into it."
    AGI will also be capable of creative thinking and action that is independent of its creators. It will be able to operate in multiple realms, completing numerous types of tasks. It is possible that AGI may, in its general effect, be a person.
    There is some suggestion that personality qualities may be successfully encoded into a hypothetical AGI system, leading it to act in ways that align with certain sorts of people, with particular personality qualities that influence their decision-making. However, as it is defined, AGI appears to be a distinct possibility in the near future. We simply do not know what it will look like.
    "AGI is still technically theoretical. How do you get ready for something that big?" Jones asks. "If you can't even get ready for the basics -- you can't tie your shoe -- how do you control the environment when it's 1,000 times more complicated?"
    Such a system, which will approach sentience, may thus be capable of human failings due to simple malfunction, misdirection due to hacking events, or even intentional disobedience on its own. If any human personality traits are encoded, intentionally or not, they ought to be benign or at least beneficial -- a highly subjective and difficult determination to make. AGI needs to be designed with the idea that it can ultimately be trusted with its own intelligence -- that it will act with the interests of its designers and users in mind. They must be closely aligned with our own goals and values.
    "AI guardrails are and will continue to come down to self-regulation in the enterprise," Habib says. "While LLMs can be unreliable, we can get nondeterministic systems to do mostly deterministic things when we're specific with the outcomes we want from our generative AI applications. Innovation and safety are a balancing act. Self-regulation will continue to be key for AI's journey."
    Disbandment of OpenAI's AGI Readiness Team
    Brundage's departure from OpenAI in late October following the disbandment of its AGI Readiness team sent shockwaves through the AI community. He joined the company in 2018 as a researcher and led its policy research since 2021, serving as a key watchdog for potential issues created by the company's rapidly advancing products. The dissolution of his team and his departure followed on the heels of the implosion of its Superalignment team in May, which had served a similar oversight purpose.
    Brundage claimed that he would either join a nonprofit focused on monitoring AI concerns or start his own. While both he and OpenAI claimed that the split was amicable, observers have read between the lines, speculating that his concerns had not been taken seriously by the company. The members of the team who stayed with the company have been shuffled to other departments. Other significant figures at the company have also left in the past year.
    Though the Substack post in which he extensively described his reasons for leaving and his concerns about AGI was largely diplomatic, Brundage stated that no one was ready for AGI -- fueling the hypothesis that OpenAI and other AI companies are disregarding the guardrails their own employees are attempting to establish. A June 2024 open letter from employees of OpenAI and other AI companies warns of exactly that. Brundage's exit is seen as a signifier that the old guard of AI has been sent to the hinterlands -- and that unbridled excess may follow in their absence.
    Potential Risks of AGI
    As with the risks of narrow AI, those posed by AGI range from the mundane to the catastrophic.
    "One underappreciated reason there are so few generative AI use cases at scale in the enterprise is fear -- but it's fear of job displacement, loss of control, privacy erosion and cultural adjustments -- not the end of mankind," Habib notes.
    "The biggest ethical concerns right now are data privacy, transparency and algorithmic bias."
    "You don't just build a super-intelligent system and hope it behaves; you have to account for all sorts of unintended consequences, like AI following instructions too literally without understanding human intent," De Ridder adds. "We're still figuring out how to handle that. There's just not enough emphasis on these problems yet. A lot of the research is still missing."
    An AGI system that has negative personality traits, encoded by its designer intentionally or unintentionally, would likely amplify those traits in its actions. For example, the Big Five personality trait model characterizes human personalities according to openness, conscientiousness, extraversion, agreeableness, and neuroticism. If a model is particularly disagreeable, it might act against the interests of the humans it is meant to serve if it feels that is the best course of action. Or, if it is highly neurotic, it might end up dithering over issues that are ultimately inconsequential. There is also concern that AGI models may consciously evade attempts to modify their actions -- essentially, being dishonest with their designers and users.
    These traits can result in very consequential effects when it comes to moral and ethical decision-making -- with which AGI systems might conceivably be entrusted. Biases and unfair decision-making might have potentially massive consequences if these systems are entrusted with decisions at large scale.
    Decisions that are based on inferences from information on individuals may lead to dangerous effects, essentially stereotyping people on the basis of data -- some of which may have originally been harvested for entirely different purposes. Further, data harvesting itself could increase exponentially if the system feels that it is useful. This intersects with privacy concerns -- data fed into or harvested by these models may not necessarily have been harvested with consent. The consequences could unfairly impact certain individuals or groups of individuals.
    Untrammeled AGI might also have society-wide effects. The fact that AGI will have human capabilities raises the concern that it will wipe out entire employment sectors, leaving people with certain skill sets without a means of gainful employment, thus leading to social unrest and economic instability.
    "AGI would greatly increase the magnitude of cyber-attacks and have the potential to be able to take out infrastructure," Jones adds. "If you have a bunch of AI bots that are emotionally intelligent and that are talking with people constantly, the ability to spread disinformation increases dramatically. Weaponization becomes a big issue -- the ability to control your systems." Large-scale cyber-attacks that target infrastructure or government databases, or the launch of massive misinformation campaigns, could be devastating.
    The autonomy of these systems is particularly concerning. These events might happen without any human oversight if the AGI is not properly designed to consult with or respond to its human controllers. And the ability of malicious human actors to infiltrate an AGI system and redirect its power is of equal concern.
    It has even been proposed that AGI might assist in the production of bioweapons. The 2024 International Scientific Report on the Safety of Advanced AI articulates a host of other potential effects -- and there are almost certainly others that have not yet been anticipated.
    What Companies Need To Do To Be Ready
    There are a number of steps that companies can take to ensure that they are at least marginally ready for the advent of AGI.
    "The industry needs to shift its focus toward foundational safety research, not just faster innovation. I believe in designing AGI systems that evolve with constraints -- think of them having lifespans or offspring models, so we can avoid long-term compounding misalignment," De Ridder advises.
    Above all, rigorous testing is necessary to prevent the development of dangerous capabilities and vulnerabilities prior to deployment. Ensuring that the model is amenable to correction is also essential. If it resists efforts to redirect its actions while it is still in the development phase, it will likely become even more resistant as its capabilities advance. It is also important to build models whose actions can be understood -- already a challenge in narrow AI. Tracing the origins of erroneous reasoning is crucial if it is to be effectively modified.
    Limiting its curiosity to specific domains may prevent AGI from taking autonomous action in areas where it may not understand the unintended consequences -- detonating weapons, for example, or cutting off the supply of essential resources if those actions seem like possible solutions to a problem. Models can be coded to detect when a course of action is too dangerous and to stop before executing such tasks.
    Ensuring that products are resistant to penetration by outside adversaries during their development is also imperative. If an AGI technology proves susceptible to external manipulation, it is not safe to release it into the wild. Any data that is used in the creation of an AGI must be harvested ethically and protected from potential breaches.
    Human oversight must be built into the system from the start -- while the goal is to facilitate autonomy, it must be limited and targeted. Coding for conformal procedures, which request human input when more than one solution is suggested, may help to rein in potentially damaging decisions and train models to understand when they are out of line (a minimal sketch of this idea appears at the end of this item). Such procedures are one instance of a system being designed so that humans know when to intervene. There must also be mechanisms that allow humans to intervene and stop a potentially dangerous course of action -- variously referred to as kill switches and failsafes.
    And ultimately, AI systems must be aligned to human values in a meaningful way. If they are encoded to perform actions that do not align with fundamental ethical norms, they will almost certainly act against human interests.
    Engaging with the public on their concerns about the trajectory of these technologies may be a significant step toward establishing a good-faith relationship with those who will inevitably be affected. So too, transparency about where AGI is headed and what it might be capable of might facilitate trust in the companies that are developing its precursors.
    Some have suggested that open source code might allow for peer review and critique. Ultimately, anyone designing systems that may result in AGI needs to plan for a multitude of outcomes and be able to manage each one of them if they arise.
    How Ready Are AI Companies?
    Whether or not the developers of the technology leading to AGI are actually ready to manage its effects is, at this point, anyone's guess. The larger AI companies -- OpenAI, DeepMind, Meta, Adobe, and upstart Anthropic, which focuses on safe AI -- have all made public commitments to maintaining safeguards. Their statements and policies range from vague gestures toward AI safety to elaborate theses on the obligation to develop thoughtful, safe AI technology. DeepMind, Anthropic and OpenAI have released elaborate frameworks for how they plan on aligning their AI models with human values.
    One survey found that 98% of respondents from AI labs agreed that labs should conduct pre-deployment risk assessments, dangerous capabilities evaluations, third-party model audits, safety restrictions on model usage, and red teaming.
    Even in their public statements, it is clear that these organizations are struggling to balance their rapid advancement with responsible alignment, development of models whose actions can be interpreted, and monitoring of potentially dangerous capabilities.
    "Right now, companies are falling short when it comes to monitoring the broader implications of AI, particularly AGI. Most of them are spending only 1-5% of their compute budgets on safety research, when they should be investing closer to 20-40%," says De Ridder. They do not seem to know whether debiasing their models or subjecting them to human feedback is actually sufficient to mitigate the risks they might pose down the line.
    But other organizations have not even gotten that far. "A lot of organizations that are not AI companies -- companies that offer other products and services that utilize AI -- do not have AI security teams yet," Jones says. "They haven't matured to that place."
    However, she thinks that is changing. "We're starting to see a big uptick across companies and government in general in focusing on security," she observes, adding that in addition to dedicated safety and security teams, there is a movement to embed safety monitoring throughout the organization. "A year ago, a lot of people were just playing with AI without that, and now people are reaching out. They want to understand AI readiness and they're talking about AI security."
    This suggests a growing realization amongst both AI developers and their customers that serious consequences are a near inevitability. "I've seen organizations sharing information -- there's an understanding that we all have to move forward and that we can all learn from each other," Jones claims.
    Whether the leadership and the actual developers behind the technology are taking the recommendations of any of these teams seriously is a separate question. The exodus of multiple OpenAI staffers -- and the letter of warning they signed earlier this year -- suggests that at least in some cases, safety monitoring is being ignored or at least downplayed.
    "It highlights the tension that is going to be there between really fast innovation and ensuring that it is responsible," Jones adds.
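    The "conformal procedures" mentioned above -- mechanisms that request human input when more than one solution is suggested -- can be pictured with a short sketch. The following Python snippet is not from the article; the function names, threshold and handlers are hypothetical, and it only illustrates one way a prediction-set gate might defer to a person whenever a model cannot commit to a single answer.

    # Hypothetical illustration: defer to a human when the model is ambiguous.
    def prediction_set(scores, threshold=0.9):
        """Smallest set of candidate actions whose combined confidence reaches `threshold`.
        `scores` maps candidate actions to model confidence values summing to ~1.0."""
        ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
        chosen, total = [], 0.0
        for action, score in ranked:
            chosen.append(action)
            total += score
            if total >= threshold:
                break
        return chosen

    def decide(scores, act, ask_human):
        """Act autonomously only when exactly one candidate survives the gate."""
        candidates = prediction_set(scores)
        if len(candidates) == 1:
            return act(candidates[0])      # unambiguous: proceed automatically
        return ask_human(candidates)       # ambiguous: escalate to a human reviewer

    # Example usage with dummy handlers:
    act = lambda a: f"executed: {a}"
    ask_human = lambda options: f"escalated to human, options: {options}"
    print(decide({"reroute_power": 0.95, "shut_down_grid": 0.05}, act, ask_human))   # executed
    print(decide({"reroute_power": 0.55, "shut_down_grid": 0.45}, act, ask_human))   # escalated

    In a real system the threshold would be calibrated on held-out data (that calibration is what makes the approach "conformal"), and the escalation path would feed a review queue or kill switch rather than a print statement.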
  • WWW.TECHNOLOGYREVIEW.COM
    The Download: the future of nuclear power, and fact checking Mark Zuckerberg
    This is today's edition of The Download, our weekday newsletter that provides a daily dose of what's going on in the world of technology.
    What's next for nuclear power
    While nuclear reactors have been generating power around the world for over 70 years, the current moment is one of potentially radical transformation for the technology. As electricity demand rises around the world for everything from electric vehicles to data centers, there's renewed interest in building new nuclear capacity, as well as extending the lifetime of existing plants and even reopening facilities that have been shut down. Efforts are also growing to rethink reactor designs, and 2025 marks a major test for so-called advanced reactors as they begin to move from ideas on paper into the construction phase. Here's what to expect next for the industry. Casey Crownhart
    This piece is part of MIT Technology Review's What's Next series, looking across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.
    Mark Zuckerberg and the power of the media
    On Tuesday last week, Meta CEO Mark Zuckerberg announced that Meta is done with fact checking in the US, that it will roll back restrictions on speech, and that it is going to start showing people more tailored political content in their feeds. While the end of fact checking has gotten most of the attention, the changes to its hateful speech policy are also notable. Zuckerberg, whose previous self-acknowledged mistakes include the Cambridge Analytica data scandal and helping to fuel a genocide in Myanmar, presented Facebook's history of fact-checking and content moderation as something he was pressured into doing by the government and media. The reality, of course, is that these were his decisions. He famously calls the shots, and always has. Read the full story. Mat Honan
    This story first appeared in The Debrief, providing a weekly take on the tech news that really matters and links to stories we love, as well as the occasional recommendation. Sign up to receive it in your inbox every Friday.
    Here's our forecast for AI this year
    In December, our small but mighty AI reporting team was asked by our editors to make a prediction: What's coming next for AI? As we look ahead, certain things are a given. We know that agents, AI models that do more than just converse with you and can actually go off and complete tasks for you, are the focus of many AI companies right now. Similarly, the need to make AI faster and more energy efficient is putting so-called small language models in the spotlight. However, the other predictions were not so clear-cut. Read the full story. James O'Donnell
    This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here. To witness the fallout from the AI team's lively debates (and hear more about what didn't make the list), you can join our upcoming LinkedIn Live this Thursday, January 16 at 12.30pm ET. James will be talking it all over with Will Douglas Heaven, our senior editor for AI, and our news editor, Charlotte Jee.
    The must-reads
    I've combed the internet to find you today's most fun/important/scary/fascinating stories about technology.
    1 China is considering selling TikTok to Elon Musk. But it's unclear how likely an outcome that really is. (Bloomberg $)
    + It's certainly one way of allowing TikTok to remain in the US. (WSJ $)
    + For what it's worth, TikTok has dismissed the report as pure fiction. (Variety $)
    + Xiaohongshu, also known as RedNote, is dealing with an influx of American users. (WP $)
    2 Amazon drivers are still delivering packages amid LA fires. They're dropping off parcels even after neighborhoods have been instructed to evacuate. (404 Media)
    3 Alexa is getting a generative AI makeover. Amazon is racing to turn its digital assistant into an AI agent. (FT $)
    + What are AI agents? (MIT Technology Review)
    4 Animal manure is a major climate problem. Unfortunately, turning it into energy is easier said than done. (Vox)
    + How poop could help feed the planet. (MIT Technology Review)
    5 Power lines caused many of California's worst fires. Thousands of blazes have been traced back to power infrastructure in recent decades. (NYT $)
    + Why some homes manage to withstand wildfires. (Bloomberg $)
    + The quest to build wildfire-resistant homes. (MIT Technology Review)
    6 Barcelona is a hotbed of spyware startups. Researchers are increasingly concerned about its creep across Europe. (TechCrunch)
    7 Mastodon's founder doesn't want to follow in Mark Zuckerberg's footsteps. Eugen Rochko has restructured the company to ensure it could never be controlled by a single individual. (Ars Technica)
    + He's made it clear he doesn't want to end up like Elon Musk, either. (Engadget)
    8 Spare a thought for this Welsh would-be crypto millionaire. His 11-year quest to recover an old hard drive has come to a disappointing end. (Wired $)
    9 The unbearable banality of internet lexicon. It's giving nonsense. (The Atlantic $)
    10 You never know whether you'll get to see the northern lights or not. AI could help us to predict when they'll occur more accurately. (Vice)
    + Digital pictures make the lights look much more defined than they actually are. (NYT $)
    Quote of the day
    "Cutting fact checkers from social platforms is like disbanding your fire department."
    Alan Duke, co-founder of fact-checking outlet Lead Stories, criticizes Meta's decision to ax its US-based fact checkers as the groups attempt to slow viral misinformation spreading about the wildfires in California, CNN reports.
    The big story
    The world is moving closer to a new cold war fought with authoritarian tech (September 2022)
    Despite President Biden's assurances that the US is not seeking a new cold war, one is brewing between the world's autocracies and democracies, and technology is fueling it. Authoritarian states are following China's lead and are trending toward more digital rights abuses by increasing the mass digital surveillance of citizens, censorship, and controls on individual expression. And while democracies also use massive amounts of surveillance technology, it's the tech trade relationships between authoritarian countries that's enabling the rise of digitally enabled social control. Read the full story. Tate Ryan-Mosley
    We can still have nice things
    A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet 'em at me.)
    + Before indie sleaze, there was DIY counterculture site Buddyhead.
    + Did you know black holes don't actually suck anything in at all?
    + Science fiction is stuck in a loop, and can't seem to break its fixation with cyberpunk.
    + Every now and again, TV produces a perfect episode. Here's eight of them.
  • WWW.BUSINESSINSIDER.COM
    This 59-day, around-the-world train trip has a 4,000-person waitlist — see what the $124,150 vacation will be like
    Published: 2025-01-14T14:02:49Z
    Railbookers' 59-day, 12-country itinerary includes travel on seven luxury trains, including the iconic Venice Simplon-Orient-Express. (Joey Hadden/Business Insider)
    Both luxury trains and around-the-world vacations have been in high demand.
    Railbookers combined both into a 59-day, 12-country itinerary that includes travel on seven luxury trains.
    Railbookers' CEO said the $124,150-per-person trip had a 4,000-person waitlist.
    World cruises have been a hot commodity in the luxury travel industry. But if you're prone to seasickness (or don't have more than 100 PTO days to spend), Railbookers has a $124,150 alternative by luxury rail. The train-focused tour company's 59-day around-the-world vacation, departing in early September, includes travel on seven high-end trains to more than 20 cities and 12 countries.
    Throughout the four-continent trek, globetrotters would go on a safari in India's Ranthambore National Park, cruise the Ganges River, and receive a private tour of the Louvre Museum, all while traveling in bucket-list trains such as Belmond's Venice Simplon-Orient-Express. It's Railbookers' second year hosting a global itinerary, and travelers rail-y can't get enough.
    The itinerary includes six nights on India's Maharajas Express. (Marben/Shutterstock)
    Luxury trains have been in high demand over the last few years. This itinerary is no different. Frank Marini, president and CEO of Railbookers Group, told Business Insider that the trip had a 4,000-person waitlist ahead of its launch. (BI could not verify this.)
    "The demand was crazy," Andrew Channell, Railbookers' senior vice president of product and operations, told BI. "It's captured the imagination of a lot of people who said, 'I had no idea there was even a luxury train experience you could do there.'"
    Some wanted to book the full journey, while others wanted to reserve various legs. Marini said the trip is expected to sell out. Luxury train enthusiasts will likely recognize several in the itinerary.
    Travelers would spend 21 nights on trains, including three on Rovos Rail. (Rovos Rail)
    The trip starts in Vancouver, Canada, and concludes in Singapore. Guests would travel on seven luxury trains along the way, including three nights touring Scotland on Belmond's Royal Scotsman, two nights sightseeing Italy on the soon-to-debut La Dolce Vita Orient Express, and three nights on Rovos Rail. Between sleeper trains, guests would spend 32 nights at premium hotels, including Fairmonts in Canada and The Imperial in New Delhi.
    The itinerary includes two overnight stays in an Istanbul hotel before flying to New Delhi. (Shutterstock/borozentsev)
    The itinerary also requires six flights, five of which aren't included in the price. Excursions are, however, bundled into the $124,150-per-person cost. These activities include a private tour of Venice, Italy's Saint Mark's Basilica, a sunrise stop at the Taj Mahal, and the chance to see elephants and rhinos in South Africa's Pilanesberg National Park.
    A more than $2,000-per-day vacation may not be the cheapest global travel option.
    The Venice Simplon-Orient-Express is one of luxury travel company Belmond's most recognizable trains. (Belmond)
    The individual trains on Railbookers' itineraries aren't known to be ultra-affordable. Three nights on the Royal Scotsman in September (as the itinerary includes) go for about $22,400 per person. Similarly, a one-night Venice Simplon-Orient-Express trip from Verona, Italy, to Paris during the late-summer month starts at about $4,730 per person.
    Around-the-world vacations have been a hit in the cruise industry.
    Rocky Mountaineer's GoldLeaf-level travelers have amenities like a two-level coach with a glass dome. (Rocky Mountaineer)
    Several premium cruise lines, such as Regent Seven Seas, offer annual global voyages. The luxury cruise line's 132-night 2024 and 150-night 2025 world cruises were sold out in record times: three hours for the former, and before bookings formally opened for the latter.
    About one-third of the travelers who booked the 2025 itinerary, which started at $87,000 per person, were first-time Regent guests, signaling a growing demand for high-end extended itineraries, a spokesperson told BI.
    Railbookers' per-day cost may be more than triple that of Regent's, but it's a great express option if you, like many other wealthy travelers, have several luxury trains on your travel bucket list.
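    As a rough back-of-the-envelope check of the per-day comparison above, here is a short calculation using only the figures quoted in this piece (an illustrative sketch, not part of the original article):

    # Per-day cost check using the figures quoted above.
    railbookers_total, railbookers_days = 124_150, 59   # Railbookers' 59-day itinerary
    regent_total, regent_nights = 87_000, 150           # Regent's 2025 world cruise, starting fare

    railbookers_per_day = railbookers_total / railbookers_days   # about $2,104 per day
    regent_per_day = regent_total / regent_nights                # about $580 per day
    print(round(railbookers_per_day), round(regent_per_day), round(railbookers_per_day / regent_per_day, 1))

    That works out to roughly $2,100 a day for the train itinerary against roughly $580 a day for the cruise's starting fare, about 3.6 times as much, consistent with the "more than triple" comparison above.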