News and current events from around the globe. Since 1923.
-
TIME.COM

U.S. Antitrust Regulators Seek to Break Up Google, Force Sale of Chrome Browser
By Michael Liedtke / AP
Updated: November 21, 2024 3:05 AM EST | Originally published: November 21, 2024 12:05 AM EST

U.S. regulators want a federal judge to break up Google to prevent the company from continuing to squash competition through its dominant search engine, after a court found it had maintained an abusive monopoly over the past decade.

The proposed breakup, floated in a 23-page document filed late Wednesday by the U.S. Department of Justice, calls for sweeping punishments that would include a sale of Google's industry-leading Chrome web browser and impose restrictions to prevent Android from favoring its own search engine.

A sale of Chrome "will permanently stop Google's control of this critical search access point and allow rival search engines the ability to access the browser that for many users is a gateway to the internet," Justice Department lawyers argued in their filing.

Although regulators stopped short of demanding Google sell Android too, they asserted the judge should make it clear the company could still be required to divest its smartphone operating system if its oversight committee continues to see evidence of misconduct.

The broad scope of the recommended penalties underscores how severely regulators operating under President Joe Biden's administration believe Google should be punished, following an August ruling by U.S. District Judge Amit Mehta that branded the company as a monopolist. The Justice Department decision-makers who will inherit the case after President-elect Donald Trump takes office next year might not be as strident. The Washington, D.C. court hearings on Google's punishment are scheduled to begin in April, and Mehta is aiming to issue his final decision before Labor Day.

If Mehta embraces the government's recommendations, Google would be forced to sell its 16-year-old Chrome browser within six months of the final ruling. But the company certainly would appeal any punishment, potentially prolonging a legal tussle that has dragged on for more than four years.

Besides seeking a Chrome spinoff and a corralling of the Android software, the Justice Department wants the judge to ban Google from forging multibillion-dollar deals to lock in its dominant search engine as the default option on Apple's iPhone and other devices. It would also ban Google from favoring its own services, such as YouTube or its recently launched artificial intelligence platform, Gemini.

Regulators also want Google to license the search index data it collects from people's queries to its rivals, giving them a better chance at competing with the tech giant. On the commercial side of its search engine, Google would be required to provide more transparency into how it sets the prices that advertisers pay to be listed near the top of some targeted search results.

Kent Walker, Google's chief legal officer, lashed out at the Justice Department for pursuing "a radical interventionist agenda that would harm Americans and America's global technology" leadership. In a blog post, Walker warned the "overly broad proposal" would threaten personal privacy while undermining Google's early leadership in artificial intelligence, "perhaps the most important innovation of our time."

Wary of Google's increasing use of artificial intelligence in its search results, regulators also advised Mehta to ensure websites will be able to shield their content from Google's AI training techniques.

The measures, if they are ordered, threaten to upend a business expected to generate more than $300 billion in revenue this year.

"The playing field is not level because of Google's conduct, and Google's quality reflects the ill-gotten gains of an advantage illegally acquired," the Justice Department asserted in its recommendations. "The remedy must close this gap and deprive Google of these advantages."

It's still possible that the Justice Department could ease off attempts to break up Google, especially if Trump takes the widely expected step of replacing Assistant Attorney General Jonathan Kanter, who was appointed by Biden to oversee the agency's antitrust division. Although the case targeting Google was originally filed during the final months of Trump's first term in office, Kanter oversaw the high-profile trial that culminated in Mehta's ruling against Google. Working in tandem with Federal Trade Commission Chair Lina Khan, Kanter took a get-tough stance against Big Tech that triggered other attempted crackdowns on industry powerhouses such as Apple and discouraged many business deals from getting done during the past four years.

Trump recently expressed concerns that a breakup might destroy Google but didn't elaborate on alternative penalties he might have in mind. "What you can do without breaking it up is make sure it's more fair," Trump said last month. Matt Gaetz, the former Republican congressman whom Trump nominated to be the next U.S. Attorney General, has previously called for the breakup of Big Tech companies. Gaetz faces a tough confirmation hearing.

This latest filing gave Kanter and his team a final chance to spell out measures that they believe are needed to restore competition in search. It comes six weeks after Justice first floated the idea of a breakup in a preliminary outline of potential penalties.

But Kanter's proposal is already raising questions about whether regulators seek to impose controls that extend beyond the issues covered in last year's trial and, by extension, Mehta's ruling. Banning the default search deals that Google now pays more than $26 billion annually to maintain was one of the main practices that troubled Mehta in his ruling. It's less clear whether the judge will embrace the Justice Department's contention that Chrome needs to be spun out of Google, or that Android should be completely walled off from its search engine.

"It is probably going a little beyond," Syracuse University law professor Shubha Ghosh said of the Chrome breakup. "The remedies should match the harm, it should match the transgression. This does seem a little beyond that pale."

Google rival DuckDuckGo, whose executives testified during last year's trial, asserted the Justice Department is simply doing what needs to be done to rein in a brazen monopolist. "Undoing Google's overlapping and widespread illegal conduct over more than a decade requires more than contract restrictions: it requires a range of remedies to create enduring competition," Kamyl Bazbaz, DuckDuckGo's senior vice president of public affairs, said in a statement.

Trying to break up Google harks back to a similar punishment initially imposed on Microsoft a quarter century ago, following another major antitrust trial in which a federal judge decided the software maker had illegally used its Windows operating system for PCs to stifle competition. However, an appeals court overturned an order that would have broken up Microsoft, a precedent many experts believe will make Mehta reluctant to go down a similar road with the Google case.
-
Has AI Progress Really Slowed Down?
By Harry Booth
November 21, 2024 12:53 PM EST

(Photo: A laptop keyboard and ChatGPT on the App Store displayed on a phone screen, in an illustration photo taken in Krakow, Poland, on October 1, 2024. Jakub Porzycki / NurPhoto via Getty Images)

For over a decade, companies have bet on a tantalizing rule of thumb: that artificial intelligence systems would keep getting smarter if only they found ways to continue making them bigger. This wasn't merely wishful thinking. In 2017, researchers at Chinese technology firm Baidu demonstrated that pouring more data and computing power into machine learning algorithms yielded mathematically predictable improvements, regardless of whether the system was designed to recognize images, speech, or generate language. Noticing the same trend, in 2020, OpenAI coined the term "scaling laws," which has since become a touchstone of the industry.

This thesis prompted AI firms to bet hundreds of millions on ever-larger computing clusters and datasets. The gamble paid off handsomely, transforming crude text machines into today's articulate chatbots.

But now, that bigger-is-better gospel is being called into question. Last week, reports by Reuters and Bloomberg suggested that leading AI companies are experiencing diminishing returns on scaling their AI systems. Days earlier, The Information reported doubts at OpenAI about continued advancement after the unreleased Orion model failed to meet expectations in internal testing. The co-founders of Andreessen Horowitz, a prominent Silicon Valley venture capital firm, have echoed these sentiments, noting that increasing computing power is no longer yielding the same "intelligence improvements."

What are tech companies saying?

Still, many leading AI companies seem confident that progress is marching full steam ahead. In a statement, a spokesperson for Anthropic, developer of the popular chatbot Claude, said "we haven't seen any signs of deviations from scaling laws." OpenAI declined to comment. Google DeepMind did not respond to a request for comment. However, last week, after an experimental new version of Google's Gemini model took GPT-4o's top spot on a popular AI-performance leaderboard, the company's CEO, Sundar Pichai, posted to X saying "more to come."

Read more: The Researcher Trying to Glimpse the Future of AI

Recent releases paint a somewhat mixed picture. Anthropic has updated its medium-sized model, Sonnet, twice since its release in March, making it more capable than the company's largest model, Opus, which has not received such updates. In June, the company said Opus would be updated "later this year," but last week, speaking on the Lex Fridman podcast, co-founder and CEO Dario Amodei declined to give a specific timeline. Google updated its smaller Gemini Pro model in February, but the company's larger Gemini Ultra model has yet to receive an update. OpenAI's recently released o1-preview model outperforms GPT-4o in several benchmarks, but in others it falls short. o1-preview was reportedly called "GPT-4o with reasoning" internally, suggesting the underlying model is similar in scale to GPT-4.

Parsing the truth is complicated by competing interests on all sides. If Anthropic cannot produce more powerful models, "we've failed deeply as a company," Amodei said last week, offering a glimpse at the stakes for AI companies that have bet their futures on relentless progress. A slowdown could spook investors and trigger an economic reckoning. Meanwhile, Ilya Sutskever, OpenAI's former chief scientist and once an ardent proponent of scaling, now says performance gains from bigger models have plateaued. But his stance carries its own baggage: Sutskever's new AI startup, Safe Superintelligence Inc., launched in June with less funding and computational firepower than its rivals. A breakdown in the scaling hypothesis would conveniently help level the playing field.

"They had these things they thought were mathematical laws, and they're making predictions relative to those mathematical laws, and the systems are not meeting them," says Gary Marcus, a leading voice on AI and author of several books, including Taming Silicon Valley. He says the recent reports of diminishing returns suggest we have finally "hit a wall," something he's warned could happen since 2022. "I didn't know exactly when it would happen, and we did get some more progress. Now it seems like we are stuck," he says.

Have we run out of data?

A slowdown could be a reflection of the limits of current deep learning techniques, or simply that "there's not enough fresh data anymore," Marcus says. It's a hypothesis that has gained ground among some following AI closely. Sasha Luccioni, AI and climate lead at Hugging Face, says there are limits to how much information can be learned from text and images. She points to how people are more likely to misinterpret your intentions over text messaging, as opposed to in person, as an example of text data's limitations. "I think it's like that with language models," she says.

The lack of data is particularly acute in certain domains like reasoning and mathematics, where "we just don't have that much high-quality data," says Ege Erdil, senior researcher at Epoch AI, a nonprofit that studies trends in AI development. That doesn't mean scaling is likely to stop, just that scaling alone might be insufficient. "At every order of magnitude scale-up, different innovations have to be found," he says, noting that it does not mean AI progress will slow overall.

It's not the first time critics have pronounced scaling dead. "At every stage of scaling, there are always arguments," Amodei said last week. "The latest one we have today is, 'we're going to run out of data, or the data isn't high quality enough, or models can't reason' ... I've seen the story happen for enough times to really believe that probably the scaling is going to continue," he said. Reflecting on OpenAI's early days on Y Combinator's podcast, company CEO Sam Altman has likewise defended scaling; responding to a post on X from Marcus saying his predictions of diminishing returns were right, Altman posted that "there is no wall."

Though there could be another reason we may be hearing echoes of new models failing to meet internal expectations, says Jaime Sevilla, director of Epoch AI. Following conversations with people at OpenAI and Anthropic, he came away with a sense that people had extremely high expectations. "They expected AI was going to be able to, already write a PhD thesis," he says. "Maybe it feels a bit... anti-climactic."

A temporary lull does not necessarily signal a wider slowdown, Sevilla says. History shows significant gaps between major advances: GPT-4, released just 19 months ago, itself arrived 33 months after GPT-3. "We tend to forget that GPT-3 from GPT-4 was like 100x scale in compute," Sevilla says. If you want to do something like 100 times bigger than GPT-4, "you're gonna need up to a million GPUs," Sevilla says. That is bigger than any known cluster currently in existence, though he notes that there have been concerted efforts to build AI infrastructure this year, such as Elon Musk's 100,000-GPU supercomputer in Memphis (the largest of its kind), which was reportedly built from start to finish in three months.

In the interim, AI companies are likely exploring other methods to improve performance after a model has been trained. OpenAI's o1-preview has been heralded as one such example: it outperforms previous models on reasoning problems by being allowed more time to think. "This is something we already knew was possible," Sevilla says, gesturing to an Epoch AI report published in July 2023.

Policy and geopolitical implications

Prematurely diagnosing a slowdown could have repercussions beyond Silicon Valley and Wall Street. The perceived speed of technological advancement following GPT-4's release prompted an open letter calling for a six-month pause on the training of larger systems to give researchers and governments a chance to catch up. The letter garnered over 30,000 signatories, including Musk and Turing Award recipient Yoshua Bengio. It's an open question whether a perceived slowdown could have the opposite effect, causing AI safety to slip from the agenda.

Much of the U.S.'s AI policy has been built on the belief that AI systems would continue to balloon in size. A provision in Biden's sweeping executive order on AI, signed in October 2023 (and expected to be repealed by the Trump White House), required AI developers to share information with the government regarding models trained using computing power above a certain threshold. That threshold was set above the largest models available at the time, under the assumption that it would target future, larger models. This same assumption underpins export restrictions (restrictions on the sale of AI chips and technologies to certain countries) designed to limit China's access to the powerful semiconductors needed to build large AI models. However, if breakthroughs in AI development begin to rely less on computing power and more on factors like better algorithms or specialized techniques, these restrictions may have a smaller impact on slowing China's AI progress.

"The overarching thing that the U.S. needs to understand is that to some extent, export controls were built on a theory of timelines of the technology," says Scott Singer, a visiting scholar in the Technology and International Affairs Program at the Carnegie Endowment for International Peace. In a world where the U.S. "stalls at the frontier," he says, "we could see a national push to drive breakthroughs in AI." He says a slip in the U.S.'s perceived lead in AI could spur a greater willingness to negotiate with China on safety principles.

Whether we're seeing a genuine slowdown or just another pause ahead of a leap remains to be seen. "It's unclear to me that a few months is a substantial enough reference point," Singer says. "You could hit a plateau and then hit extremely rapid gains."
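The "scaling laws" discussed above describe smooth power-law relationships between training resources and model error: plotted on log-log axes, loss falls along a straight line as compute grows, which is what made improvements "mathematically predictable." As a rough illustration of that idea only, here is a minimal curve-fit sketch; all the numbers below are invented for the demonstration and are not from the article or any real model.

```python
import numpy as np

# Hypothetical (invented) training-compute vs. loss measurements.
# A scaling law has the form loss ≈ a * C**(-b), which is a straight
# line in log-log space, so a degree-1 fit recovers the exponent.
compute = np.array([1e18, 1e19, 1e20, 1e21])  # FLOPs (made up)
loss = np.array([4.0, 3.2, 2.56, 2.048])      # loss drops 20% per decade of compute

# Fit log10(loss) = log10(a) - b * log10(C)
slope, intercept = np.polyfit(np.log10(compute), np.log10(loss), 1)
b = -slope          # power-law exponent
a = 10 ** intercept # scale constant

# Extrapolate: predicted loss at 10x more compute than the largest run
predicted = a * (1e22) ** (-b)
print(round(b, 3))          # 0.097
print(round(predicted, 3))  # 1.638
```

The point of the sketch is the shape of the reasoning, not the numbers: as long as measured losses keep landing on the fitted line, bigger runs pay off predictably; the "diminishing returns" reports amount to claims that new data points are starting to fall off that line.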
-
U.S. Gathers Global Group to Tackle AI Safety Amid Growing National Security Concerns
By Tharin Pillay / San Francisco
November 21, 2024 1:00 AM EST

(Photo: U.S. Commerce Secretary Gina Raimondo at the inaugural convening of the International Network of AI Safety Institutes in San Francisco on Nov. 20, 2024. Jeff Chiu / AP)

"AI is a technology like no other in human history," U.S. Commerce Secretary Gina Raimondo said on Wednesday in San Francisco. "Advancing AI is the right thing to do, but advancing as quickly as possible, just because we can, without thinking of the consequences, isn't the smart thing to do."

Raimondo's remarks came during the inaugural convening of the International Network of AI Safety Institutes, a network of artificial intelligence safety institutes (AISIs) from nine nations as well as the European Commission, brought together by the U.S. Departments of Commerce and State. The event gathered technical experts from government, industry, academia, and civil society to discuss how to manage the risks posed by increasingly capable AI systems.

Raimondo suggested participants keep two principles in mind: "We can't release models that are going to endanger people," she said. "Second, let's make sure AI is serving people, not the other way around."

The convening marks a significant step forward in international collaboration on AI governance. The first AISIs emerged last November during the inaugural AI Safety Summit hosted by the U.K. Both the U.K. and the U.S. governments announced the formation of their respective AISIs as a means of giving their governments the technical capacity to evaluate the safety of cutting-edge AI models. Other countries followed suit; by May, at another AI summit in Seoul, Raimondo had announced the creation of the network.

In a joint statement, the members of the International Network of AI Safety Institutes, which includes AISIs from the U.S., U.K., Australia, Canada, France, Japan, Kenya, South Korea, and Singapore, laid out their mission: "to be a forum that brings together technical expertise from around the world," "to facilitate a common technical understanding of AI safety risks and mitigations based upon the work of our institutes and of the broader scientific community," and "to encourage a general understanding of and approach to AI safety globally, that will enable the benefits of AI innovation to be shared amongst countries at all stages of development."

In the lead-up to the convening, the U.S. AISI, which serves as the network's inaugural chair, also announced a new government taskforce focused on the technology's national security risks. The Testing Risks of AI for National Security (TRAINS) Taskforce brings together representatives from the Departments of Defense, Energy, Homeland Security, and Health and Human Services. It will be chaired by the U.S. AISI, and aims to "identify, measure, and manage the emerging national security and public safety implications of rapidly evolving AI technology," with a particular focus on radiological and nuclear security, chemical and biological security, cybersecurity, critical infrastructure, and conventional military capabilities.

The push for international cooperation comes at a time of increasing tension around AI development between the U.S. and China, whose absence from the network is notable. In remarks pre-recorded for the convening, Senate Majority Leader Chuck Schumer emphasized the importance of ensuring that the Chinese Communist Party does not "get to write the rules of the road." Earlier Wednesday, Chinese lab DeepSeek announced a new reasoning model thought to be the first to rival OpenAI's own reasoning model, o1, which the company says is designed to spend more time thinking before it responds.

On Tuesday, the U.S.-China Economic and Security Review Commission, which has provided annual recommendations to Congress since 2000, recommended that Congress establish and fund a "Manhattan Project-like program" dedicated to racing to and acquiring an Artificial General Intelligence (AGI) capability, which the commission defined as systems "as good as or better than human capabilities across all cognitive domains" that "would surpass the sharpest human minds at every task."

Many experts in the field, such as Geoffrey Hinton, who earlier this year won a Nobel Prize in physics for his work on artificial intelligence, have expressed concerns that, should AGI be developed, humanity may not be able to control it, which could lead to catastrophic harm. In a panel discussion at Wednesday's event, Anthropic CEO Dario Amodei, who believes AGI-like systems could arrive as soon as 2026, cited "loss of control" risks as a serious concern, alongside the risks that future, more capable models are misused by malicious actors to perpetrate bioterrorism or undermine cybersecurity. Responding to a question, Amodei expressed unequivocal support for making the testing of advanced AI systems mandatory, noting "we also need to be really careful about how we do it."

Meanwhile, practical international collaboration on AI safety is advancing. Earlier in the week, the U.S. and U.K. AISIs shared preliminary findings from their pre-deployment evaluation of an advanced AI model: the upgraded version of Anthropic's Claude 3.5 Sonnet. The evaluation focused on assessing the model's biological and cyber capabilities, as well as its performance on software and development tasks, and the efficacy of the safeguards built into it to prevent the model from responding to harmful requests. Both the U.K. and U.S. AISIs found that these safeguards could be routinely circumvented, which they noted is consistent with prior research on the vulnerability of other AI systems' safeguards.

The San Francisco convening set out three priority topics that stand to "urgently benefit from international collaboration": managing risks from synthetic content, testing foundation models, and conducting risk assessments for advanced AI systems. Ahead of the convening, $11 million of funding was announced to support research into how best to mitigate risks from synthetic content (such as the generation and distribution of child sexual abuse material, and the facilitation of fraud and impersonation). The funding was provided by a mix of government agencies and philanthropic organizations, including the Republic of Korea and the Knight Foundation.

While it is unclear how the election victory of Donald Trump will impact the future of the U.S. AISI and American AI policy more broadly, international collaboration on the topic of AI safety is set to continue. The U.K. AISI is hosting another San Francisco-based conference this week, in partnership with the Centre for the Governance of AI, "to accelerate the design and implementation of frontier AI safety frameworks." And in February, France will host its AI Action Summit, following the summits held in Seoul in May and in the U.K. last November. The 2025 AI Action Summit will gather leaders from the public and private sectors, academia, and civil society, as actors across the world seek to find ways to govern the technology as its capabilities accelerate.

Raimondo on Wednesday emphasized the importance of integrating safety with innovation "when it comes to something as rapidly advancing and as powerful as AI." "It has the potential to replace the human mind," she said.
-
Landmark Bill to Ban Children From Social Media Introduced in Australia's Parliament
By Rod McGuirk / AP
Updated: November 21, 2024 3:30 AM EST | Originally published: November 21, 2024 2:30 AM EST

MELBOURNE - Australia's communications minister introduced a world-first law into Parliament on Thursday that would ban children under 16 from social media, saying online safety was one of parents' toughest challenges.

Michelle Rowland said TikTok, Facebook, Snapchat, Reddit, X and Instagram were among the platforms that would face fines of up to 50 million Australian dollars ($33 million) for systemic failures to prevent young children from holding accounts.

"This bill seeks to set a new normative value in society that accessing social media is not the defining feature of growing up in Australia," Rowland told Parliament. "There is wide acknowledgement that something must be done in the immediate term to help prevent young teens and children from being exposed to streams of content, unfiltered and infinite," she added.

X owner Elon Musk warned that Australia intended to go further, posting on his platform: "Seems like a backdoor way to control access to the Internet by all Australians."

The bill has wide political support. After it becomes law, the platforms would have one year to work out how to implement the age restriction.

"For too many young Australians, social media can be harmful," Rowland said. Almost two-thirds of 14- to 17-year-old Australians have viewed extremely harmful content online, including drug abuse, suicide or self-harm, as well as violent material. One quarter have been exposed to content promoting unsafe eating habits.

Government research found that 95% of Australian caregivers find online safety to be one of their toughest parenting challenges, she said. Social media platforms had a social responsibility and could do better in addressing harms, she added.

"This is about protecting young people, not punishing or isolating them, and letting parents know that we're in their corner when it comes to supporting their children's health and wellbeing," Rowland said.

Child welfare and internet experts have raised concerns about the ban, including that it could isolate 14- and 15-year-olds from their already established online social networks.

Rowland said there would not be age restrictions placed on messaging services, online games, or platforms that substantially support the health and education of users.

"We are not saying risks don't exist on messaging apps or online gaming. While users can still be exposed to harmful content by other users, they do not face the same algorithmic curation of content and psychological manipulation to encourage near-endless engagement," she said.

The government announced last week that a consortium led by British company Age Check Certification Scheme has been contracted to examine various technologies to estimate and verify ages.

In addition to removing children under 16 from social media, Australia is also looking for ways to prevent children under 18 from accessing online pornography, a government statement said.

Age Check Certification Scheme's chief executive Tony Allen said Monday the technologies being considered included age estimation and age inference. Inference involves establishing a series of facts about individuals that point to them being at least a certain age.

Rowland said the platforms would also face fines of up to AU$50 million ($33 million) if they misused personal information of users gained for age-assurance purposes. Information used for age assurance must be destroyed after serving that purpose, unless the user consents to it being kept, she said.

Digital Industry Group Inc., an advocate for the digital industry in Australia, said that with Parliament expected to vote on the bill next week, there might not be time for meaningful consultation on the details of the globally unprecedented legislation.

"Mainstream digital platforms have strict measures in place to keep young people safe, and a ban could push young people on to darker, less safe online spaces that don't have safety guardrails," DIGI managing director Sunita Bose said in a statement. "A blunt ban doesn't encourage companies to continually improve safety, because the focus is on keeping teenagers off the service, rather than keeping them safe when they're on it."
-
TIME.COMThere Is a Solution to AIs Existential Risk ProblemIdeasBy Otto BartenNovember 15, 2024 7:11 AM ESTOtto Barten is director of the Existential Risk Observatory, a nonprofit aiming to reduce existential risk by informing the public debate.Technological progress can excite us, politics can infuriate us, and wars can mobilize us. But faced with the risk of human extinction that the rise of artificial intelligence is causing, we have remained surprisingly passive. In part, perhaps this was because there did not seem to be a solution. This is an idea I would like to challenge.AIs capabilities are ever-improving. Since the release of ChatGPT two years ago, hundreds of billions of dollars have poured into AI. These combined efforts will likely lead to Artificial General Intelligence (AGI), where machines have human-like cognition, perhaps within just a few years. Hundreds of AI scientists think we might lose control over AI once it gets too capable, which could result in human extinction. So what can we do? Read More: What Donald Trump's Win Means For AIThe existential risk of AI has often been presented as extremely complex. A 2018 paper, for example, called the development of safe human-level AI a super wicked problem. This perceived difficulty had much to do with the proposed solution of AI alignment, which entails making superhuman AI act according to humanitys values. AI alignment, however, was a problematic solution from the start.First, scientific progress in alignment has been much slower than progress in AI itself. Second, the philosophical question of which values to align a superintelligence to is incredibly fraught. Third, it is not at all obvious that alignment, even if successful, would be a solution to AIs existential risk. Having one friendly AI does not necessarily stop other unfriendly ones.Because of these issues, many have urged technology companies not to build any AI that humanity could lose control over. 
Some have gone further; activist groups such as PauseAI have indeed proposed an international treaty that would pause development globally. That is not seen as politically palatable by many, since it may still take a long time before the missing pieces to AGI are filled in. And do we have to pause already, when this technology can also do a lot of good? Yann Lecun, AI chief at Meta and prominent existential risk skeptic, says that the existential risk debate is like worrying about turbojet safety in 1920.On the other hand, technology can leapfrog. If we get another breakthrough such as the transformer, a 2017 innovation which helped launch modern Large Language Models, perhaps we could reach AGI in a few months training time. Thats why a regulatory framework needs to be in place before then.Fortunately, Nobel Laureate Geoffrey Hinton, Turing Award winner Yoshua Bengio, and many others have provided a piece of the solution. In a policy paper published in Science earlier this year, they recommended if-then commitments: commitments to be activated if and when red-line capabilities are found in frontier AI systems.Building upon their work, we at the nonprofit Existential Risk Observatory propose a Conditional AI Safety Treaty. Signatory countries of this treaty, which should include at least the U.S. and China, would agree that once we get too close to loss of control they will halt any potentially unsafe training within their borders. Once the most powerful nations have signed this treaty, it is in their interest to verify each others compliance, and to make sure uncontrollable AI is not built elsewhere, either. One outstanding question is at what point AI capabilities are too close to loss of control. We propose to delegate this question to the AI Safety Institutes set up in the U.K., U.S., China, and other countries. They have specialized model evaluation know-how, which can be developed further to answer this crucial question. 
Also, these institutes are public, making them independent from the mostly private AI development labs. The question of how close is too close to losing control will remain difficult, but someone will need to answer it, and the AI Safety Institutes are best positioned to do so.

We can mostly still get the benefits of AI under the Conditional AI Safety Treaty. All current AI is far below loss-of-control level, and will therefore be unaffected. Narrow AIs in the future that are suitable for a single task, such as climate modeling or finding new medicines, will be unaffected as well. Even more general AIs can still be developed, if labs can demonstrate to a regulator that their model has a loss-of-control risk of less than, say, 0.002% per year (the safety threshold we accept for nuclear reactors). Other AI thinkers, such as MIT professor Max Tegmark, Conjecture CEO Connor Leahy, and ControlAI director Andrea Miotti, are thinking in similar directions.

Fortunately, the existential risks posed by AI are recognized by many close to President-elect Donald Trump. His daughter Ivanka seems to see the urgency of the problem. Elon Musk, a critical Trump backer, has been outspoken about the civilizational risks for many years, and recently supported California's legislative push to safety-test AI. Even the right-wing Tucker Carlson provided common-sense commentary when he said: "So I don't know why we're sitting back and allowing this to happen, if we really believe it will extinguish the human race or enslave the human race. Like, how can that be good?" For his part, Trump has expressed concern about the risks posed by AI, too.

The Conditional AI Safety Treaty could provide a solution to AI's existential risk, while not unnecessarily obstructing AI development right now.
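For readers curious what the 0.002%-per-year threshold mentioned above implies over longer horizons, here is a brief back-of-envelope sketch. This is my own illustration, not from the article or the Existential Risk Observatory, and it assumes each year is an independent trial at exactly the threshold rate:

```python
# Illustration only (not from the article): compounding an annual
# loss-of-control risk of 0.002% per year, the figure the author
# borrows from nuclear reactor safety standards.

def cumulative_risk(years: int, annual_risk: float = 0.00002) -> float:
    """Probability of at least one event over `years` years,
    assuming independent, identically likely years."""
    return 1 - (1 - annual_risk) ** years

# Even over a 50-year horizon, the compounded risk stays near 0.1%.
print(f"{cumulative_risk(50):.4%}")
```

Under these assumptions the compounded risk remains small even over decades, which helps explain why the author treats the threshold as compatible with continued AI development.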
Getting China and other countries to accept and enforce the treaty will no doubt be a major geopolitical challenge, but perhaps a Trump government is exactly what is needed to overcome it.

A solution to one of the toughest problems we face, the existential risk of AI, does exist. It is up to us whether we make it happen, or continue to go down the path toward possible human extinction.
TIME.COM
Why a Technocracy Fails Young People

As a chaplain at Harvard and MIT, I have been particularly concerned when talking to young people, who hope to be the next generation of American leaders. What moral lessons should they draw from the 2024 election? Elite institutions like those I serve have, after all, spent generations teaching young people to pursue leadership and success above all else. And, well, the former-turned-next POTUS has become one of the most successful political leaders of this century.

The electoral resurrection of a convicted felon whose own former chief of staff, a former Marine Corps General no less, likened him to a fascist, requires far more than a re-evaluation of Democratic Party policies. It demands a re-examination of our entire society's ethical, and even spiritual, priorities.

It's not that students on campuses like mine want to be the next Trump (though he did win a majority among white, male, college-educated voters). It is, however, common for them to idolize billionaire tech entrepreneurs like Elon Musk and Peter Thiel. Both Musk and Thiel factored significantly in Trump and Vance's victory; both will be handsomely rewarded for their support.

But is a technocracy the best we can do as a model for living a meaningful life today? It is past time to recognize that the digital technologies with which many of us now interact from the moment we wake until the moment we drift into sleep (and often beyond that) have ceased to be mere tools. Just as we went from users to being the products by which companies like Facebook and Google make trillions in advertising revenue, we now have become the tools by which certain technologists can realize their grandest financial and political ambitions.

Policy reform alone, while necessary, won't save us. But neither will tech figures like Musk or Thiel.
In fact, we need an alternative to an archetype that I like to call "The Drama of the Gifted Technologist," of which Musk, Thiel, and other tech leaders have become avatars.

Based on the ideas of the noted 20th-century psychologist Alice Miller, and on my observation of the inner lives of many of the world's most gifted students, the "Drama of the Gifted Technologist" starts with the belief that one is only "enough," or worthy of love and life, if one achieves extraordinary things, namely through leadership in tech or social media clout.

I've seen this "drama" become a kind of official psychopathology of the Ivy League and Silicon Valley. It began, in some ways, with the accumulation of friends on Facebook over a decade ago, to gain social relevance. And it has now graduated to become the psychological and even spiritual dynamic driving the current AI arms race, also known as "accelerationism." See, for example, the influential billionaire venture capitalist and AI cheerleader Marc Andreessen's famous "Techno-Optimist Manifesto," which uses the phrase "we believe" 133 times, arguing that "any deceleration of AI will cost lives..." and that "AI that was prevented from existing is a form of murder." Or Sam Altman's urgent quest for trillions of dollars to create a world of AI "abundance," consequences for the climate, democracy, or, say, biological weapons be damned. Or Thiel's belief that one needs a "near-messianic attitude" to succeed in venture capital. Or young men's hero worship of tech "genius" figures like Musk, who, as former Twitter owner Jack Dorsey said, is "the singular solution": the one man to single-handedly take humanity beyond earth, into the stars.

And why wouldn't the drama of the gifted technologist appeal to young people?
They live, after all, in a world so unequal, with a future so uncertain, that the fortunate few really do live lives of grandeur in comparison to the precarity and struggle others face.

Read More: Inside Elon Musk's Struggle for the Future of AI

Still, some might dismiss these ideas as mere hype and bluster. I'd love to do so, too. But I've heard far too many "confessions" reminiscent of what famous AI "doomer" Eliezer Yudkowsky once said most starkly and alarmingly: that "ambitious people would rather destroy the world than never amount to anything."

Of course, I'm not saying that the aspiring leaders I work with are feeling so worthless and undeserving that they put themselves on a straight path from aspirational tech leadership toward world-destruction. Plenty are wonderful human beings. But it doesn't take many hollow young men to destroy, if not the whole world, then at least far too much of it. Ultimately, many gifted young adults are feeling extraordinarily normal feelings: Fear. Loneliness. Grief. But because their drama doesn't permit them to simply be normal, they too often look for ways to dominate others, rather than connect with them in humble mutual solidarity.

In the spring of 2023, I sat and discussed all this over a long lunch with a group of about 20 soon-to-graduate students at Harvard's Kennedy School of Government. The students, in many cases deeply anxious about their individual and collective futures, asked me to advise them on how to envision and build ethical, meaningful, and sustainable lives in a world in which technological (and climate) change was causing them a level of uncertainty that was destabilizing at best, debilitating at worst. I suggested they view themselves as having inherent worth and value, simply for existing. Hearing that, one of the students responded, with laudable honesty and forthrightness, that she found that idea laughable.

I don't blame her for laughing.
It truly can be hard to accept oneself unconditionally, at a time of so much dehumanization. Many students I meet find it much easier to simply work harder. Ironically, their belief that tech success and wealth will save them strikes me as a kind of digital puritanism: a secularized version of the original Puritanism that founded Harvard College in the 1630s, in which you were either considered one of the world's few true elites, bound for Heaven, or, if not, your destiny was the fire-and-brimstone vision of Hell. Perhaps tech's hierarchies aren't quite as extreme as traditional Puritanism's, which allowed no way to alter destiny, and where the famous "Protestant work ethic" was merely an indicator of one's obvious predestination. But given the many ways in which today's tech is worsening social inequality, the difference isn't exactly huge.

The good news? Many reformers are actively working to make tech more humane.

Among those is MacArthur fellow and scholar of tech privacy Danielle Citron, an expert in online abuse, who told me she worries that gifted technologists can lose their way behind screens, because they don't see the people whom they hurt.

To build a society for future cyborgs as one's goal, Citron continued, suggests that these folks "don't have real, flesh and blood relationships, where we see each other in the way that Martin Buber described."

Buber, an influential Jewish philosopher whose career spanned the decades before and after the Holocaust, was best known for his idea, first fully expressed in his 1923 essay "I and Thou," that human life finds its meaning in relationships, and that the world would be better if each of us imagined our flesh-and-blood connections with one another, rather than achievements or technologies, as the ultimate expression of our connection to the divine.

Indeed. I don't happen to share Buber's belief in a divine authority; I'm an atheist and humanist. But I share Buber's faith in the sacredness of human interrelationship.
And I honor any form of contemporary spiritual teaching, religious or not, that reminds us to place justice, and one another's well-being, over ambition, or winning.

We are not digital beings. We are not chatbots, optimized for achievement and sent to conquer this country and then colonize the stars through infinite data accumulation. We are human beings who care deeply about one another because we care about ourselves. Our very existence, as people capable of loving and being loved, is what makes us worthy of the space we occupy, here in this country, on this planet, and on any other planet we may someday find ourselves inhabiting.
TIME.COM
Bluesky Adds 1 Million New Users Since U.S. Election, as People Seek Alternatives to X
By Sarah Parvini / AP
November 13, 2024 9:45 PM EST

Social media site Bluesky has gained 1 million new users in the week since the U.S. election, as some X users look for an alternative platform to post their thoughts and engage with others online.

Bluesky said Wednesday that its total users surged to 15 million, up from roughly 13 million at the end of October.

Championed by former Twitter CEO Jack Dorsey, Bluesky was an invitation-only space until it opened to the public in February. That invite-only period gave the site time to build out moderation tools and other features. The platform resembles Elon Musk's X, with a discover feed as well as a chronological feed for accounts that users follow. Users can send direct messages and pin posts, as well as find "starter packs" that provide a curated list of people and custom feeds to follow.

The post-election uptick in users isn't the first time that Bluesky has benefited from people leaving X. Bluesky gained 2.6 million users in the week after X was banned in Brazil in August, 85% of them from Brazil, the company said. About 500,000 new users signed up in the span of one day last month, when X signaled that blocked accounts would be able to see a user's public posts.

Despite Bluesky's growth, X posted last week that it had "dominated the global conversation on the U.S. election" and had set new records. The platform saw a 15.5% jump in new-user signups on Election Day, X said, with a record 942 million posts worldwide.
Representatives for Bluesky and for X did not respond to requests for comment.

Bluesky has referenced its competitive relationship with X through tongue-in-cheek comments, including an Election Day post on X referencing Musk watching voting results come in with President-elect Donald Trump.

"I can guarantee that no Bluesky team members will be sitting with a presidential candidate tonight and giving them direct access to control what you see online," Bluesky said.

Across the platform, new users, among them journalists, left-leaning politicians, and celebrities, have posted memes and shared that they were looking forward to using a space free from advertisements and hate speech. Some said it reminded them of the early days of X, when it was still Twitter.

On Wednesday, The Guardian said it would no longer post on X, citing "far right conspiracy theories and racism" on the site as a reason. At the same time, television journalist Don Lemon posted on X that he is leaving the platform but will continue to use other social media, including Bluesky.

Lemon said he felt X was no longer a place for honest debate and discussion. He noted changes to the site's terms of service, set to go into effect Friday, that state lawsuits against X must be filed in the U.S. District Court for the Northern District of Texas rather than the Western District of Texas. Musk said in July that he was moving X's headquarters to Texas from San Francisco.

"As the Washington Post recently reported on X's decision to change the terms, this ensures that such lawsuits will be heard in courthouses that are a hub for conservatives, which experts say could make it easier for X to shield itself from litigation and punish critics," Lemon wrote.
"I think that speaks for itself."

Last year, advertisers such as IBM, NBCUniversal, and its parent company Comcast fled X over concerns about their ads showing up next to pro-Nazi content and hate speech on the site in general, with Musk inflaming tensions with his own posts endorsing an antisemitic conspiracy theory.
TIME.COM
What Donald Trump's Win Means For AI

When Donald Trump was last President, ChatGPT had not yet been launched. Now, as he prepares to return to the White House after defeating Vice President Kamala Harris in the 2024 election, the artificial intelligence landscape looks quite different.

AI systems are advancing so rapidly that some leading executives of AI companies, such as Anthropic CEO Dario Amodei and Elon Musk, the Tesla CEO and a prominent Trump backer, believe AI may become smarter than humans by 2026. Others offer a more general timeframe. In an essay published in September, OpenAI CEO Sam Altman said, "It is possible that we will have superintelligence in a few thousand days," but also noted that it may take longer. Meanwhile, Meta CEO Mark Zuckerberg sees the arrival of these systems as more of a gradual process rather than a single moment.

Either way, such advances could have far-reaching implications for national security, the economy, and the global balance of power.

Trump's own pronouncements on AI have fluctuated between awe and apprehension. In a June appearance on Logan Paul's Impaulsive podcast, he described AI as a "superpower" and called its capabilities "alarming." And like many in Washington, he views the technology through the lens of competition with China, which he sees as the primary threat in the race to build advanced AI. Yet even his closest allies are divided on how to govern the technology: Musk has long voiced concerns about AI's existential risks, while J.D. Vance, Trump's Vice President, sees such warnings from industry as a ploy to usher in regulations that would entrench the tech incumbents. These divisions among Trump's confidants hint at the competing pressures that will shape AI policy during Trump's second term.

Undoing Biden's AI legacy

Trump's first major AI move is likely to be the repeal of Joe Biden's Executive Order on AI.
The sweeping order, signed in October 2023, sought to address threats the technology could pose to civil rights, privacy, and national security, while promoting innovation, competition, and the use of AI for public services.

Trump promised to repeal the Executive Order on the campaign trail in December 2023, and this position was reaffirmed in the Republican Party platform in July, which criticized the executive order for hindering innovation and imposing "radical leftwing ideas" on the technology's development.

Sections of the Executive Order which focus on racial discrimination or inequality are "not as much Trump's style," says Dan Hendrycks, executive and research director of the Center for AI Safety. While experts have criticized any rollback of bias protections, Hendrycks says the Trump Administration may preserve other aspects of Biden's approach. "I think there's stuff in [the Executive Order] that's very bipartisan, and then there's some other stuff that's more specifically Democrat-flavored," Hendrycks says.

"It would not surprise me if a Trump executive order on AI maintained or even expanded on some of the core national security provisions within the Biden Executive Order, building on what the Department of Homeland Security has done for evaluating cybersecurity, biological, and radiological risks associated with AI," says Samuel Hammond, a senior economist at the Foundation for American Innovation, a technology-focused think tank.

The fate of the U.S. AI Safety Institute (AISI), an institution created last November by the Biden Administration to lead the government's efforts on AI safety, also remains uncertain. In August, the AISI signed agreements with OpenAI and Anthropic to formally collaborate on AI safety research, and on the testing and evaluation of new models.
"Almost certainly, the AI Safety Institute is viewed as an inhibitor to innovation, which doesn't necessarily align with the rest of what appears to be Trump's tech and AI agenda," says Keegan McBride, a lecturer in AI, government, and policy at the Oxford Internet Institute. But Hammond says that while some fringe voices would move to shutter the institute, most Republicans are supportive of the AISI. "They see it as an extension of our leadership in AI."

Read more: What Trump's Win Means for Crypto

Congress is already working on protecting the AISI. In October, a broad coalition of companies, universities, and civil society groups, including OpenAI, Lockheed Martin, Carnegie Mellon University, and the nonprofit Encode Justice, signed a letter calling on key figures in Congress to urgently establish a legislative basis for the AISI. Efforts are underway in both the Senate and the House of Representatives, and both reportedly have "pretty wide bipartisan support," says Hamza Chaudhry, U.S. policy specialist at the nonprofit Future of Life Institute.

America-first AI and the race against China

Trump's previous comments suggest that maintaining the U.S.'s lead in AI development will be a key focus for his Administration. "We have to be at the forefront," he said on the Impaulsive podcast in June. "We have to take the lead over China." Trump also framed environmental concerns as potential obstacles, arguing they could "hold us back" in what he views as the race against China.

Trump's AI policy could include rolling back regulations to accelerate infrastructure development, says Dean Ball, a research fellow at George Mason University. "There's the data centers that are going to have to be built. The energy to power those data centers is going to be immense. I think even bigger than that: chip production," he says. "We're going to need a lot more chips."
Trump's campaign has at times attacked the CHIPS Act, which provides incentives for chip makers manufacturing in the U.S., though some analysts believe he is unlikely to repeal it.

Read more: What Donald Trump's Win Means for the Economy

Chip export restrictions are likely to remain a key lever in U.S. AI policy. Building on measures he initiated during his first term, which were later expanded by Biden, Trump may well strengthen controls that curb China's access to advanced semiconductors. "It's fair to say that the Biden Administration has been pretty tough on China, but I'm sure Trump wants to be seen as tougher," McBride says. It is quite likely that Trump's White House will double down on export controls in an effort to close gaps that have allowed China to access chips, says Scott Singer, a visiting scholar in the Technology and International Affairs Program at the Carnegie Endowment for International Peace. "The overwhelming majority of people on both sides think that the export controls are important," he says.

The rise of open-source AI presents new challenges. China has shown it can leverage U.S. systems, as demonstrated when Chinese researchers reportedly adapted an earlier version of Meta's Llama model for military applications. That's created a policy divide. "You've got people in the GOP that are really in favor of open-source," Ball says. "And then you have people who are 'China hawks' and really want to forbid open-source at the frontier of AI."

"My sense is that because a Trump platform has so much conviction in the importance and value of open-source, I'd be surprised to see a movement towards restriction," Singer says.

Despite his tough talk, Trump's deal-making impulses could shape his policy towards China. "I think people misunderstand Trump as a China hawk. He doesn't hate China," Hammond says, describing Trump's "transactional" view of international relations.
In 2018, Trump lifted restrictions on Chinese technology company ZTE in exchange for a $1.3 billion fine and increased oversight. Singer sees similar possibilities for AI negotiations, particularly if Trump accepts concerns held by many experts about AI's more extreme risks, such as the chance that humanity may lose control over future systems.

Trump's coalition is divided over AI

Debates over how to govern AI reveal deep divisions within Trump's coalition of supporters. Leading figures, including Vance, favor looser regulation of the technology. Vance has dismissed AI risk as an industry ploy to usher in new regulations that would "make it actually harder for new entrants to create the innovation that's going to power the next generation of American growth." Silicon Valley billionaire Peter Thiel, who served on Trump's 2016 transition team, recently cautioned against movements to regulate AI. Speaking at the Cambridge Union in May, he said any government with the authority to govern the technology would have a "global totalitarian character." Marc Andreessen, the co-founder of prominent venture capital firm Andreessen Horowitz, gave $2.5 million to a pro-Trump super political action committee, and an additional $844,600 to Trump's campaign and the Republican Party.

Yet a more safety-focused perspective has found other supporters in Trump's orbit. Hammond, who advised on the AI policy committee for Project 2025, a proposed policy agenda led by the right-wing think tank the Heritage Foundation, and not officially endorsed by the Trump campaign, says that within the people advising that project, "[there was a] very clear focus on artificial general intelligence and catastrophic risks from AI."

Musk, who has emerged as a prominent Trump campaign ally through both his donations and his promotion of Trump on his platform X (formerly Twitter), has long been concerned that AI could pose an existential threat to humanity.
Recently, Musk said he believes there's a 10% to 20% chance that AI "goes bad." In August, Musk posted on X supporting the now-vetoed California AI safety bill that would have put guardrails on AI developers. Hendrycks, whose organization co-sponsored the California bill, and who serves as safety adviser at xAI, Musk's AI company, says, "If Elon is making suggestions on AI stuff, then I expect it to go well." However, "there's a lot of basic appointments and groundwork to do, which makes it a little harder to predict," he says.

Trump has acknowledged some of the national security risks of AI. In June, he said he feared deepfakes of a U.S. President threatening a nuclear strike could prompt another state to respond, sparking a nuclear war. He also gestured to the idea that an AI system could "go rogue" and overpower humanity, but took care to distinguish this position from his personal view. However, for Trump, competition with China appears to remain the primary concern.

But these priorities aren't necessarily at odds, and AI safety regulation does not inherently entail ceding ground to China, Hendrycks says. He notes that safeguards against malicious use require minimal investment from developers: "You have to hire one person to spend, like, a month or two on engineering, and then you get your jailbreaking safeguards," he says. But with these competing voices shaping Trump's AI agenda, the direction of Trump's AI policy remains uncertain.

"In terms of which viewpoint President Trump and his team side towards, I think that is an open question, and that's just something we'll have to see," says Chaudhry. "Now is a pivotal moment."
TIME.COM
What Trump's Win Means For Crypto

This election cycle, the crypto industry poured over $100 million into races across the country, hoping to assert crypto's relevance as a voter issue and usher pro-crypto candidates into office. On Wednesday morning, almost all of the industry's wishes came true. Republican candidate Donald Trump, who has lavished praise upon Bitcoin this year, won handily against his Democratic opponent Kamala Harris. And crypto PACs scored major wins in House and Senate races, most notably in Ohio, where Republican Bernie Moreno defeated crypto skeptic Sherrod Brown.

As Trump's numbers ascended on Tuesday night, Bitcoin hit a new record high, topping $75,000. Crypto-related stocks, including Robinhood Markets and MicroStrategy, also leapt upward. Enthusiasts now believe that Trump's Administration will strip back regulation of the crypto industry, and that a favorable Congress will pass legislation that gives the industry more room to grow.

"This is a huge victory for crypto," Kristin Smith, the CEO of the Blockchain Association, a D.C.-based lobbying group, tells TIME. "I think we've really turned a corner, and we've got the right folks in place to get the policy settled once and for all."

Trump's crypto embrace

Many crypto fans supported Trump over Harris for several reasons. Trump spoke glowingly about crypto this year on the campaign trail, despite casting skepticism upon it for years. At the Bitcoin conference in Nashville in July, Trump floated the idea of establishing a federal Bitcoin reserve, and stressed the importance of bringing more Bitcoin mining operations to the U.S.

Read More: Inside the Health Crisis of a Texas Bitcoin Town

Perhaps most importantly, Trump vowed to oust Gary Gensler, the chair of the Securities and Exchange Commission (SEC), who has brought many lawsuits against crypto projects for allegedly violating securities laws. Gensler is a widely reviled figure in the crypto industry, with many accusing him of stifling innovation.
Gensler, conversely, argued that it was his job to protect consumers from the massive crypto collapses that unfolded in 2022, including Terra Luna and FTX.

Gensler's term isn't up until 2026, but some analysts expect him to resign once Trump takes office, as previous SEC chairs have done after the President who appointed them lost an election. A change in SEC leadership could allow many more crypto products to enter mainstream financial markets. For the past few years, the SEC had been hesitant to approve crypto ETFs: investment vehicles that allow people to bet on crypto without actually holding it. But a judge forced Gensler's hand, bringing Bitcoin ETFs onto the market in January. Now, under a friendlier SEC, ETFs based on smaller cryptocurrencies like Solana and XRP may be next.

Many crypto enthusiasts are also excited by Trump's alliance with Elon Musk, who has long championed cryptocurrencies on social media. On election night, Dogecoin, Musk's preferred meme coin, spiked 25% to 21 cents.

Impact in the Senate

Crypto enthusiasts are also cheering the results in the Senate, which was the focus of most of the industry's political contributions. Crypto PACs like Fairshake spent over $100 million supporting pro-crypto candidates and opposing anti-crypto candidates, in the hopes of shaping a new Congress that would pass legislation favorable to the industry. Centrally, lobbyists hoped for a bill that would turn over crypto regulation from the SEC to the Commodity Futures Trading Commission (CFTC), a much smaller agency.

Crypto PACs particularly focused their efforts in Ohio, spending some $40 million to unseat Democrat Brown, the Senate Banking Committee Chair and a crypto critic. His opponent Moreno has been a regular attendee at crypto conferences and vowed to "lead the fight to defend crypto in the US Senate."
On Tuesday night, Moreno won, flipping control of the Senate.

Defend American Jobs, a crypto PAC affiliated with Fairshake, claimed credit for Brown's defeat on Tuesday. "Elizabeth Warren ally Sherrod Brown was a top opponent of cryptocurrency and thanks to our efforts, he will be leaving the Senate," spokesperson Josh Vlasto wrote in a statement. "Senator-Elect Moreno's come-from-behind win shows that Ohio voters want a leader who prioritizes innovation, protects American economic interests, and will ensure our nation's continued technological leadership."

Crypto PACs notched another victory in Montana, where their preferred candidate, Republican Tim Sheehy, defeated Democrat Jon Tester.

The rise of prediction markets

Finally, crypto enthusiasts celebrated the accuracy of prediction markets, which allow users to bet on election results using crypto. Advocates claimed that prediction markets could be more accurate than polls, because they channeled the collective wisdom of people with skin in the game. Critics, on the other hand, dismissed them as too volatile and rooted in personal sentiment and boosterism.

For weeks, prediction markets had been far more favorable toward Trump than the polls, which portrayed Trump and Harris in a dead heat. (For example, Polymarket gave Trump a 62% chance of winning on Nov. 3.) And on Election Day, before any major results had been tabulated, prediction markets swung heavily toward Trump; the odds of Republicans sweeping the presidency, House, and Senate jumped to 44% on Kalshi.

In the last couple of months, bettors wagered over $2 billion on the presidential election on Polymarket, according to Dune Analytics. It's still unclear whether prediction markets are actually more accurate than polls on average. But their success in this election will likely make their presence in the political arena only increase in years to come.

Crypto's future in the Trump era is far from guaranteed.
Crypto prices are highly susceptible to global events, like Russia's invasion of Ukraine, as well as larger macroeconomic trends. Fraudulent crypto projects like FTX, which thrived in deregulated environments, have also tanked prices in years past. Skeptics worry that more Americans being able to buy crypto will add volatility and risk to the American financial system.

And it's unclear how dedicated Trump actually is to crypto, or whether he will follow through on his pledges to the industry. "If he doesn't deliver on these promises quickly, the euphoria could turn to disappointment, which has the potential to result in crypto market volatility," Tim Kravchunovsky, founder and CEO of the decentralized telecommunications network Chirp, wrote to TIME. "We have to be prepared for this because the reality is that crypto isn't the most important issue on Trump's current agenda."

But for now, most crypto fans believe that a bull run, in which prices increase, is imminent, and that regulatory change is incoming. "I don't think we're going to see the same kind of hostility from the government, particularly members of Congress, as we have in the past," says Smith. "This is really positive news for all parts of the ecosystem."

Andrew R. Chow's book about crypto and Sam Bankman-Fried, Cryptomania, was published in August.
TIME.COM
The Gap Between Open and Closed AI Models Might Be Shrinking. Here's Why That Matters
By Tharin Pillay
November 5, 2024 9:15 AM EST

Today's best AI models, like OpenAI's ChatGPT and Anthropic's Claude, come with conditions: their creators control the terms on which they are accessed, to prevent them being used in harmful ways. This is in contrast with open models, which can be downloaded, modified, and used by anyone for almost any purpose. A new report by the non-profit research organization Epoch AI found that open models available today are about a year behind the top closed models. "The best open model today is on par with closed models in performance, but with a lag of about one year," says Ben Cottier, lead researcher on the report. Meta's Llama 3.1 405B, an open model released in July, took about 16 months to match the capabilities of the first version of GPT-4. If Meta's next-generation AI, Llama 4, is released as an open model, as it is widely expected to be, this gap could shrink even further. The findings come as policymakers grapple with how to deal with increasingly powerful AI systems, which have already been reshaping information environments ahead of elections across the world, and which some experts worry could one day be capable of engineering pandemics, executing sophisticated cyberattacks, and causing other harms to humans.

Researchers at Epoch AI analyzed hundreds of notable models released since 2018. To arrive at their results, they measured the performance of top models on technical benchmarks: standardized tests that measure an AI's ability to handle tasks like solving math problems, answering general knowledge questions, and demonstrating logical reasoning. They also looked at how much computing power, or compute, was used to train them, since that has historically been a good proxy for capabilities, though open models can sometimes perform as well as closed models while using less compute, thanks to advancements in the efficiency of AI algorithms.
"The lag between open and closed models provides a window for policymakers and AI labs to assess frontier capabilities before they become available in open models," Epoch researchers write in the report.

Read More: The Researcher Trying to Glimpse the Future of AI

But the distinction between "open" and "closed" AI models is not as simple as it might appear. While Meta describes its Llama models as open source, they don't meet the new definition published last month by the Open Source Initiative, which has historically set the industry standard for what constitutes open source. The new definition requires companies to share not just the model itself, but also the data and code used to train it. While Meta releases its model weights (long lists of numbers that allow users to download and modify the model), it doesn't release either the training data or the code used to train the models. Before downloading a model, users must agree to an Acceptable Use Policy that prohibits military use and other harmful or illegal activities, although once models are downloaded, these restrictions are difficult to enforce in practice. Meta says it disagrees with the Open Source Initiative's new definition. "There is no single open source AI definition, and defining it is a challenge because previous open source definitions do not encompass the complexities of today's rapidly advancing AI models," a Meta spokesperson told TIME in an emailed statement. "We make Llama free and openly available, and our license and Acceptable Use Policy help keep people safe by having some restrictions in place. We will continue working with OSI and other industry groups to make AI more accessible and free responsibly, regardless of technical definitions."

Making AI models open is widely seen to be beneficial because it democratizes access to technology and drives innovation and competition.
"One of the key things that open communities do is they get a wider, geographically more-dispersed, and more diverse community involved in AI development," says Elizabeth Seger, director of digital policy at Demos, a U.K.-based think tank. Open communities, which include academic researchers, independent developers, and non-profit AI labs, also drive innovation through collaboration, particularly in making technical processes more efficient. "They don't have the same resources to play with as Big Tech companies, so being able to do a lot more with a lot less is really important," says Seger. In India, for example, "AI that's built into public service delivery is almost completely built off of open source models," she says.

Open models also enable greater transparency and accountability. "There needs to be an open version of any model that becomes basic infrastructure for society, because we do need to know where the problems are coming from," says Yacine Jernite, machine learning and society lead at Hugging Face, a company that maintains the digital infrastructure where many open models are hosted. He points to the example of Stable Diffusion 2, an open image-generation model that allowed researchers and critics to examine its training data and push back against potential biases or copyright infringements, something impossible with closed models like OpenAI's DALL-E. "You can do that much more easily when you have the receipts and the traces," he says.

However, the fact that open models can be used by anyone creates inherent risks, as people with malicious intentions can use them for harm, such as producing child sexual abuse material, or they could even be used by rival states. Last week, Reuters reported that Chinese research institutions linked to the People's Liberation Army had used an old version of Meta's Llama model to develop an AI tool for military use, underscoring the fact that, once a model has been publicly released, it cannot be recalled.
Chinese companies such as Alibaba have also developed their own open models, which are reportedly competitive with their American counterparts. On Monday, Meta announced it would make its Llama models available to U.S. government agencies, including those working on defense and national security applications, and to private companies supporting government work, such as Lockheed Martin, Anduril, and Palantir. The company argues that American leadership in open-source AI is both economically advantageous and crucial for global security.

Closed proprietary models present their own challenges. While they are more secure, because access is controlled by their developers, they are also more opaque. Third parties cannot inspect the data on which the models are trained to search for bias, copyrighted material, and other issues. Organizations using AI to process sensitive data may choose to avoid closed models due to privacy concerns. And while these models have stronger guardrails built in to prevent misuse, many people have found ways to "jailbreak" them, effectively circumventing these guardrails.

Governance challenges

At present, the safety of closed models is primarily in the hands of private companies, although government institutions such as the U.S. AI Safety Institute (AISI) are increasingly playing a role in safety-testing models ahead of their release. In August, the U.S. AISI signed formal agreements with Anthropic to enable formal collaboration on AI safety research, testing, and evaluation. Because of the lack of centralized control, open models present distinct governance challenges, particularly in relation to the most extreme risks that future AI systems could pose, such as empowering bioterrorists or enhancing cyberattacks. How policymakers should respond depends on whether the capabilities gap between open and closed models is shrinking or widening.
"If the gap keeps getting wider, then when we talk about frontier AI safety, we don't have to worry so much about open ecosystems, because anything we see is going to be happening with closed models first, and those are easier to regulate," says Seger. "However, if that gap is going to get narrower, then we need to think a lot harder about if and how and when to regulate open model development, which is an entire other can of worms, because there's no central, regulatable entity."

For companies such as OpenAI and Anthropic, selling access to their models is central to their business model. "A key difference between Meta and closed model providers is that selling access to AI models isn't our business model," Meta CEO Mark Zuckerberg wrote in an open letter in July. "We expect future Llama models to become the most advanced in the industry. But even before that, Llama is already leading on openness, modifiability, and cost efficiency."

Measuring the abilities of AI systems is not straightforward. "Capabilities is not a term that's defined in any way, shape or form, which makes it a terrible thing to discuss without common vocabulary," says Jernite. There are many things you can do with open models that you can't do with closed models, he says, emphasizing that open models can be adapted to a range of use-cases, and that they may outperform closed models when trained for specific tasks.

Ethan Mollick, a Wharton professor and popular commentator on the technology, argues that even if there were no further progress in AI, it would likely take years before these systems are fully integrated into our world. With new capabilities being added to AI systems at a steady rate (in October, frontier AI lab Anthropic introduced the ability for its model to directly control a computer, still in beta), the complexity of governing this technology will only increase.

In response, Seger says that it is vital to tease out exactly what risks are at stake.
"We need to establish very clear threat models outlining what the harm is and how we expect openness to lead to the realization of that harm, and then figure out the best point along those individual threat models for intervention."
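As a footnote to the passage above about model weights being "long lists of numbers that allow users to download and modify the model": the point can be made concrete with a toy sketch. Everything here is invented for illustration (a single tiny linear layer standing in for a real network; no actual model, file format, or license is involved), but it shows why behavior restrictions are hard to enforce once weights are public: modifying the model is just editing numbers.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a downloaded "open weights" release: one tiny linear layer.
weights = rng.standard_normal((4, 3))
bias = np.zeros(3)

def model(x: np.ndarray) -> np.ndarray:
    """Run the toy 'model': a linear layer followed by softmax."""
    logits = x @ weights + bias
    e = np.exp(logits - logits.max())
    return e / e.sum()

x = np.array([1.0, 0.5, -0.2, 0.3])
before = model(x)

# "Modifying" the model is just editing the numbers: here, a crude tweak
# that biases the model toward output class 0. Nothing in the downloaded
# arrays can prevent this once they are on a user's machine.
bias[0] += 2.0
after = model(x)

assert after[0] > before[0]  # the weight edit changed the model's behavior
```

This is the sense in which a weights release differs from API-only access: with closed models, the numbers never leave the provider's servers, so edits like the one above are impossible.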
TIME.COM
How AI Is Being Used to Respond to Natural Disasters in Cities
By Harry Booth and Tharin Pillay
November 4, 2024 11:01 AM EST

The number of people living in urban areas has tripled in the last 50 years, meaning that when a major natural disaster such as an earthquake strikes a city, more lives are in danger. Meanwhile, the strength and frequency of extreme weather events have increased, a trend set to continue as the climate warms. That is spurring efforts around the world to develop a new generation of earthquake monitoring and climate forecasting systems to make detecting and responding to disasters quicker, cheaper, and more accurate than ever.

On Nov. 6, at the Barcelona Supercomputing Center in Spain, the Global Initiative on Resilience to Natural Hazards through AI Solutions will meet for the first time. The new United Nations initiative aims to guide governments, organizations, and communities in using AI for disaster management. The initiative builds on nearly four years of groundwork laid by the International Telecommunication Union, the World Meteorological Organization (WMO), and the U.N. Environment Programme, which in early 2021 collectively convened a focus group to begin developing best practices for AI use in disaster management. These include enhancing data collection, improving forecasting, and streamlining communications. "What I find exciting is, for one type of hazard, there are so many different ways that AI can be applied and this creates a lot of opportunities," says Monique Kuglitsch, who chaired the focus group. Take hurricanes, for example: In 2023, researchers showed AI could help policymakers identify the best places to put traffic sensors to detect road blockages after tropical storms in Tallahassee, Fla. And in October, meteorologists used AI weather forecasting models to accurately predict that Hurricane Milton would land near Siesta Key, Florida. AI is also being used to alert members of the public more efficiently.
Last year, the National Weather Service announced a partnership with AI translation company Lilt to help deliver forecasts in Spanish and simplified Chinese, which it says can reduce the time to translate a hurricane warning from an hour to 10 minutes.

Besides helping communities prepare for disasters, AI is also being used to coordinate response efforts. Following both Hurricane Milton and Hurricane Ian, the non-profit GiveDirectly used Google's machine learning models to analyze pre- and post-disaster satellite images to identify the worst-affected areas and prioritize cash grants accordingly. Last year, AI analysis of aerial images was deployed in cities like Quelimane, Mozambique, after Cyclone Freddy, and Adiyaman, Turkey, after a 7.8 magnitude earthquake, to aid response efforts.

Operating early warning systems is primarily a governmental responsibility, but AI climate modeling (and, to a lesser extent, earthquake detection) has become a burgeoning private industry. Start-up SeismicAI says it's working with the civil protection agencies in the Mexican states of Guerrero and Jalisco to deploy an AI-enhanced network of sensors that would detect earthquakes in real time. Tech giants Google, Nvidia, and Huawei are partnering with European forecasters and say their AI-driven models can generate accurate medium-term forecasts thousands of times more quickly than traditional models, while being less computationally intensive. And in September, IBM partnered with NASA to release a general-purpose open-source model that can be used for various climate-modeling cases, and which runs on a desktop.

AI advances

While machine learning techniques have been incorporated into weather forecasting models for many years, recent advances have allowed many new models to be built using AI from the ground up, improving the accuracy and speed of forecasting.
Traditional models, which rely on complex physics-based equations to simulate interactions between water and air in the atmosphere and require supercomputers to run, can take hours to generate a single forecast. In contrast, AI weather models learn to spot patterns by training on decades of climate data, most of which was collected via satellites and ground-based sensors and shared through intergovernmental collaboration. Both AI and physics-based forecasts work by dividing the world into a three-dimensional grid of boxes and then determining variables like temperature and wind speed for each box. But because AI models are more computationally efficient, they can use much finer-grained grids. For example, the European Centre for Medium-Range Weather Forecasts' highest-resolution model breaks the world into 5.5-mile boxes, whereas forecasting startup Atmo offers models finer than one square mile. This bump in resolution can allow for more efficient allocation of resources during extreme weather events, which is particularly important for cities, says Johan Mathe, co-founder and CTO of the company, which earlier this year inked deals with the Philippines and the island nation of Tuvalu.

Limitations

AI-driven models are typically only as good as the data they are trained on, which can be a limiting factor in some places. "When you're in a really high-stakes situation, like a disaster, you need to be able to rely on the model output," says Kuglitsch. Poorer regions, often on the frontlines of climate-related disasters, typically have fewer and worse-maintained weather sensors, for example, creating gaps in meteorological data. AI systems trained on this skewed data can be less accurate in the places most vulnerable to disasters. And unlike physics-based models, which follow set rules, as AI models become more complex, they increasingly operate as sophisticated "black boxes," where the path from input to output becomes less transparent. The U.N.
initiative's focus is on developing guidelines for using AI responsibly. Kuglitsch says standards could, for example, encourage developers to disclose a model's limitations or ensure systems work across regional boundaries. The initiative will test its recommendations in the field by collaborating with the Mediterranean and pan-European forecast and Early Warning System Against natural hazards (MedEWSa), a project that spun out of the focus group. "We're going to be applying the best practices from the focus group and getting a feedback loop going, to figure out which of the best practices are easiest to follow," Kuglitsch says. One MedEWSa pilot project will explore machine learning to predict the occurrence of wildfires in an area around Athens, Greece. Another will use AI to improve flooding and landslide warnings in the area surrounding Tbilisi, Georgia.

Meanwhile, private companies like Tomorrow.io are seeking to plug these gaps by collecting their own data. The AI weather forecasting start-up has launched satellites with radar and other meteorological sensors to collect data from regions that lack ground-based sensors, which it combines with historical data to train its models. Tomorrow.io's technology is being used by New England cities, including Boston, to help city officials decide when to salt the roads ahead of snowfall. It's also used by Uber and Delta Airlines.

Another U.N. initiative, the Systematic Observations Financing Facility (SOFF), also aims to close the weather data gap by providing financing and technical assistance in poorer countries. Johan Stander, director of services for the WMO, one of SOFF's partners, says the WMO is working with private AI developers including Google and Microsoft, but stresses the importance of not handing off too much responsibility to AI systems. "You can't go to a machine and say, 'OK, you were wrong. Answer me, what's going on?' You still need somebody to take that ownership," he says.
He sees private companies' role as supporting national met services, rather than trying to take them over.
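The grid-based forecasting idea described in this article, where the atmosphere is a three-dimensional grid of boxes and an AI model learns from historical data to predict the next state of the grid, can be sketched in a deliberately simplified way. Everything below is an invented stand-in: a tiny grid, linear "dynamics," and a least-squares fit in place of the deep neural networks and vastly larger grids that real forecasters use.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny 4 x 4 x 2 grid of boxes (lat x lon x altitude), flattened into a
# state vector; each entry stands in for a variable like temperature.
n_cells = 4 * 4 * 2

# Pretend "true" atmospheric dynamics: a fixed linear operator.
# (Real dynamics are nonlinear; this keeps the sketch self-contained.)
true_op = np.eye(n_cells) + 0.01 * rng.standard_normal((n_cells, n_cells))

# Synthetic "decades of historical data": states and their next-day successors.
states = rng.standard_normal((500, n_cells))
next_states = states @ true_op.T

# "Training": recover the state-transition map by least squares, the simplest
# possible stand-in for the pattern-learning that AI weather models do.
solution, *_ = np.linalg.lstsq(states, next_states, rcond=None)
learned_op = solution.T

# Forecast one step from a fresh initial condition and compare to the truth.
x0 = rng.standard_normal(n_cells)
forecast = learned_op @ x0
truth = true_op @ x0
error = float(np.max(np.abs(forecast - truth)))  # tiny for this linear toy
```

The computational point from the article also shows up here: doubling the grid resolution in each dimension multiplies `n_cells` by eight, which is why the efficiency of learned models matters for fine-grained city-scale forecasts.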
TIME.COM
Inside the New Nonprofit AI Initiatives Seeking to Aid Teachers and Farmers in Rural Africa
Isaac Darko, left, helps Lucy Baffoe use a mobile phone outside her home in Nkoranza village, Ghana, in 2021, as part of an Opportunity International initiative. Arete / Patrick Abah / Opportunity International
By Andrew R. Chow
October 31, 2024 2:30 PM EDT

Over the past year, rural farmers in Malawi have been seeking advice about their crops and animals from a generative AI chatbot. These farmers ask questions in Chichewa, their native tongue, and the app, Ulangizi, responds in kind, using conversational language based on information taken from the government's agricultural manual. "In the past we could wait for days for agriculture extension workers to come and address whatever problems we had on our farms," Maron Galeta, a Malawian farmer, told Bloomberg. "Just a touch of a button we have all the information we need."

The nonprofit behind the app, Opportunity International, hopes to bring similar AI-based solutions to other impoverished communities. In February, Opportunity ran an acceleration incubator for humanitarian workers across the world to pitch AI-based ideas and then develop them alongside mentors from institutions like Microsoft and Amazon. On October 30, Opportunity announced the three winners of this program: free-to-use apps that aim to help African farmers with crop and climate strategy, teachers with lesson planning, and school leaders with administration management. The winners will each receive about $150,000 in funding to pilot the apps in their communities, with the goal of reaching millions of people within two years.

Greg Nelson, the CTO of Opportunity, hopes that the program will show the power of AI to level playing fields for those who previously faced barriers to accessing knowledge and expertise.
"Since the mobile phone, this is the biggest democratizing change that we have seen in our lifetime," he says.

In early February, Opportunity employees from around the world participated in brainstorming sessions for the incubator, generating more than 200 ideas. Many of these employees hoped to wield generative AI's potential to solve the specific problems of clients they had long worked with on the ground in high-poverty areas. For instance, verbal chatbots offering targeted advice and trained on specific languages and vetted documents could be especially useful for communities with limited literacy. "Our clients are never going to use Google," Nelson says. "Now, they can speak, and are spoken to, in their own language."

Read More: AI's Underwhelming Impact On the 2024 Elections.

The top 20 teams then worked to transform their ideas into app prototypes, with assistance from mentors at major tech companies and technical support from MIT platforms. The three winners, which do not yet have formal names, were then picked by a panel of judges. The first winner is a farming app that hopes to improve upon Ulangizi. While that app offers general knowledge, this one will be designed to take in personalized data and give specific farming advice (like what seeds to plant, when, and how much fertilizer to use) based on a farmer's acreage, crop history, and climate.

Rebecca Nakacwa, who is based in Uganda and is one of the project's founders, says that the app's ability to understand climate patterns in real time is crucial. "When we went to farmers, we thought the biggest problem was around pricing," she says. "But we were so surprised, because they told us their topmost problem is climate: finding a solution to how to work with the different climate changes. We know that with AI, this is achievable."
She hopes to have the app ready for the start of planting season in Rwanda and Malawi next summer.

Khumbo Msutu, a project coordinator at Opportunity International, with local loan officers in Zomba, Malawi, in 2023. Eden Sparke / Arete / Opportunity International

The second app helps teachers develop lesson plans tailored to their students. The app is led by Lordina Omanhene-Gyimah, who taught in a rural school in Ghana. She found that teachers faced an acute lack of resources and knowledge about how to cater to classrooms filled with students of different ages and learning styles. Her app allows teachers to input information about students' learning styles, and then creates lesson plans based on the national school curriculum. Omanhene-Gyimah hopes to roll out the app in classrooms in Ghana and Uganda before the next academic year.

The third app is designed to help school owners in areas from teacher recruitment to marketing to behavioral management. Anne Njine, a former Kenyan teacher, hopes that the app will be "a partner in the pocket" for school leaders, giving them real-time solutions and ideas. Opportunity says that the app is ready to be rolled out to 20,000 schools, potentially reaching 6,000,000 students.

The success of these apps is far from guaranteed. People in rural areas often lack smartphones or mobile connectivity. (An Opportunity rep says that the apps will be designed to work offline.) There are steep learning curves for new users of AI, and models sometimes return false answers, which can be problematic in educational settings. Nelson hopes that training these AIs on specific data sets and alongside clients will produce better, more accurate results.

Nelson's goal is for the incubator program to launch three new AI-based apps a year. But that's dependent on the funding of philanthropists and corporate partners.
(Opportunity declined to say how much it has raised for the program so far.)

The founders of the three winning apps are confident that they have found transformative real-life use cases for an industry whose impact is often exaggerated by runaway hype. "It's not just we like using AI because it's in vogue and everybody's doing it," Omanhene-Gyimah says. "We are in the field. We work with these clients on a daily basis, and we know what they need."

Contact us at letters@time.com
TIME.COM
Medically Backed Anti-Aging Serum
By Jessica Klein
October 30, 2024 8:01 AM EDT

"If you cut yourself, what's the first responder?" says Rion Aesthetics CEO Alisa Lask. "Platelets: your blood." Going off that premise, the creators of Rion's Plated Intense Serum purchase platelets from a blood bank to harness the type of exosome (an inter-cellular messenger) responsible for signaling cell renewal. With billions of exosomes per two-pump dose, Plated uses its Renewosome technology to decrease facial redness and maintain shelf stability of the serum, a challenge for stem-cell-containing products. The company has the medical credentials: Rion was founded by Mayo Clinic physicians at the Van Cleve Cardiac Regenerative Medicine Program, where they discovered that exosomes play a key role in healing. The company now produces platelet-derived exosome products currently in clinical trials for wound healing, cardiology, and orthopedics, while Lask's spinoff focuses on aesthetic applications. A 2023 clinical study found that all patients experienced improvement in redness, skin irritation, skin tone, texture, and smoothness after nine weeks of Plated use.

Buy Now: INTENSE Serum on Plated SkinScience
TIME.COM
Smoothing Split Ends
By Ashley Mateo
October 30, 2024 8:01 AM EDT

Keratin has been part of hair care routines since Brazilians popularized use of the smoothing protein. But the time-consuming treatment process is a short-term solution. "We created Virtue's Damage Reverse Serum to actually reverse and prevent future damage as opposed to just patching it up temporarily," says Virtue Labs CEO Jose Luis Palacios. The lightweight cream-to-serum formula contains the company's proprietary protein, Alpha Keratin 60ku, which it claims has been shown to re-seal hair's split ends. An independent study conducted by TRI Princeton found the serum sealed 98% of split ends after one use.

Buy Now: Virtue Damage Reverse Serum on Virtue Labs | Amazon | Sephora | Dermstore
TIME.COM
A Robot for Lash Extensions
By Jessica Klein
October 30, 2024 8:01 AM EDT

Getting lash extensions can be an uncomfortable process, involving lying with tape under your eyes on a bed for two hours. Chief technology officer Nathan Harding co-founded Luum Lash when he realized the process could be improved by using robots. Luum swaps sharp application instruments for soft-tipped plastic tools, uses a safety mechanism to detach instruments from the machine before they can poke a client, and employs machine learning to apply lashes more efficiently and precisely. An appointment that usually takes two to three hours takes one and a half with Luum. Luum lash artists, primarily working from the Lash Lab in Oakland, Calif., can see up to four times as many clients daily as they could without the robot, says CEO Jo Lawson. So far, Luum has applied more than 160,000 eyelash extensions, charging $170 for a full set. Luum has secured 51 patents in 25 countries and plans to launch at Nordstrom's Westfield Valley Fair store in California in November.

Learn More at Luum Precision Lash
TIME.COM
Top-Tier Skincare
By Jessica Klein
October 30, 2024 8:01 AM EDT

NuFace's facial toning devices have been gaining popularity in recent years, but the latest model, the Trinity+ Complete, offers the combination of microcurrent and red light therapy, a hugely popular and study-backed skincare treatment. The handheld facial device has three magnetized attachments that promise to tighten facial muscles and smooth wrinkles. "As we get older, we need to exercise more to keep our bodies toned and tight," says NuFace co-founder and CEO Tera Peterson. "The same concept goes for delicate facial muscles." Thirty-six concentrated red LED lights are designed to smooth skin, while two attachments for administering microcurrents (one for targeting around the eyes) use up to 425 microamps to increase cellular adenosine triphosphate, providing a nearly instant, at-home facelift. The device prioritizes safety by shutting off after 20 minutes.

Buy Now: NuFace Trinity+ Complete on My NuFace | Nordstrom
TIME.COM
How We Picked the Best Inventions of 2024
By TIME Editors
October 30, 2024 8:34 AM EDT

Every year for over two decades, TIME editors have highlighted the most impactful new products and ideas in TIME's Best Inventions issue. To compile this year's list, we solicited nominations from TIME's editors and correspondents around the world, and through an online application process, paying special attention to growing fields such as health care, AI, and green energy. We then evaluated each contender on a number of key factors, including originality, efficacy, ambition, and impact.

The result is a list of 200 groundbreaking inventions (and 50 special mention inventions), including the world's largest computer chip, a humanoid robot joining the workforce, and a bioluminescent houseplant, that are changing how we live, work, play, and think about what's possible.

Read the full list here.
TIME.COM
Kamala Harris Shouldn't Just Embrace Crypto. She Must Help It Flourish

As a journalist, I try not to reveal personal opinions. But I'm breaking that rule today, because as an American, there's something I, a progressive Democrat and a journalist who has covered crypto for more than nine years, have to speak up about.

The Democrats, and more specifically the party's progressive wing, are making a mistake in being anti-crypto. Their opposition threatens not only to turn the U.S. into a technological backwater, but also to imperil our country's status as the world's only superpower. Being hostile to crypto could chip away at the U.S. dollar's dominance as the world's major global reserve asset. Plus, progressive opposition isn't even logical, since crypto broadly aligns with progressive ideals. Most urgently, their stance could cost Vice President and Democratic nominee Kamala Harris the election and hand Donald Trump, who tried to overturn the 2020 election, the presidency.

Throughout her presidential run, Harris has made minimal statements on crypto. At a Wall Street fundraiser on Sept. 22, she said, "We will encourage innovative technologies like AI and digital assets, while protecting our consumers and investors." At the Economic Club of Pittsburgh on Sept. 24, she said, "I will recommit the nation to global leadership in the sectors that will define the next century. We will remain dominant in AI and quantum computing, blockchain and other emerging technologies." She also pledged to create a regulatory framework for crypto, as part of her economic plan for Black men, since 20% of Black Americans own digital assets. While it's promising that her few utterances on crypto were vaguely positive, she, as well as the Democratic Party, should go further. Harris's campaign, and hopefully her administration, should embrace crypto and help it flourish, so we don't lose our decades-long edge in tech and the U.S.
dollar retains its reserve currency status.

Over the last few years, I've watched with dismay as my party has attacked the technology that could help usher in the change it wishes to see. For an election that will be decided by inches and in which, according to Pew Research, 17% of U.S. adults have ever owned crypto, Democrats have let Trump come in and claim the issue as his own.

But crypto is not inherently partisan. Being against it is like being against the internet. Just as the internet, or a knife, or a dollar bill can be used for good or bad, crypto is also a neutral technology. I expect crypto will follow a similar trajectory to the internet: this small, fringe phenomenon will, over the next couple of decades, become embedded in our lives alongside the dollar and other financial assets, the way that email and text messages are more central to our existence than snail mail.

In fact, Democratic antipathy has turned many lifelong Democrats who work in crypto into Trump supporters, since the Biden administration has put their livelihoods at risk and treated these entrepreneurs as criminals. Watching this saga gives me Blockbuster-Netflix deja vu.

Democratic leadership fails to understand that crypto has the potential to usher in a progressive era, through the technology itself. A blockchain, at its core, is a collectively maintained ledger of every transaction involving its coin. Imagine if you lived in a village whose financial system consisted of every villager gathering in the town square every day at noon and calling out their transactions since the day before. For each expense, such as "I paid the baker $10," every villager would log it in their own ledger. No single person or entity like a bank would be in charge of keeping what would be considered the authoritative record. Instead, everyone would agree that the only correct ledger did not exist physically but was the one represented by the majority of the ledgers.
This is like Bitcoin, except swap the villagers for computers around the globe running the Bitcoin software, managed by anyone contributing to this transparent, community-run financial system.Of course, people should be paid to keep these ledgers. But instead of hiring employees, Bitcoins software mints new coins every time a new block of transactions is added to the ledger. (In Bitcoin, this happens, on average, every 10 minutes, unlike the villages daily cadence.) The people maintaining the ledgers are incentivized by the opportunity to win those new bitcoins, which is how they get paid.What a marvel: In the last 16 years, through this combination of cryptography, decentralization, and incentives, Bitcoin went from obscurity to surpassing a $1 trillion market cap, which only Apple, Amazon, Nvidia, Meta, Alphabet, Microsoft, and Berkshire Hathaway have doneexcept those entities did so with a board, C suite and employees. Meanwhile, Bitcoin, a grassroots phenomenon, attracted workers via incentives.This is the core of cryptos decentralization conceptone that could take on big banks and big tech. More services beyond an electronic peer to peer cash system, as Bitcoin creator Satoshi Nakamoto described it, can be offered on the internet in a decentralized way, with a cryptographic token and well-designed incentives.Though this is the ideal, crypto has seen numerous scams and fraudsfor example, FTX, OneCoin, and numerous pig butchering schemes in which the tricksters ensnare their victims in emotional relationships and dupe them into handing over money. But as U.S. attorney Damian Williams, whose office prosecuted FTX co-founder Sam Bankman-Fried, said, [W]hile the cryptocurrency industry might be new and the players like Sam Bankman-Fried might be new, this kind of corruption is as old as time. 
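For readers curious how the "majority of the ledgers" idea works in software: each block commits to the previous one by including its hash, so rewriting an old entry invalidates every later block and forces the work to be redone. Here is a toy sketch in Python, with invented transactions and a trivially low difficulty; real Bitcoin mining is vastly harder and this is only an illustration of the principle, not the actual protocol.

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents deterministically.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine_block(prev_hash, transactions, difficulty=3):
    # Try nonces until the hash starts with `difficulty` zeros --
    # the "work" that ledger-keepers are rewarded (in new coins) for doing.
    nonce = 0
    while True:
        block = {"prev": prev_hash, "txs": transactions, "nonce": nonce}
        h = block_hash(block)
        if h.startswith("0" * difficulty):
            return block, h
        nonce += 1

# A tiny chain: each block commits to the one before it,
# so tampering with an old entry breaks every later link.
genesis, g_hash = mine_block("0" * 64, ["mint 50 -> alice"])
block2, b2_hash = mine_block(g_hash, ["alice pays baker 10"])
assert block2["prev"] == g_hash
```

The chaining is what lets thousands of independent computers agree on one history: any machine can re-hash the chain and instantly detect a forged entry.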
Similar to how no one advised investors to avoid stocks after Bernie Madoff's Ponzi scheme, the use of crypto to perpetrate scams and frauds doesn't mean everyone should shun crypto.

Unfortunately, the U.S. has regulated the industry so poorly that crypto entrepreneurs have left the country and cut Americans off from their projects. For example, Polymarket, a prediction market, is off-limits to Americans and U.S. residents. The way that there's the internet and then China's own censored version, now the world has a burgeoning crypto economy, while the U.S. has a censored crypto landscape. Frequently, crypto projects will list blocked countries and name the U.S. alongside the likes of North Korea, Cuba, Iran, China, and Russia. Not the typical company the U.S. keeps. Already, the U.S. is becoming a technological backwater.

Furthermore, the way the SEC under President Joe Biden's appointed chair, Gary Gensler, has regulated crypto has been egregiously unfair. In 2021, after his appointment, Gensler wrote a letter to Sen. Elizabeth Warren saying the SEC did not have the authority to regulate crypto. Although such power was never granted, the SEC began applying decades-old regulations to crypto, targeting entrepreneurs with enforcement actions or punishments for infractions of rules designed for a very different type of financial system. I'm not talking about cases of fraud, such as FTX, which regulators should rightly pursue, but incidents such as when the SEC sued Coinbase, which has aimed to be compliant from its earliest days, for not registering as a securities exchange. The dirty secret is that while Gensler often says crypto companies should "come in and register," the SEC has not made that possible.

Judges are calling out the SEC. When the agency blocked crypto companies' applications to offer bitcoin exchange-traded funds (ETFs) for so long that a then-potential issuer sued the SEC, a panel of three judges, two appointed by Democratic presidents, unanimously sided with the plaintiff, calling the SEC's reasoning for not approving the ETFs "arbitrary and capricious." In March 2024, in a case against the crypto project DebtBox, Judge Robert Shelby in Utah excoriated the SEC for what he called a "gross abuse of power," in a judgment that noted multiple instances of SEC lawyers lying to the court. Judge Shelby levied a $1.8 million fine against the agency, which closed its Salt Lake City office.

Its poor win/loss record on crypto cases shows how the SEC and Chair Gensler gave Donald Trump a layup to take Democratic votes. In a speech at the July Bitcoin 2024 conference, Trump promised the crypto community common-sense things a reasonable regulator already would have done. Which pledge got a standing ovation? That on day one, he would fire Gensler.

Some Democrats have clued in that their approach to the industry may cost them this election. Both Senate Majority Leader Chuck Schumer and Speaker Emerita Nancy Pelosi, along with dozens of other Democrats in the House and Senate, broke party lines to vote for pro-crypto bills. But it may be too little, too late. Members of the crypto community now regularly criticize the SEC, the Biden administration, Vice President Harris, Senator Warren, and Gensler, while advocating for a Trump presidency.

Most importantly, being anti-crypto has geopolitical implications. If Harris wins without signaling a complete about-face from the Biden administration's anti-crypto approach, China could use this technology to gain more power over developing regions like Africa, which would further erode the dominance of the U.S. dollar. China has already launched a digital yuan and created hundreds of blockchain-based initiatives. One could see it, say, requiring African business partners, especially as part of its Belt and Road Initiative, to transact in the digital yuan, which would mean these businesses and countries could begin holding the digital yuan like a reserve currency. While the yuan accounts for only 4.69% of global reserves, it is growing quickly: it has increased by more than a full percentage point in the last year.

If it continues to gain a toehold, it will be a total own goal by the U.S., because crypto has, so far, actually reinforced the primacy of the U.S. dollar abroad. According to the European Central Bank, 99% of so-called stablecoins, whose value is pegged to that of another asset, are tied to the worth of the U.S. dollar. These crypto dollars, $170 billion worth in circulation, are now gaining adoption in countries that have weak currencies or poor financial systems.

Citizens of, say, Argentina or Afghanistan grasp the promise of crypto more easily than Americans, who enjoy a stable currency and safe, robust financial systems. Roya Mahboob, the Afghan entrepreneur whose girls' robotics team made news in 2017, told me that Bitcoin was the solution in 2013 when her blogging platform faced challenges paying its women bloggers, who didn't have bank accounts or whose payments would get confiscated by male relatives. Argentine entrepreneur Wences Casares has shared with me that a non-governmental money outside the control of the banks would have helped his family, who lost their life savings multiple times during periods of Argentina's hyperinflation. Surely the notion of a financial instrument and technology benefiting underserved populations resonates with progressive and liberal values.

Liberals are often concerned about crypto's environmental impact. This is an issue primarily with Bitcoin, the main crypto asset whose security model is proof of work, which requires the burning of electricity. In September 2022, Ethereum, the second-most-popular coin, switched from proof of work to a new method called proof of stake, which cut its electricity consumption by over 99%; this method, or similar ones, is used by most new tokens launched nowadays.

Bitcoin miners can also help renewable-energy facilities, whose intermittent output results in inconsistent revenues: miners even out those revenues by purchasing excess energy at times when, say, wind is plentiful but demand is low, and by shutting off their operations when usage is high. Renewable-energy facilities can also run miners themselves to earn bitcoin when energy is plentiful but demand is low.

When I started covering this technology almost a decade ago, there was optimism and open-mindedness about what it could do. While, in 2022, the industry saw many collapses like FTX (of centralized entities, not of the ideal of what crypto can enable), U.S. agencies had been politicizing it long before then. A truly neutral regulator, or Congress, would have already created clear rules that would give American crypto entrepreneurs peace of mind that they wouldn't be punished under decades-old laws whose application to crypto is unclear. Such directives would also make it easier for the public to separate legitimate entrepreneurial activity from scams and frauds. A head-in-the-sand approach from our leaders only gives countries like China an opportunity to encroach on the U.S.'s power. As our nation and American companies like Apple, Google, Facebook, Amazon, and Netflix did with the internet, let's be leaders in this new frontier of innovation.

If liberals, progressives, and Democrats take a fresh look at crypto without preconceived judgments, they will see much that aligns with their ideals. I urge Vice President Harris to reject the politicization of this world-changing technology and, instead, embrace it to help propel her to victory.
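The proof-of-stake method mentioned above replaces proof of work's electricity-burning hash race with a lottery weighted by how many coins a participant has locked up as stake. A minimal sketch of that selection rule follows, with invented validator names and stake values; it illustrates the idea only, not any real protocol's algorithm.

```python
import random

def pick_validator(stakes, seed):
    # Choose the next block proposer with probability proportional
    # to staked coins -- no hash race, so almost no electricity used.
    rng = random.Random(seed)
    total = sum(stakes.values())
    ticket = rng.uniform(0, total)
    running = 0.0
    for validator, stake in stakes.items():
        running += stake
        if ticket <= running:
            return validator

# Hypothetical stakes: alice holds 60% of the staked coins,
# so over many rounds she proposes roughly 60% of the blocks.
stakes = {"alice": 60.0, "bob": 30.0, "carol": 10.0}
wins = sum(pick_validator(stakes, seed) == "alice" for seed in range(10000))
```

Because selection depends on capital at stake rather than computation performed, the security incentive survives while the energy cost of the hash race disappears.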
-
TIME.COM: What Teenagers Really Think About AI

American teenagers believe addressing the potential risks of artificial intelligence should be a top priority for lawmakers, according to a new poll that provides the first in-depth look into young people's concerns about the technology.

The poll, carried out by youth-led advocacy group the Center for Youth and AI and polling organization YouGov, and shared exclusively with TIME, reveals a level of concern that rivals long-standing issues like social inequality and climate change. The poll of 1,017 U.S. teens aged 13 to 18 was carried out in late July and early August, and found that 80% of respondents believed it was extremely or somewhat important for lawmakers to address the risks posed by AI, falling just below healthcare access and affordability in terms of issues they said were a top priority. That surpassed social inequality (78%) and climate change (77%).

Although the sample size is fairly small, it gives an insight into how young people are thinking about a technology that has often been embedded in their lives from an early age. "I think our generation has a unique perspective," says Saheb Gulati, 17, who co-founded the Center for Youth and AI with Jason Hausenloy, 19. "That's not in spite of our age, but specifically because of it." Because today's teens have grown up using digital technology, Gulati says, they have confronted questions of its societal impacts more than older generations.

While there has been more research about how young people are using AI, for example to help or cheat with schoolwork, says Rachel Hanebutt, assistant professor at Georgetown University's Thrive Center, who helped advise on the poll's analysis, "Some of those can feel a little superficial and not as focused on what teens and young people think about AI and its role in their future, which I think is where this brings a lot of value."

The findings show that nearly half of the respondents use ChatGPT or similar tools several times per week, aligning with another recent poll that suggests teens have embraced AI faster than their parents. But being early adopters hasn't translated into full-throated optimism, Hausenloy says.

Teens are at the heart of many debates over artificial intelligence, from the impact of social media algorithms to deepfake nudes. This week it emerged that a mother is suing Character.ai and Google after her son allegedly became obsessed with the chatbot before dying by suicide. Yet "ages 13 to 18 are not always represented in full political polls," says Hanebutt. This research gives adults a better understanding of "what teens and young people think about AI and its role in their future," rather than just how they're using it, Hanebutt says. She notes the need for future polling that explores how teenagers expect lawmakers to act on the issue.

Read More: Column: How AI-Powered Tech Can Harm Children

While the poll didn't ask about specific policies, it does offer insight into the AI risks of concern to the greatest number of teens, with immediate threats topping the list. AI-generated misinformation worried the largest proportion of respondents, at 59%, closely followed by deepfakes at 58%. However, the poll reveals that many young people are also concerned about the technology's longer-term trajectory, with 47% saying they are concerned about the potential for advanced autonomous AI to escape human control. Nearly two-thirds said they consider the implications of AI when planning their career.

Hausenloy says that the poll is just the first step in the Center for Youth and AI's ambitions to ensure young people are "represented, prepared and protected" when it comes to AI.

The poll suggests that, despite concerns in other areas, young people are generally supportive of AI-generated creative works. More than half of respondents (57%) were in favor of AI-generated art, film, and music, while only 26% opposed it. Less than a third of teens were concerned about AI copyright violations. On the question of befriending AI, respondents were divided, with 46% saying AI companionship is acceptable compared with 44% saying it's unacceptable. Most teens (68%), however, opposed romantic relationships with AI, compared to only 24% who find them acceptable.

"This is the first and most comprehensive view on youth attitudes on AI I have ever seen," says Sneha Revanur, founder and president of Encode Justice, a youth-led, AI-focused civil-society group, which helped advise on the survey questions. Revanur was the youngest participant at a White House roundtable about AI back in July 2023, and more recently the youngest to participate in the 2024 World Economic Forum in Davos. In the past, she says, Encode Justice was speaking on behalf of their generation without hard numbers to back them, but "we'll be coming into future meetings with policymakers armed with this data, and armed with the fact that we do actually have a fair amount of young people who are thinking about these risks."

She points to California Senate Bill 1047, which would have required AI companies to implement safety measures to protect the public from potential harms from their technology, as a case where public concerns about the technology were overlooked. "In California, we just saw Governor Gavin Newsom veto a sweeping AI safety bill that was supported by a broad coalition, including our organization, Anthropic, Elon Musk, actors in Hollywood and labor unions," Revanur says. "That was the first time that we saw this splintering in the narrative that the public doesn't care about AI policy. And I think that this poll is actually just one more crack in that narrative."
-
TIME.COM: Why Surgeons Are Wearing the Apple Vision Pro in Operating Rooms

By Andrew R. Chow | October 15, 2024 12:41 PM EDT

Twenty-four years ago, the surgeon Santiago Horgan performed the first robotically assisted gastric-bypass surgery in the world, a major medical breakthrough. Now Horgan is working with a new tool that he argues could be even more transformative in operating rooms: the Apple Vision Pro.

Over the last month, Horgan and other surgeons at the University of California, San Diego have performed more than 20 minimally invasive operations while wearing Apple's mixed-reality headsets. Apple released the headsets to the public in February, and they've largely been a commercial flop. But practitioners in some industries, including architecture and medicine, have been testing how they might serve particular needs. Horgan says that wearing headsets during surgeries has improved his effectiveness while lowering his risk of injury, and could have an enormous impact on hospitals across the country, especially those without the means to afford specialty equipment. "This is the same level of revolution, but will impact more lives because of the access to it," he says, referring to his previous breakthrough in 2000.

Read More: How Virtual Reality Could Transform Architecture

Horgan directs the Center for the Future of Surgery at UC San Diego, which explores how emerging technology might improve surgical processes. In laparoscopic surgery, doctors send a tiny camera through a small incision in a patient's body, and the camera's view is projected onto a monitor. Doctors must then operate on the patient while looking up at the screen, a tricky feat of hand-eye coordination, all while processing other visual variables in a pressurized environment. "I'm usually turning around and stopping the operation to see a CT scan; looking to see what happened with the endoscopy [another small camera that provides a closer look at organs]; looking at the monitor for the heart rate," Horgan says.

As a result, most surgeons report experiencing discomfort while performing minimal-access surgery, a 2022 study found. About one-fifth of surgeons polled said they would consider retiring early because their pain was so frequent and uncomfortable. A good mixed-reality headset, then, might allow a surgeon to look at a patient's surgical area and, without looking up, at virtual screens showing the laparoscopy camera and the patient's vitals.

In previous years, Horgan tried other headsets, like Google Glass and Microsoft HoloLens, and found they weren't high-resolution enough. But he tested the Apple Vision Pro before its release and was immediately impressed. Horgan applied for approval from the institutional review board at the University of California, which green-lit the use of the devices. In September, he led the first surgery with the Apple headset, for a paraesophageal hernia. "We are all blown away: it was better than we even expected," Horgan says. In the weeks since, UC San Diego's minimally invasive department has performed more than 20 surgeries with the Apple Vision Pro, including acid-reflux surgery and obesity surgery. Doctors, assistants, and nurses all don headsets during the procedures. No patients have yet opted out of the experiment, Horgan says.

Christopher Longhurst, chief clinical and innovation officer at UC San Diego Health, says that while the Vision Pro's price tag of $3,499 might seem daunting to a regular consumer, it's inexpensive compared to most medical equipment. "The monitors in the operating room are probably $20,000 to $30,000," he says. "So $3,500 for a headset is like budget dust in the healthcare setting." This price tag could make it especially appealing to smaller community hospitals that lack the budget for expensive equipment. (The FDA has yet to approve the device for widespread medical use.)

Longhurst is also testing the ability of the Apple Vision Pro to create 3D radiology imaging. Over the next couple of years, he expects the team at UC San Diego to release several papers documenting the efficacy of headsets in different medical applications. "We believe that it's going to be standard of care in the next years to come, in operating rooms all over the world," Longhurst says.

[Photo caption: Surgeons and nurses wear Apple Vision Pro headsets instead of looking at monitors during surgery at UC San Diego Health.]

The Apple Vision Pro is not the only device competing for the attention of surgeons; other surgical visualization systems on the market promise similar benefits. The startup Augmedics developed an AR navigation system for spinal surgeons, which superimposes a 3D image of a patient's CT scan over their body, theoretically allowing the doctor to operate as if they had X-ray vision. Another company, Vuzix, offers headsets that are significantly lighter than the Vision Pro and allow a surgeon anywhere in the world to view an operating surgeon's viewpoint and give advice.

Ahmed Ghazi, the director of minimally invasive and robotic surgery at Johns Hopkins in Baltimore, has used Vuzix headsets for remote teaching, allowing trainees to see from a proctor's viewpoint. He recently used the Microsoft HoloLens to give a patient a "surgical rehearsal" of her operation: both donned headsets, and he guided her through a virtual 3D recreation of her CT scan, explaining how he would remove her tumor. "We were able to walk her through the process: 'I'm going to find the feeding vessel to the tumor, clip it, dissect away from here, make sure I don't injure this,'" he says. "There is a potential for us to bring patients to that world, to give them better understanding."

Ghazi says that as these headsets are increasingly brought into operating rooms, it's crucial for doctors to take precautions, especially around patient privacy. "Any device that is connected to a network or WiFi signal has the potential to be exposed or hacked," he says. "We have to be very diligent about what we're doing and how we're doing it."

Read More: How Meteorologists Use AI to Forecast Storms

Miguel Burch, who leads the general surgery division at Cedars-Sinai Medical Center in Los Angeles, has tested a variety of medical-focused headsets over the years. He says that the Apple Vision Pro is especially useful because of its adaptability. "If everything we wanted to use in augmented reality is proprietarily attached to a different device, then we have 10 headsets and 15 different monitors," Burch says. "But with this one, you can use it with anything that has a video feed."

Burch says he's sustained three different injuries over the course of his career from performing minimally invasive surgeries. He now hopes to bring the Apple Vision Pro to Cedars-Sinai, and believes that the headset's current medical functions are the tip of the iceberg. "Not only is it ergonomically a solution to the silent problem of surgeons having to end their careers earlier," he says, "but the ability to have images overlap is going to tremendously improve what we can do."
-
TIME.COM: TIME100 Impact Dinner London: AI Leaders Discuss Responsibility, Regulation, and Text as a "Relic of the Past"

[Photo caption: Jade Leung, Victor Riparbelli, and Abeba Birhane participate in a panel during the TIME100 Impact Dinner London on Oct. 16, 2024. TIME]

By Tharin Pillay | Updated: October 16, 2024 10:56 PM EDT | Originally published: October 16, 2024 9:03 PM EDT

On Wednesday, luminaries in the field of AI gathered at Serpentine North, a former gunpowder store turned exhibition space, for the inaugural TIME100 Impact Dinner London. Following a similar event held in San Francisco last month, the dinner convened influential leaders, experts, and honorees of TIME's 2023 and 2024 100 Influential People in AI lists, all of whom are playing a role in shaping the future of the technology.

Following a discussion between TIME's CEO Jessica Sibley and executives from the event's sponsors (Rosanne Kincaid-Smith, group chief operating officer at Northern Data Group, and Jaap Zuiderveld, Nvidia's VP of Europe, the Middle East, and Africa), and after the main course had been served, attention turned to a panel discussion. The panel featured TIME100 AI honorees Jade Leung, CTO at the U.K. AI Safety Institute, an institution established last year to evaluate the capabilities of cutting-edge AI models; Victor Riparbelli, CEO and co-founder of the U.K.-based AI video communications company Synthesia; and Abeba Birhane, a cognitive scientist and adjunct assistant professor at the School of Computer Science and Statistics at Trinity College Dublin, whose research focuses on auditing AI models to uncover empirical harms. Moderated by TIME senior editor Ayesha Javed, the discussion focused on the current state of AI and its associated challenges, the question of who bears responsibility for AI's impacts, and the potential of AI-generated videos to transform how we communicate.

The panelists' views on the risks posed by AI reflected their various focus areas. For Leung, whose work involves assessing whether cutting-edge AI models could be used to facilitate cyber, biological, or chemical attacks, and evaluating models for any other harmful capabilities more broadly, the focus was on the need to "get our heads around the empirical data that will tell us much more about what's coming down the pike and what kind of risks are associated with it."

Birhane, meanwhile, emphasized what she sees as the "massive hype" around AI's capabilities and potential to pose existential risk. "These models don't actually live up to their claims," she said, arguing that "AI is not just computational calculations. It's the entire pipeline that makes it possible to build and to sustain systems," and citing the importance of paying attention to where data comes from, the environmental impacts of AI systems (particularly in relation to their energy and water use), and the underpaid labor of data labelers as examples. "There has to be an incentive for both big companies and for startups to do thorough evaluations on not just the models themselves, but the entire AI pipeline," she said. Riparbelli suggested that both fixing the problems already in society today and thinking about "Terminator-style scenarios" are important and worth paying attention to.

Panelists agreed on the vital importance of evaluations for AI systems, both to understand their capabilities and to discern their shortfalls on issues such as the perpetuation of prejudice. Because of the complexity of the technology and the speed at which the field is moving, "best practices for how you deal with different safety challenges change very quickly," Leung said, pointing to a "big asymmetry" between what is known publicly to academics and civil society and what is known within the companies themselves.

The panelists further agreed that both companies and governments have a role to play in minimizing the risks posed by AI. "There's a huge onus on companies to continue to innovate on safety practices," said Leung. Riparbelli agreed, suggesting companies may have a "moral imperative" to ensure their systems are safe. "At the same time, governments have to play a role here. That's completely non-negotiable," said Leung.

Equally, Birhane was clear that effective regulation based on empirical evidence is necessary. "A lot of governments and policymakers see AI as an opportunity, a way to develop the economy for financial gain," she said, pointing to tensions between economic incentives and the interests of disadvantaged groups. "Governments need to see evaluations and regulation as a mechanism to create better AI systems, to benefit the general public and people at the bottom of society."

When it comes to global governance, Leung emphasized the need for clarity on what kinds of guardrails would be most desirable, from both a technical and a policy perspective. "What are the best practices, standards, and protocols that we want to harmonize across jurisdictions?" she asked. "It's not a sufficiently resourced question." Still, Leung pointed to the fact that China was party to last year's AI Safety Summit hosted by the U.K. as cause for optimism. "It's very important to make sure that they're around the table," she said.

One concrete area where we can observe the advance of AI capabilities in real time is AI-generated video. In a synthetic video created by his company's technology, Riparbelli's AI double declared that text as a technology is "ultimately transitory" and will "become a relic of the past." Expanding on the thought, the real Riparbelli said: "We've always strived towards more intuitive, direct ways of communication. Text was the original way we could store and encode information and share time and space. Now we live in a world where, for most consumers at least, they prefer to watch and listen to their content."

He envisions a world where AI bridges the gap between text, which is quick to create, and video, which is more labor-intensive but also more engaging. AI will enable anyone to create a Hollywood film from their bedroom without needing more than their imagination, he said. This technology poses obvious challenges in terms of its ability to be abused, for example by creating deepfakes or spreading misinformation, but Riparbelli emphasizes that his company takes steps to prevent this, noting that "every video, before it gets generated, goes through a content moderation process where we make sure it fits within our content policies."

Riparbelli suggests that rather than a technology-centric approach to regulating AI, the focus should be on designing policies that reduce harmful outcomes: "Let's focus on the things we don't want to happen and regulate around those."

The TIME100 Impact Dinner London: Leaders Shaping the Future of AI was presented by Northern Data Group and Nvidia Europe.
-
TIME.COMSilicon Valley Takes Artificial General Intelligence SeriouslyWashington Must TooIdeasBy Daniel ColsonOctober 18, 2024 7:10 AM EDTDaniel Colson is the Executive Director of the AI Policy Institute.Artificial General Intelligencemachines that can learn and perform any cognitive task that a human canhas long been relegated to the realm of science fiction. But recent developments show that AGI is no longer a distant speculation; its an impending reality that demands our immediate attention.On Sept. 17, during a Senate Judiciary Subcommittee hearing titled Oversight of AI: Insiders Perspectives, whistleblowers from leading AI companies sounded the alarm on the rapid advancement toward AGI and the glaring lack of oversight. Helen Toner, a former board member of OpenAI and director of strategy at Georgetown Universitys Center for Security and Emerging Technology, testified that, The biggest disconnect that I see between AI insider perspectives and public perceptions of AI companies is when it comes to the idea of artificial general intelligence. She continued that leading AI companies such as OpenAI, Google, and Anthropic are treating building AGI as an entirely serious goal.Toners co-witness William Saundersa former researcher at OpenAI who recently resigned after losing faith in OpenAI acting responsiblyechoed similar sentiments to Toner, testifying that, Companies like OpenAI are working towards building artificial general intelligence and that they are raising billions of dollars towards this goal.All three leading AI labsOpenAI, Anthropic, and Google DeepMindare more or less explicit about their AGI goals. OpenAIs mission states: To ensure that artificial general intelligenceby which we mean highly autonomous systems that outperform humans at most economically valuable workbenefits all of humanity. Anthropic focuses on building reliable, interpretable, and steerable AI systems, aiming for safe AGI. 
Google DeepMind aspires to solve intelligence and then to use the resultant AI systems to solve everything else, with co-founder Shane Legg stating unequivocally that he expects human-level AI will be passed in the mid-2020s. New entrants into the AI race, such as Elon Musks xAI and Ilya Sutskevers Safe Superintelligence Inc., are similarly focused on AGI.Policymakers in Washington have mostly dismissed AGI as either marketing hype or a vague metaphorical device not meant to be taken literally. But last months hearing might have broken through in a way that previous discourse of AGI has not. Senator Josh Hawley (R-MO), Ranking Member of the subcommittee, commented that the witnesses are folks who have been inside [AI] companies, who have worked on these technologies, who have seen them firsthand, and I might just observe dont have quite the vested interest in painting that rosy picture and cheerleading in the same way that [AI company] executives have.Senator Richard Blumenthal (D-CT), the subcommittee Chair, was even more direct. The idea that AGI might in 10 or 20 years be smarter or at least as smart as human beings is no longer that far out in the future. Its very far from science fiction. Its here and nowone to three years has been the latest prediction, he said. He didnt mince words about where responsibility lies: What we should learn from social media, that experience is, dont trust Big Tech.The apparent shift in Washington reflects public opinion that has been more willing to entertain the possibility of AGIs imminence. In a July 2023 survey conducted by the AI Policy Institute, the majority of Americans said they thought AGI would be developed within the next 5 years. Some 82% of respondents also said we should go slowly and deliberately in AI development.Thats because the stakes are astronomical. 
Saunders detailed that AGI could lead to cyberattacks or the creation of "novel biological weapons," and Toner warned that many leading AI figures believe that in a worst-case scenario AGI "could lead to literal human extinction."

Despite these stakes, the U.S. has instituted almost no regulatory oversight over the companies racing toward AGI. So where does this leave us?

First, Washington needs to start taking AGI seriously. The potential risks are too great to ignore. Even in a good scenario, AGI could upend economies and displace millions of jobs, requiring society to adapt. In a bad scenario, AGI could become uncontrollable.

Second, we must establish regulatory guardrails for powerful AI systems. Regulation should involve government transparency into what's going on with the most powerful AI systems that are being created by tech companies. Government transparency will reduce the chances that society is caught flat-footed by a tech company developing AGI before anyone is expecting it. And mandated security measures are needed to prevent U.S. adversaries and other bad actors from stealing AGI systems from U.S. companies. These light-touch measures would be sensible even if AGI weren't a possibility, but the prospect of AGI heightens their importance.

In a particularly concerning part of Saunders' testimony, he said that during his time at OpenAI there were long stretches where he or hundreds of other employees would be able to bypass access controls and steal the company's most advanced AI systems, including GPT-4. This lax attitude toward security is bad enough for U.S. competitiveness today, but it is an absolutely unacceptable way to treat systems on the path to AGI. The comments were another powerful reminder that tech companies cannot be trusted to self-regulate.

Finally, public engagement is essential. AGI isn't just a technical issue; it's a societal one.
The public must be informed and involved in discussions about how AGI could impact all of our lives.

No one knows how long we have until AGI, what Senator Blumenthal referred to as "the 64 billion dollar question," but the window for action may be rapidly closing. Some AI figures, including Saunders, think it may be in as little as three years. Ignoring the potentially imminent challenges of AGI won't make them disappear. It's time for policymakers to begin to get their heads out of the cloud.
TIME.COM
October Is Cybersecurity Awareness Month. Here's How to Stay Safe From Scams
By Adriana Morga / AP
October 19, 2024 9:13 AM EDT

NEW YORK. October is Cybersecurity Awareness Month, which means it's the perfect time to learn how to protect yourself from scams.

"Scams have become so sophisticated now. Phishing emails, texts, spoofing caller ID, all of this technology gives scammers that edge," said Eva Velasquez, president and CEO of the Identity Theft Resource Center.

As scammers find new ways to steal money and personal information, consumers should be more vigilant about who they trust, especially online. A quick way to remember what to do when you think you're getting scammed is to think about the three S's, said Alissa Abdullah, also known as Dr. Jay, Mastercard's deputy chief security officer.

"Stay suspicious, stop for a second (and think about it) and stay protected," she said.

Whether it's romance scams or job scams, impersonators are looking for ways to trick you into giving them money or sharing your personal information. Here's what to know:

Know scammers' tactics

Three common tactics used by scammers are based on fear, urgency and money, said security expert Petros Efstathopoulos. Here's how they work:

Fear. When a scammer contacts you via phone or email, they use language that makes it seem like there is a problem you need to solve. For example, a scammer contacts you over email telling you that your tax return has an error and that if you don't fix it you'll get in trouble.

Urgency. Because scammers are good at creating a sense of urgency, people tend to rush, which makes them vulnerable. Scammers often tell people they need to act right away, which can lead to them sharing private information such as their Social Security numbers.

Money. Scammers use money as bait, Efstathopoulos said.
They might impersonate tax professionals or the IRS, saying you will get a bigger tax refund than you expect if you pay them for their services or share your personal information.

Know the most common scams

Simply being aware of typical scams can help, experts say. Robocalls in particular frequently target vulnerable individuals like seniors, people with disabilities, and people with debt.

"If you get a robocall out of the blue playing a recorded message trying to get you to buy something, just hang up," said James Lee, chief operating officer at the Identity Theft Resource Center. The same goes for texts: anytime you get one from a number you don't know asking you to pay, wire, or click on something suspicious, ignore it.

Lee urges consumers to hang up and call the company or institution in question at an official number.

Scammers will also often imitate someone in authority, such as a tax or debt collector. They might pretend to be a loved one calling to request immediate financial assistance for bail, legal help, or a hospital bill.

Romance scams

So-called romance scams often target lonely and isolated individuals, according to Will Maxson, assistant director of the Division of Marketing Practices at the FTC.
These scams can take place over long periods of time, even years.

Kate Kleinart, 70, who lost tens of thousands of dollars to a romance scam over several months, said to be vigilant if a new Facebook friend is exceptionally good-looking, asks you to download WhatsApp to communicate, attempts to isolate you from friends and family, and/or gets romantic very quickly.

"If you're seeing that picture of a very handsome person, ask someone younger in your life, a child, a grandchild, a niece or a nephew, to help you reverse-image search or identify the photo," she said.

She said the man in the pictures she received was a plastic surgeon from Spain whose photos had been stolen and used by scammers.

Kleinart had also been living under lockdown during the early pandemic when she got the initial friend request, and the companionship and communication meant a lot to her while she was cut off from family. When the scam fell apart, she missed the relationship even more than the savings.

"Losing the love was worse than losing the money," she said.

Job scams

Job scams involve a person pretending to be a recruiter or a company in order to steal money or information from a job seeker.

Scammers tend to use the name of an employee from a large company and craft a job posting that matches similar positions. An initial red flag is that scammers usually try to make the job very appealing, Velasquez said.

"They're going to have very high salaries for somewhat low-skilled work," she said. "And they're often saying it's a 100% remote position because that's so appealing to people."

Some scammers post fake jobs, but others reach out directly to job seekers through direct messages or texts.
If the scammers are looking to steal your personal information, they may ask you to fill out several forms that request information like your Social Security number and driver's license details.

The only information a legitimate employer should ask for at the beginning of the process is your skills, your work experience, and your contact information, Velasquez said. Other details don't generally need to be shared with an employer until after you've gotten an offer.

Investment scams

According to Lois Greisman, an associate director of marketing practices at the Federal Trade Commission, an investment scam constitutes any get-rich-quick scheme that lures targets via social media accounts or online ads.

Investment scammers typically add different forms of testimonials, such as from other social media accounts, to suggest that the investment works. Many of these scams also involve cryptocurrency. To avoid falling for these frauds, the FTC recommends independently researching the company, especially by searching the company's name along with terms like "review" or "scam."

Quiz scams

When you're using Facebook or scrolling Google results, be aware of quiz scams, which typically appear innocuous and ask about topics you might be interested in, such as your car or favorite TV show. They may also ask you to take a personality test.

Despite these benign-seeming questions, scammers can then use the personal information you share to answer security questions for your accounts or hack your social media to send malware links to your contacts.

To protect your personal information, the FTC simply recommends steering clear of online quizzes. The commission also advises consumers to use random answers for security questions.

"Asked to enter your mother's maiden name? Say it's something else: Parmesan or another word you'll remember," advises Terri Miller, consumer education specialist at the FTC.
This way, scammers won't be able to use information they find to steal your identity.

Marketplace scams

When buying or selling products on Instagram or Facebook Marketplace, keep in mind that not everyone who reaches out to you has the best intentions.

To avoid being scammed when selling via an online platform, the FTC recommends checking buyers' profiles, not sharing any codes sent to your phone or email, and avoiding accepting online payments from unknown persons.

Likewise, when buying something from an online marketplace, make sure to diligently research the seller. Take a look at whether the profile is verified, what kind of reviews they have, and the terms and conditions of the purchase.

Don't pick up if you don't know who is calling

Scammers often reach out by phone. Ben Hoffman, head of strategy and consumer products at Fifth Third Bank, recommends that you don't pick up unknown incoming calls.

"Banks don't ask you for your password," said Hoffman. If you believe your bank is trying to reach you, give them a call at a number listed on their website. This makes it easier to know for sure that you're not talking to a scammer. As a general rule, banks don't often call unless there is suspicious activity on your account or you previously contacted them about a problem.

If you receive many unknown calls that end up being scammers or robocalls, you can use the tools available on your phone to block spam; both iPhone and Android offer ways to do this.

Use all of the technology at your disposal

There are many tools at your disposal that you can use to protect yourself from scammers online. Use a password manager to ensure you're using a complex password that scammers can't guess. Regularly checking your credit report and bank statements is also good practice, since it can help you identify whether someone has been using your bank account without your knowledge.
Turn on multi-factor authentication to make sure impersonators aren't able to access your social media or bank accounts.

When in doubt, call for help

As scams get more sophisticated, it's difficult to know who to trust or whether a person is real or an impersonator. If you aren't sure whether a job recruiter is real or whether your bank is actually asking you for information, find organizations that can help you, recommended Velasquez.

Organizations like the Identity Theft Resource Center and the AARP Fraud Watch Network offer free services for consumers who need help identifying scams or knowing what to do if they've been the victim of one.

Share what you know with loved ones

If you've taken all the necessary steps to protect yourself, you might want to help those around you. Whether you're helping your grandparents block unknown callers on their phones or sharing tips with your neighbors, talking with others about how to protect themselves from scams can be very effective.

Report the scam

If you or a family member is the victim of a scam, it's good practice to report it on the FTC's website.
TIME.COM
Meta to Use Facial Recognition to Crack Down on Scams and Recover Locked-Out Accounts
By Kurt Wagner / Bloomberg
October 21, 2024 10:15 PM EDT

Facebook parent company Meta Platforms Inc. will start using facial recognition technology to crack down on scams that use pictures of celebrities to look more legitimate, a strategy referred to as "celeb-bait" ads.

Scammers use images of famous people to entice users into clicking on ads that lead them to shady websites, which are designed to steal their personal information or request money. Meta will start using facial recognition technology to weed out these ads by comparing the images in the post with the images from a celebrity's Facebook or Instagram account. "If we confirm a match and that the ad is a scam, we'll block it," Meta wrote in a blog post. Meta did not disclose how common this type of scam is across its services.

With nearly 3.3 billion daily active users across all of its apps, Meta relies on artificial intelligence to enforce many of its content rules and guidelines. That has enabled Meta to better handle the deluge of daily reports about spam and other content that breaks the rules. It has also led to problems in the past when legitimate accounts have been unintentionally suspended or blocked due to automated errors.

Read More: The Face Is the Final Frontier of Privacy

Meta says it will also start using facial recognition technology to better assist users who get locked out of their accounts. As part of a new test, some users can submit a video selfie when they've been locked out of their accounts. Meta will then compare the video to the photos on the account to see if there is a match.

Meta has previously asked locked-out users to submit other forms of identity verification, like an ID card or official certificate, but says that the video selfie option would take only a minute to complete.
Meta will immediately delete any facial data generated by this comparison, regardless of whether there's a match, the company wrote in a blog post.

The social networking giant has a complicated history with facial recognition technology. It previously used facial recognition to identify users in uploaded photos as a way to encourage people to tag their friends and increase connections. Meta was later sued by multiple U.S. states for profiting off this technology without user consent, and in 2024 was ordered to pay the state of Texas $1.4 billion as part of the claim. Several years earlier, it agreed to pay $650 million in a separate legal suit filed in Illinois. The company will not run this video selfie test in Illinois or Texas, according to Monika Bickert, Meta's vice president of content policy.