• WWW.VG247.COM
Sony finally sees sense by making it optional to sign in to a PSN account for single-player games like Horizon Zero Dawn Remastered on PC, but doing so will net you some bonuses now
Took Your Time: Don't expect it for every game, though. Image credit: Guerrilla Games. News by Oisin Kuhnke, Contributor. Published on Jan. 29, 2025.

After months of complaints from fans, Sony is removing the PSN requirement on some of its single-player PC ports, even if it clearly still wants you to sign in anyway.

PlayStation has steadily been adding some of its biggest games to Steam over the past few years, but more recently it's been forcing players to sign in to a PSN account just to play them in the first place. That proved very controversial with Helldivers 2, so the requirement was walked back there, but it stuck right through to the recent Horizon Zero Dawn remaster, essentially locking millions of potential players out of buying the game at all, as PSN isn't available in every country (in fact, there are a whole lot of countries that don't have PSN). There's some good news today, though: Sony is removing that PSN requirement for a select few games.

Over on the PlayStation Blog, it was shared that starting with tomorrow's release of Marvel's Spider-Man 2 for PC, Sony is "working to add more benefits to playing with an account for PlayStation Network." This also applies to the upcoming port of The Last of Us Part 2 Remastered, as well as God of War Ragnarok and Horizon Zero Dawn Remastered. Those benefits? In-game unlocks! But the actual important point from the blog is this: "An account for PlayStation Network will become optional for these titles on PC." Yes, that means a whole lot more people can play those four titles.

The fact that signing in to a PSN account nets you bonuses like an early unlock for the Spider-Man 2099 Black Suit and the Miles Morales 2099 Suit in Spider-Man 2, and, uh, 50 points for bonus features and extras in The Last of Us Part 2, clearly shows that Sony would still rather people connect their accounts.

It's worth noting that the sign-in requirement isn't being removed for titles like Until Dawn, another single-player game, so only time will tell whether Sony does this for all of its games. I imagine it'll continue requiring it for online titles, as the online Legends mode in Ghost of Tsushima still requires a PSN account. Sony didn't say when the sign-in requirement is being removed for titles other than Spider-Man 2, so just keep your eyes peeled, I suppose!
  • TECHCRUNCH.COM
    MoviePass might pivot to crypto
After MoviePass's historic implosion, subscribers to the "Netflix for movie theaters" were already cautious around the company's 2023 relaunch. These moviegoers may grow even more skeptical after MoviePass sent out an email blast on Wednesday, which surveyed customers about their interest in web3.

"Artificial Intelligence and Blockchain technologies are transforming the business landscape at an unprecedented pace," the email says. "As a community-driven company, we'd love to understand your interest and knowledge in the blockchain space."

The survey asks basic questions about the respondent's familiarity with web3, like whether they own any assets like NFTs, or whether they have a digital wallet. Customers were also asked whether they believe blockchain technology is promising, and if they're interested in learning more about it.

MoviePass's possible pivot to web3 didn't come out of nowhere. When the company relaunched, it raised seed funding from Animoca Brands, a Hong Kong-based software company and venture capital firm that specializes in blockchain technology. Last year, MoviePass partnered with the Sui blockchain to allow subscribers to make payments with USDC, a cryptocurrency pegged to the price of the U.S. dollar.

At the time, MoviePass co-founder Stacy Spikes said that MoviePass intended to use web3 as a means of making moviegoing more accessible and able to reach a wider audience through deeper fan engagement. The company said it was looking toward offering on-chain rewards for seeing movies, or allowing users to invest in the movies they see (there are no further details about how that would actually work).

It's not clear that fans want these on-chain bonuses, though, or that that sort of blockchain infrastructure would even help the company succeed. In some cases, adding crypto elements to a company that functions perfectly fine without them can alienate users rather than entice them. Patreon also once surveyed its users about their interest in crypto, but the creator membership platform was met with a clear no.

Without adding a web3 component, the new-and-improved MoviePass already turned its first-ever profit in 2023. While the first version of MoviePass was impossibly unsustainable (subscribers could see unlimited movies in theaters for just $10, less than the cost of one movie ticket), the new iteration makes a more modest offer. Now, MoviePass operates on a somewhat confusing credits system, where each movie showing can be redeemed for a certain number of credits, which fluctuates depending on the time of day and the format of the screening (IMAX, 3D, etc.). Subscribers who live in places where movie tickets cost more, like New York City or Los Angeles, pay a higher monthly fee.

Last February, MoviePass announced that subscribers had seen 1 million movies through its offerings, but it did not specify how many subscribers it has.
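For illustration only, the credits scheme described above amounts to a small pricing function over showtime and screening format. Here's a minimal sketch; every credit value, multiplier, and cutoff is hypothetical, since MoviePass hasn't published its actual tables:

```python
# Hypothetical sketch of a MoviePass-style credits system. None of these
# numbers are real; they only illustrate how per-showing cost can vary by
# time of day and screening format, as the article describes.

BASE_CREDITS = 10  # assumed cost of a standard 2D evening showing

FORMAT_MULTIPLIER = {  # premium formats redeem for more credits
    "2D": 1.0,
    "3D": 1.3,
    "IMAX": 1.6,
}

def showing_cost(screening_format: str, start_hour: int) -> int:
    """Credit cost of one showing, by format and 24-hour start time."""
    multiplier = FORMAT_MULTIPLIER[screening_format]
    if start_hour < 17:  # assume matinees are discounted
        multiplier *= 0.8
    return round(BASE_CREDITS * multiplier)

print(showing_cost("2D", 13))    # 8 credits: 2D matinee
print(showing_cost("IMAX", 20))  # 16 credits: evening IMAX
```

The regional pricing the article mentions (higher monthly fees in New York or Los Angeles) would sit on the subscription side rather than in the per-showing cost.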
  • TECHCRUNCH.COM
Climate change ignited LA's wildfire risk. These startups want to extinguish it
Climate change increased the likelihood of the recent Southern California wildfires by 35%, according to a new study published by World Weather Attribution, a decade-old international group of climate scientists and other experts.

The study comes as Los Angeles residents start to rebuild their lives in the wake of catastrophic fires that erupted earlier this month. The fires were sparked by near-perfect conditions: the two preceding years were unusually wet, boosting the growth of wildfire-adapted vegetation. This year, climate change dealt the region two heavy blows, a delayed annual rainy season and intense Santa Ana winds that fanned the flames and spread embers far and wide.

These extreme weather conditions will be more common, according to the study, adding fresh urgency to a burgeoning group of climate adaptation startups that hope to blunt the impact of wildfires.

"The extreme weather conditions are now likely to occur once every 17 years. Compared to a 1.3°C cooler climate this is an increase in likelihood of about 35%," the study's authors wrote. "This trend is however not linear," they added, stating that the frequency of fire-prone years has been increasing rapidly in recent years. (For a quick worked conversion of these figures, see the sketch at the end of this article.)

Southern California is no stranger to fire. Its ecosystems have evolved to handle, and even thrive under, regular low-intensity wildfires. But over a century of fire suppression disrupted the natural regime, and in its absence, people have built deeper into fire-adapted ecosystems.

Today, these areas are known as the wildland-urban interface, or WUI, and the density of housing there complicates the picture. Because the landscape has been carved up into smaller parcels, removing excess vegetation often falls on individual homeowners, who may not realize they're responsible for the task.

Elsewhere, it's often best to introduce prescribed burning, in which land managers start low-intensity fires during weather conditions that make the blaze easy to contain and direct. The process helps rebalance the ecosystem and prevent dry brush from building up. But even in places where prescribed burning is possible, it's still difficult to introduce, requiring public buy-in and well-trained crews.

Startups have stepped into the void. Vibrant Planet has developed a platform that helps utilities and land managers analyze a range of data to determine where wildfire risk is highest. Then, it helps them work with a range of stakeholders, including landowners, conservation organizations, and Indigenous groups, to develop plans to mitigate the risk.

Once plans are in place, other startups step in to do the dirty work. One company, Kodama, retrofits forestry equipment for remote operation, allowing forests to be thinned at lower cost, reducing the fuel load that can lead to catastrophic wildfire.

Another, BurnBot, has developed a remotely operated machine that does the work of a prescribed burn in the relative safety of its metal shroud. There, propane torches burn vegetation as it slides under the machine. Fans on top of the machine keep air flowing into the burn chamber, raising the fire's temperature to reduce smoke and embers. At the rear of the machine, rollers and water misters extinguish any flames or embers that remain on the ground.

But even with vegetation management and prescribed burning, the climate and ecosystems of Southern California won't be completely wildfire-free.
To further minimize the risk of catastrophic fires, another slate of startups is working to spot wildfires soon after they ignite so crews can respond quickly.

Pano, for example, uses AI to crunch a range of data sources, including cameras, satellite imagery, field sensors, and emergency alerts, to automatically detect new fires. Google is also in the game, having worked with Muon Space to launch FireSat, which can image wildfires from orbit every 20 minutes.

And should wildfires escape early detection and containment, other startups like FireDome are developing tools to protect homes and businesses. The Israel-based startup has created an AI-assisted fire defense system that launches projectiles filled with fire retardant. The automated system can lay down a perimeter of retardant before fire reaches a property or, if embers are already flying, it can target hotspots to extinguish flames before they turn into conflagrations.

Landowners and managers will have to get smarter about how to limit their risk. There's unlikely to be a single solution, but rather a combination of advanced technology and old-fashioned land management.
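To put the study's headline numbers in concrete terms: if fire-prone conditions now recur about once every 17 years, a 35% increase in likelihood implies they recurred roughly once every 23 years in the 1.3°C-cooler climate. A back-of-envelope sketch of that conversion, assuming the 35% figure applies to the annual probability of such conditions (my arithmetic, not the study's):

```python
# Back-of-envelope conversion between likelihood increase and return period,
# using the figures quoted from the World Weather Attribution study above.
# The assumption that the 35% applies to annual probability is mine.

increase = 0.35          # likelihood increase vs. a 1.3°C cooler climate
return_period_now = 17   # years between fire-prone condition events today

annual_prob_now = 1 / return_period_now                # ~5.9% per year
annual_prob_cooler = annual_prob_now / (1 + increase)  # ~4.4% per year

print(f"Return period today: {return_period_now} years")
print(f"Return period, cooler climate: {1 / annual_prob_cooler:.0f} years")  # ~23
```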
  • WWW.AWN.COM
    Comedians The Sklar Brothers to Host 23rd Annual VES Awards
The Visual Effects Society (VES) has just announced that actor-comedians Randy and Jason Sklar, The Sklar Brothers, will host the 23rd Annual VES Awards on February 11th at The Beverly Hilton hotel. This marks the duo's first hosting engagement of the annual celebration that recognizes outstanding visual effects artistry and innovation from around the world.

"No one understands the power of visual effects more than two identical humans," said one half of the VES Awards hosting team, The Sklar Brothers. "We are honored to have the opportunity to host the VES Awards. And if Randy isn't funny, we'll edit him out in post."

The Sklar Brothers are known for their post-modern take on a stand-up comedy duo. Randy and Jason Sklar can currently be seen in the fourth season of FX's What We Do in the Shadows, playing fictional Property Brothers Bran and Toby Daltry. The Sklars produced, wrote, and starred in The Nosebleeds, a UFC original series that released this summer on UFC's Fight Pass. The series is a hilarious deep dive into UFC's history featuring comedy sketches, field pieces, and in-studio character bits.

The Sklars notably hosted and produced History Channel's United Stats of America and created and starred in the ESPN cult-hit series Cheap Seats, besides being guest hosts on Jeff Ross Presents Roast Battle. Their television credits include GLOW, Bajillion Dollar Properties, Maron, Agent Carter, Playing House, Partners, Grey's Anatomy, Curb Your Enthusiasm, It's Always Sunny in Philadelphia, Entourage, CSI, Law & Order, and Comedy Central Presents. They released their special, Hipster Ghosts, on Starz. They also recently produced the documentary Poop Talk. The Sklars have had several appearances on both the truTV series Those Who Can't and AMC's hit series Better Call Saul.

They can also be seen in Wild Hogs and The Comebacks, while their internet shows Held Up, Layers, and Back on Topps have received critical acclaim. They also recurred as panelists on ESPN's SportsCenter and E!'s Chelsea Lately. Their podcast View From the Cheap Seats (formerly Sklarbro Country) was nominated for best comedy podcast at Comedy Central's 2012 comedy awards, and their new podcast Dumb People Town is averaging 75k downloads per episode in its first month. They are currently developing the pilot for Dumb People Town, based on the podcast, with Will Arnett's Electric Avenue, and Val Kilmer Ruined Our Lives with Bill Lawrence.

Awards in 25 categories for outstanding visual effects will be presented at the ceremony. Special honorees include: Golden Globe nominee and Emmy Award-winning actor-producer Hiroyuki Sanada, receiving the VES Award for Creative Excellence; Academy Award-winning director and visual effects supervisor Takashi Yamazaki, receiving the VES Visionary Award; and acclaimed virtual reality/immersive technology pioneer Dr. Jacquelyn Ford Morie, receiving the VES Georges Méliès Award.

Source: Visual Effects Society. Dan Sarto is Publisher and Editor-in-Chief of Animation World Network.
  • WWW.ZDNET.COM
    One of the best Ring cameras I've tested is 50% off for a limited time
You can save $90 at Amazon on what is arguably one of the best Ring cameras available: the Ring Stick Up Cam Pro.
  • WWW.ZDNET.COM
    OpenAI tailored ChatGPT Gov for government use - here's what that means
    ChatGPT will be making its way to federal, state, and local agencies. The new version comes with benefits - and concerns.
  • WWW.FORBES.COM
    Judge Throws Out Facial Recognition Evidence In Murder Case
Facial recognition system. Image credit: Getty.

In a recent ruling that underscores the growing debate over artificial intelligence in criminal investigations, an Ohio judge has excluded facial recognition evidence in a murder case, effectively preventing prosecutors from securing a conviction. The decision raises broader concerns about the reliability and transparency of facial recognition technology in law enforcement and the legal challenges it presents when used in court, The Record reports.

The case involves the fatal shooting of Blake Story in Cleveland in February 2024. With no immediate leads, investigators turned to surveillance footage taken six days after the crime. They used Clearview AI, a controversial facial recognition software, to identify a suspect, Qeyeon Tolbert.

Acting on this identification, police obtained a search warrant for Tolbert's residence, where they recovered a firearm and other evidence. But as the trial approached, a flaw in the investigation came to light: police had not independently corroborated Tolbert's identity before executing the search warrant, nor had they disclosed the use of facial recognition in their affidavit.

On Jan. 9, the judge ruled in favor of a defense motion to suppress the evidence, stating that the warrant was granted without proper probable cause. With the firearm and other key evidence excluded, prosecutors were left with little to move forward on, forcing them to file an appeal. Without the suppressed evidence, the state has acknowledged that securing a conviction will be extremely difficult.

This case is one of the latest in a growing list of legal challenges surrounding facial recognition technology. While law enforcement agencies argue that AI-driven identification speeds up investigations, defense attorneys and privacy advocates warn that overreliance on these tools can lead to wrongful arrests, constitutional violations, and breaches of due process.

How Facial Recognition Plays a Role in Law Enforcement

Facial recognition software has become an increasingly common tool in criminal investigations. Programs like Clearview AI allow law enforcement agencies to compare suspect images against vast databases of photos scraped from social media, public websites, and other online sources. With an estimated 30 billion images in its system, Clearview AI is one of the largest facial recognition databases in the world.

Proponents of the technology argue that it provides investigators with crucial leads when traditional methods fail. In cases where security footage captures an unknown suspect, facial recognition can rapidly generate potential matches, allowing law enforcement to act more quickly (a generic sketch of how such matching typically works appears at the end of this article).

However, Clearview AI itself acknowledges that its system is not designed to be the sole basis for arrests. The company warns that its results should be treated as leads rather than definitive proof of identity. Yet a review of 23 police departments by The Washington Post found that at least 15 departments had made arrests based solely on facial recognition matches, raising concerns about the accuracy of these systems and the due diligence of law enforcement.

The Challenges of Using Facial Recognition in Court

Despite its growing use, facial recognition remains controversial, particularly when it serves as the foundation for search warrants and arrests.
Legal experts point to several key concerns:

Accuracy and bias issues: Facial recognition technology has been shown to be less accurate for people of color, women, and older adults, increasing the risk of wrongful identifications. A 2020 study by the National Institute of Standards and Technology found that many facial recognition systems exhibit racial and gender biases, leading to a higher rate of false positives for Black and Asian individuals.

Lack of transparency: Some police departments do not disclose their use of facial recognition to suspects, defense attorneys, or even judges. This lack of transparency can violate due process rights, preventing defendants from fully challenging the evidence against them.

Legal admissibility issues: Courts are increasingly skeptical of facial recognition evidence, particularly when it is the sole or primary basis for a search warrant. In this case, the Ohio judge ruled that because police had not independently verified Tolbert's identity before obtaining a warrant, the search and seizure of evidence violated his Fourth Amendment rights.

Privacy and surveillance concerns: The widespread use of facial recognition raises broader questions about mass surveillance. Critics warn that if unchecked, these technologies could enable warrantless tracking of individuals in public spaces, blurring the lines between necessary policing and civil liberties violations.

The Future of Facial Recognition in Criminal Cases

The Ohio case is a warning sign for law enforcement agencies relying on facial recognition as a core investigative tool. While AI-driven identification can assist in narrowing down suspects, courts are signaling that its use must be accompanied by traditional investigative work to establish probable cause.

As more cases challenge the validity of AI-generated identifications, legal frameworks around facial recognition will likely evolve. Some states have already enacted restrictions on the technology. Maine, Massachusetts, and Illinois have passed laws limiting or banning law enforcement use of facial recognition without a warrant, citing privacy concerns.

For now, the Ohio ruling is a reminder that while AI can assist human decision-making, it cannot replace the fundamental principles of due process. As courts continue to scrutinize the use of facial recognition in criminal cases, law enforcement agencies will need to ensure that they use these tools responsibly, with proper oversight and adherence to constitutional protections.
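As background for the concerns above: most modern facial recognition systems work by converting each face photo into a numeric "embedding" and ranking database images by similarity, with a tunable threshold deciding what counts as a candidate match. The sketch below is a generic illustration of that approach, not Clearview AI's actual pipeline; the random vectors stand in for the output of a real face-embedding model.

```python
import numpy as np

# Generic sketch of embedding-based face matching. This is NOT Clearview AI's
# actual system; random vectors stand in for a real face-embedding network.

rng = np.random.default_rng(0)

def embed(image) -> np.ndarray:
    """Placeholder for a real face-embedding network (e.g. a CNN)."""
    return rng.normal(size=128)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Database of previously embedded photos (here: random stand-ins).
database = {f"person_{i}": rng.normal(size=128) for i in range(1000)}

def top_matches(probe_image, threshold: float = 0.3, k: int = 5):
    """Rank database entries by similarity to the probe image.

    Results above `threshold` are candidate leads, not identifications:
    loosening the threshold surfaces more candidates but also more false
    positives, which is the failure mode critics point to.
    """
    probe = embed(probe_image)
    scored = sorted(
        ((name, cosine_similarity(probe, vec)) for name, vec in database.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )
    return [(name, score) for name, score in scored[:k] if score >= threshold]

print(top_matches(probe_image=None))
```

Whether any database entry clears the bar depends entirely on where the operator sets the threshold, which is why vendors insist results are leads rather than proof of identity.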
  • WWW.FORBES.COM
    Alibaba Unveils Qwen 2.5: A DeepSeek Rival?
Chinese internet company Alibaba launches its Qwen 2.5 generative AI model, taking aim at its fellow China-based competitor DeepSeek-V3. Image credit: Getty.

There's been an escalation in the generative AI large language model wars as Alibaba's Qwen 2.5 launched Wednesday. This latest AI salvo from China-based Alibaba is directly aimed at its in-country rival DeepSeek, which launched its own AI, DeepSeek-V3, in December 2024 and its R1 version in mid-January.

What sets DeepSeek-V3 apart from other foundation AI models such as Claude, ChatGPT, Gemini, Llama, and Perplexity is that its unique design came online much faster than the dominant players and required much less computing power to train than the other systems.

Why Alibaba Qwen 2.5 Launched

Because of its upgraded algorithm architecture, the V3 model reportedly produces results comparable to existing LLMs; however, the company states that it was able to train DeepSeek-V3 for less than $6 million using older Nvidia H800 GPU chips that debuted almost two years ago (that's almost a lifetime within tech circles). Shortly after its release on January 20th, the DeepSeek-R1 AI assistant, powered by V3, became the top download in Apple's Top Free App category.

On Tuesday, the reality of this achievement sank in on Wall Street as investors sold off Nvidia stock, erasing nearly $600 billion of the company's market value and raising questions about whether pricey next-gen GPUs such as its H200 and Blackwell processors will even be necessary. Meanwhile, Meta has reportedly scrambled to establish Llama "war rooms" to try to reverse engineer how the latest DeepSeek rollouts debuted so fast and so cheap.

Alibaba Qwen 2.5 Versus DeepSeek-V3

But the premiere of DeepSeek's latest innovations didn't just catch U.S.-based AI developers and chip makers off guard. Media outlets suggest that it spurred an AI upgrade by TikTok owner ByteDance and this latest launch of Alibaba's Qwen 2.5. It's reported that Alibaba specifically called out DeepSeek in a WeChat post, stating that Qwen 2.5 outperforms V3.

While it's too early to tell which AI model from China will come out on top, there are concerns surfacing about potential risks for both platforms. Issues that plagued China-owned TikTok are being raised regarding Qwen and DeepSeek-V3 around data security, privacy, and potential misreporting of performance stats, along with separate allegations from OpenAI and Microsoft of possible intellectual property theft, which could call into question whether V3 was trained from scratch or leveraged other AI models.
  • TIME.COM
    Why DeepSeek Is Sparking Debates Over National Security, Just Like TikTok
By Andrew R. Chow. Updated: January 29, 2025 12:00 PM EST | Originally published: January 29, 2025 11:28 AM EST

The fast-rising Chinese AI lab DeepSeek is sparking national security concerns in the U.S., over fears that its AI models could be used by the Chinese government to spy on American civilians, learn proprietary secrets, and wage influence campaigns. In her first press briefing, White House Press Secretary Karoline Leavitt said that the National Security Council was "looking into" the potential security implications of DeepSeek. This comes amid news that the U.S. Navy has banned use of DeepSeek among its ranks due to potential security and ethical concerns.

DeepSeek, which currently tops the Apple App Store in the U.S., marks a major inflection point in the AI arms race between the U.S. and China. For the last couple of years, many leading technologists and political leaders have argued that whichever country develops AI the fastest will have a huge economic and military advantage over its rivals. DeepSeek shows that China's AI has developed much faster than many had believed, despite efforts from American policymakers to slow its progress.

However, other privacy experts argue that DeepSeek's data collection policies are no worse than those of its American competitors, and worry that the company's rise will be used as an excuse by those firms to call for deregulation. In this way, the rhetorical battle over the dangers of DeepSeek is playing out on similar lines as the in-limbo TikTok ban, which has deeply divided the American public.

"There are completely valid privacy and data security concerns with DeepSeek," says Calli Schroeder, the AI and Human Rights lead at the Electronic Privacy Information Center (EPIC). "But all of those are present in U.S. AI products, too."

Read More: What to Know About DeepSeek

Concerns over data

DeepSeek's AI models operate similarly to ChatGPT, answering user questions thanks to a vast amount of data and cutting-edge processing capabilities. But its models are much cheaper to run: the company says that it trained its R1 model for just $6 million, which is "a good deal less" than the cost of comparable U.S. models, Anthropic CEO Dario Amodei wrote in an essay.

DeepSeek has built many open-source resources, including the LLM V3, which rivals the abilities of OpenAI's closed-source GPT-4o. Some people worry that by making such a powerful technology open and replicable, it presents an opportunity for people to use it more freely in malicious ways: to create bioweapons, launch large-scale phishing campaigns, or fill the internet with AI slop. However, there is another contingent of builders, including Meta's VP and chief AI scientist Yann LeCun, who believe open-source development is a more beneficial path forward for AI.

Another major concern centers upon data. Some privacy experts, like Schroeder, argue that most LLMs, including DeepSeek's, are built upon sensitive or faulty databases: information from data leaks of stolen biometrics, for example. David Sacks, President Donald Trump's AI and crypto czar, accused DeepSeek of leaning on the output of OpenAI's models to help develop its own technology.

There are even more concerns about how users' data could be used by DeepSeek. The company's privacy policy states that it automatically collects a slew of input data from its users, including IP addresses and keystroke patterns, and may use that to train its models.
Users' personal information is stored "in secure servers located in the People's Republic of China," the policy reads.

For some Americans, this is especially worrying because generative AI tools are often used in personal or high-stakes tasks: to help with their company strategies, manage finances, or seek health advice. That kind of data may now be stored in a country with few data rights laws and little transparency with regard to how that data might be viewed or used. "It could be that when the servers are physically located within the country, it is much easier for the government to access them," Schroeder says.

One of the main reasons that TikTok was initially banned in the U.S. was concern over how much data the app's Chinese parent company, ByteDance, was collecting from Americans. If Americans start using DeepSeek to manage their lives, the privacy risks will be akin to "TikTok on steroids," says Douglas Schmidt, the dean of the School of Computing, Data Sciences and Physics at William & Mary. "I think TikTok was collecting information, but it was largely benign or generic data. But large language model owners get a much deeper insight into the personalities and interests and hopes and dreams of the users."

Geopolitical concerns

DeepSeek is also alarming those who view AI development as an existential arms race between the U.S. and China. Some leaders argued that DeepSeek shows China is now much closer than previously believed to developing AGI, an AI that can reason at a human level or higher. American AI labs like Anthropic have safety researchers working to mitigate the harms of these increasingly formidable systems. But it's unclear what kind of safety research team DeepSeek employs. The cybersecurity of DeepSeek's models has also been called into question. On Monday, the company limited new sign-ups after saying the app had been targeted with a large-scale malicious attack.

Well before AGI is achieved, a powerful, widely used AI model could influence the thought and ideology of its users around the world. Most AI models apply censorship in certain key ways, or display biases based on the data they are trained upon. Users have found that DeepSeek's R1 refuses to answer questions about the 1989 massacre at Tiananmen Square, and asserts that Taiwan is a part of China. This has sparked concern from some American leaders about DeepSeek being used to promote Chinese values and political aims, or being wielded as a tool for espionage or cyberattacks.

"This technology, if unchecked, has the potential to feed disinformation campaigns, erode public trust, and entrench authoritarian narratives within our democracies," Ross Burley, co-founder of the nonprofit Centre for Information Resilience, wrote in a statement emailed to TIME.

AI industry leaders, and some Republican politicians, have responded by calling for massive investment in the American AI sector. President Trump said on Monday that DeepSeek should be "a wake-up call" for American industries, and that the U.S. needs to be "laser-focused on competing to win." Sacks posted on X that DeepSeek R1 shows the AI race will be very competitive and that President Trump "was right to rescind the Biden EO," referring to Biden's AI Executive Order, which, among other things, drew attention to the potential short-term harms of developing AI too fast.

These fears could lead to the U.S. imposing stronger sanctions against Chinese tech companies, or perhaps even trying to ban DeepSeek itself.
On Monday, the House Select Committee on the Chinese Communist Party called for stronger export controls on technologies underpinning DeepSeek's AI infrastructure.

But AI ethicists are pushing back, arguing that the rise of DeepSeek actually reveals the acute need for industry safeguards. "This has the echoes of the TikTok ban: there are legitimate privacy and security risks with the way these companies are operating. But the U.S. firms who have been leading a lot of the development of these technologies are similarly abusing people's data. Just because they're doing it in America doesn't make it better," says Ben Winters, the director of AI and data privacy at the Consumer Federation of America. "And DeepSeek gives those companies another weapon in their chamber to say, 'We really cannot be regulated right now.'"

As ideological battle lines emerge, Schroeder, at EPIC, cautions users to be careful when using DeepSeek or other LLMs. If you have concerns about the origin of a company, she says, "be very, very careful about what you reveal about yourself and others in these systems."
  • WWW.TECHSPOT.COM
    AI creates glowing protein that would've taken nature 500 million years to evolve
What just happened? Scientists have used AI to design the blueprint for an entirely new protein that has never existed in nature. This AI-generated protein, dubbed esmGFP, would have taken half a billion years to evolve naturally. And the best part? It glows.

In a study published in Science, researchers detailed how they used advanced language models to fast-forward evolution, simulating hundreds of millions of years of genetic changes in just hours. The result? A synthetic version of green fluorescent protein (GFP) with an amino acid sequence only 58 percent similar to its closest natural counterpart.

For the uninitiated, GFPs are biomolecules that give certain marine creatures, like jellyfish, their vivid glow. Scientists frequently use them as biomarkers, attaching their genes to other proteins of interest to make them fluoresce under a microscope.

In nature, these glowing proteins evolved over eons through random genetic mutations. But the AI model behind this breakthrough, called ESM3, took a radically different approach. Instead of evolving proteins step by step like life on Earth, it was trained on a dataset of 2.78 billion known proteins, using one trillion teraflops of computing power, to generate entirely new hypothetical sequences.

For esmGFP specifically, the AI coded 96 mutations that would take over 500 million years to arise naturally in organisms like jellyfish or corals.

Alex Rives, co-founder of EvolutionaryScale, told Live Science that by inferring the fundamental biological rules, their model can create functional proteins that defy the constraints of natural evolution. Rives and his colleagues previously worked on precursor models to ESM3 at Meta before founding EvolutionaryScale last year. Just months later, the startup raised $142 million to advance its research.

However, not everyone is entirely convinced. Tiffany Taylor, an evolutionary biologist at the University of Bath, acknowledged to Live Science that the model holds promise for drug development and bioengineering. Still, she cautioned that AI protein models don't account for the complex selective forces shaping entire organisms.

Despite these concerns, the study highlights how AI could dramatically expand the range of synthetic proteins available, with potential applications in medicine and environmental science.

"The model has the potential to accelerate discovery across a broad range of applications, ranging from the development of new cancer treatments to creating proteins that could help capture carbon," a press release from last year noted.
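To make the "58 percent similar" figure concrete: sequence similarity here is essentially the fraction of aligned amino acid positions that match. Below is a minimal sketch of that calculation using short made-up sequences rather than the real esmGFP sequence; real comparisons also first align the sequences to handle insertions and deletions.

```python
# Minimal illustration of percent sequence identity, the quantity behind the
# article's "58 percent similar" figure. Both sequences here are made up.

def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percentage of matching residues between two pre-aligned sequences."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100 * matches / len(seq_a)

natural  = "MSKGEELFTGVVPILVELDGDVNG"  # toy stand-in for a natural GFP fragment
designed = "MSKGAALFTGVVPQLVKLDGEVHG"  # toy stand-in for an AI-designed variant

print(f"{percent_identity(natural, designed):.0f}% identical")  # 75% identical
```

By this measure, esmGFP matching its closest natural counterpart at only 58 percent means nearly half of its positions differ, which is why the authors frame the design as a jump evolution would have needed hundreds of millions of years to make.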