• The Biggest Winner In The DeepSeek Disruption Story Is Open Source AI
    www.forbes.com
DeepSeek-R1, the AI model created by DeepSeek, a little-known Chinese company, at a fraction of what it cost OpenAI to build its own models, has sent the AI industry into a frenzy for the last couple of days. When the news broke, the AI world was quick to frame it as yet another flashpoint in the ongoing U.S.-China AI rivalry.

However, I argue that the real story isn't about geopolitics, although there's a strong geopolitical layer somewhere in there. I believe that the real story is about the growing power of open-source AI and how it's upending the traditional dominance of closed-source models, a line of thought that Yann LeCun, Meta's chief AI scientist, also shares. LeCun, a vocal proponent of open-source AI, recently wrote in a LinkedIn post: "To people who see the performance of DeepSeek and think: 'China is surpassing the U.S. in AI.' You are reading this wrong. The correct reading is: 'Open-source models are surpassing proprietary ones.'"

While LeCun's argument may seem simple, its message is far weightier than it appears on the surface: DeepSeek-R1 didn't emerge from a vacuum. It built on the foundations of open-source research, leveraging previous advancements like Meta's Llama models and the PyTorch ecosystem. DeepSeek's remarkable success with its new AI model reinforces the notion that open-source AI is becoming more competitive with, and perhaps even surpassing, the closed, proprietary models of major technology firms.

Open-Source Vs. Closed AI

Open-source AI, according to the Open Source Initiative, is an AI system made available under terms and in a way that grant the freedom to:
- use the system for any purpose and without having to ask for permission,
- study how the system works and inspect its components,
- modify the system for any purpose, including to change its output, and
- share the system for others to use, with or without modifications, for any purpose.

The gist of all that jargon is that open-source AI models give you the freedom to modify and build whatever you want. It's like having free, unrestricted access to all-purpose flour if you were a baker. Imagine the wide range of things you could bake.

Closed-source AI, on the other hand, means just the opposite. In closed AI models, the source code and underlying algorithms are kept private and cannot be modified or built upon. The major argument for this approach is privacy: by keeping AI models closed, its proponents say, they can better protect users against data privacy breaches and potential misuse of the technology.

But according to Manu Sharma, cofounder and CEO of Labelbox, "innovations in software are very hard to keep closed-source in today's world. Almost every foundational piece of technology in AI is open source and has gained large mindshare." Sharma believes we are witnessing the same trend in AI that we saw with databases and operating systems, where open solutions eventually dominated the industry. With proprietary models requiring massive investment in compute and data acquisition, open-source alternatives offer more attractive options to companies seeking cost-effective AI solutions.

DeepSeek-R1's training cost, reportedly just $6 million, has shocked industry insiders, especially when compared to the billions spent by OpenAI, Google, and Anthropic on their frontier models. Kevin Surace, CEO of Appvance, called it "a wake-up call," proving that China has focused on low-cost rapid models while the U.S.
has focused on huge models at a huge cost.

A Looming AI Price War

DeepSeek's AI model undoubtedly raises a valid question about whether we are on the cusp of an AI price war. Even Sam Altman, OpenAI CEO, acknowledged in a tweet late yesterday that DeepSeek's R1 is "an impressive model, particularly around what they're able to deliver for the price."

Andy Thurai, VP and principal analyst at Constellation Research, noted in his Weekly Tech Bytes newsletter on LinkedIn that DeepSeek's efficiency will inevitably put downward pressure on AI costs. "If it is proven that the entire AI software supply chain can be done cheaply using open-source software, many startups will take a hit. VCs will stop writing blank checks to start-ups that have generative AI on their pitch deck."

Venture-backed AI firms that rely on closed-source models to justify their high valuations could take a devastating hit in the aftermath of the DeepSeek tsunami. Companies that fail to differentiate themselves beyond the mere ability to train LLMs could face significant funding challenges.

Privacy And Security Concerns

However, not everyone is enthusiastic about open-source AI taking center stage. Open models democratize AI access, but they also introduce concerns about security, misuse and privacy. Surace raised concerns about DeepSeek's origins, noting that "privacy is an issue because it's China. It's always about collecting data from users. So users beware." While DeepSeek's model weights and code are open, its training data sources remain largely opaque, making it difficult to assess potential biases or security risks.

Syed Hussain and Neil Benedict, co-founders of Shiza.ai, expressed significant concerns about both the technical claims and the potential security implications of DeepSeek. Both viewed DeepSeek not merely as a company competing in the market, but as potentially part of a broader Chinese state strategy aimed at disrupting the U.S.
AI industry and market confidence.

"While people also worry about U.S. companies having access to their data, those companies are bound by U.S. privacy laws and constitutional protections," said Benedict. In contrast, he argued that DeepSeek, potentially tied to the Chinese state, operates under different rules and motivations. While U.S. companies have profit-driven motivations for data collection, DeepSeek's free model raises questions about hidden incentives, he said. Hussain further described DeepSeek as a potential "Trojan horse," suggesting that it could be a sophisticated data collection operation masked as a competitive AI product.

However, Thurai emphasized the transparency problem in AI models, regardless of origin. "When choosing a model, transparency, the model creation process, and auditability should be more important than just the cost of usage," he said. While DeepSeek-R1 is open-source, many companies may hesitate to adopt it without clearer disclosures about its dataset and safety mechanisms.

The Fallout For Nvidia And The AI Supply Chain

The financial markets have already reacted to DeepSeek's impact. Although Nvidia's stock has slightly rebounded by 6%, it faced short-term volatility, reflecting concerns that cheaper AI models will reduce demand for the company's high-end GPUs. But Sharma remains bullish on Nvidia's long-term prospects. "Affordable and abundant AGI means many more people are going to use it faster, and use it everywhere. Compute demand around inference will soar," he told me. This suggests that while training costs may decline, the demand for AI inference, running models efficiently at scale, will continue to grow.
Companies like Nvidia may pivot toward optimizing hardware for inference workloads rather than focusing solely on the next wave of ultra-large training clusters.

The Future Of Open-Source AI

If DeepSeek-R1 has proven anything, it's that high-performance open-source models are here to stay, and they may become the dominant force in AI development. As LeCun noted, DeepSeek has "profited from open research and open source (e.g. PyTorch and Llama from Meta). Because their work is published and open source, everyone can profit from it. That is the power of open research and open source." Businesses now need to rethink their reliance on closed-source models and consider the benefits of contributing to, and benefiting from, an open AI ecosystem. Moving forward, the debate won't just be about an AI Cold War between the U.S. and China, but about whether the future of AI will be more open, accessible, and shared, or closed, proprietary, and expensive.

The genie is out of the bottle, though. And it looks like it's open-source.
  • Google's 105-Qubit Willow Chip Achieves Major Quantum Milestones
    www.forbes.com
    Google has chalked up several amazing quantum computing records with its newest 105-qubit superconducting quantum chip, called Willow. This performance is no surprise, considering Google's heritage of record-setting quantum chips, reaching back to Foxtail in 2017, Bristlecone in 2018 and Sycamore in 2019.

Google announced Willow last month, and I think it is necessary to reemphasize the importance of this research after Jensen Huang, CEO of Nvidia, recently remarked that quantum computing likely won't be useful for another 20 years. Granted, there remains a lot of ground to cover to reach fault tolerance, which will be critical for many practical applications, but a lot has also been accomplished in quantum in just the past 12 months. Marketplace evidence, research results (including qubit fidelity close to what is needed for fault tolerance) and the roadmaps of many quantum computing companies indicate that useful quantum technology is much closer than Huang believes.

Read on for more on how the new Willow chip performed on the random circuit sampling benchmark. I also discuss what may be the most important piece of this development for future quantum fault tolerance: the results of applying a new error-corrected surface code. To provide more context, I'll also share historical perspective from Professor John Martinis, who led some of the most important work on earlier generations of Google's quantum chips, and explain how his work has now paid off, just as he predicted, with Willow.

Willow Hardware And Software Improvements

Willow's performance across key metrics. (Chart: Google)

Willow has improved on earlier generations of Google's quantum chips in several ways.
For starters, the use of tunable qubits and couplers in Willow has provided it with much faster gates and operations that help achieve lower error rates. This speed also allows hardware to be optimized or adjusted during operation. Variances in superconducting qubits can sometimes create high error rates, but tuners allow nonconforming qubits to be reconfigured and aligned with other qubits to eliminate errors.

Next up is the duration of quantum states. A major limitation of quantum computing has been the length of time qubits can maintain their quantum states. Willow has increased that time by 5x, from 20 microseconds to 100 microseconds. This allows more complex problems to be run.

A third advantage of Willow is that Google's logical qubits can now function below the critical quantum error correction (QEC) threshold. The QEC threshold arises from a theory developed in the 1990s, and until now it has been a barrier to efficient quantum computing. In the Willow chip, however, error rates are cut in half as physical qubits are added at scale. Thanks to this, as Google increases the size of its surface code from 3x3 to 5x5 to 7x7, the encoded logical qubits maintain their coherence for longer times. Increasing the grid size allows more complex error patterns to be corrected, similar to adding redundancy in classical error correction. It also means that logical qubits can maintain their quantum states longer than the underlying physical qubits.

This leads me to the single most important part of Google's Willow announcement: Willow is the first quantum processor to demonstrate an exponential reduction in error rates as the number of qubits is increased. Traditionally, adding qubits causes the error rate to increase. Other factors necessary for fault-tolerant quantum computing have also been demonstrated by Google researchers.
For one thing, repeatable performance over several hours without degradation is needed to run large-scale fault-tolerant algorithms, and Willow has now demonstrated that capability.

Benchmarking Quantum Processors

Google uses random circuit sampling as an ongoing benchmark to compare new experimental quantum processors against supercomputers running classical algorithms. It is important to point out that random circuit sampling is not useful as an application in itself; it is only a threshold test. But if a system fails to pass RCS, there is no need for further testing.

Five years ago, the Google quantum research group claimed that the 53 working superconducting qubits of its 54-qubit Sycamore chip (one qubit was faulty) had achieved quantum supremacy, meaning that it outperformed comparable classical computing. Back then, Google researchers said they were able to complete an RCS benchmark computation in 200 seconds that theoretically would take a classical supercomputer 10,000 years to complete. IBM disputed the claim with calculations indicating it was possible for a classical computer to achieve the same results. However, it was eventually accepted by the quantum community that if Google had used all 54 qubits, it would have taken a classical supercomputer much longer than 10,000 years to equal Sycamore's achievement.

This year, in another quantum supremacy test, Google pitted the new 105-qubit Willow chip against the same RCS benchmark experiment that the Sycamore chip ran in 2019. Willow ran the RCS benchmark in under five minutes; it has been determined that today's best classical supercomputer would need 10 septillion years (that's a 1 followed by 25 zeros) to run the same benchmark. In short, because Willow performs below the error correction threshold, it is able to conduct random circuit sampling far beyond what is possible with classical computers.

If you're not familiar with quantum computing, these comparisons may seem confusing at first.
But they are directly attributable to the number of qubits involved. The Willow chip has 105 qubits compared to Sycamore's 53. Each additional qubit results in an exponential increase in computing power, not a linear increase. The difference in execution time between the tests in 2019 and those conducted in recent months becomes understandable in this context. Because Willow has 52 more qubits than Sycamore, it has 2^52 (about 4.5 quadrillion) times more computational states.

Besides the increase in qubits, many other improvements have been made to quantum systems since 2019. Algorithms are a billion times better because of extensive experimentation by the large community of computer scientists in the ecosystem. Plus, quantum processors have improved significantly in various ways, including in the quality of qubits.

Google's roadmap to fault-tolerant quantum computing. (Diagram: Google)

Following its 2019 benchmark results, Google published a roadmap with a 10-year timeline for developing a large error-corrected quantum computer with 1,000 logical qubits using 1,000,000 physical qubits. As shown in the diagram above, the roadmap has six milestones; after its latest achievement with Willow, Google is now approaching the third milestone.

For another perspective on the Willow chip, I recently discussed Google's achievement with Prof. John Martinis, who led the Google team that designed and tested the Sycamore chip. Prof. Martinis is currently working on a quantum startup called Qolab with his cofounders Alan Ho (another Google veteran) and Prof. Robert McDermott.

During that conversation, I recalled remarks that Prof. Martinis made about a yet-to-be-developed quantum computer chip for a Forbes article I published nearly five years ago. "Google's plan is roughly to build a million-qubit system in about 10 years, with sufficiently low errors to do error correction," he said.
"Then at that point you will have enough error-corrected logical qubits that you can run useful, powerful algorithms that you now can't solve on a classical supercomputer. And maybe even at a few hundred qubits, with lower errors, it may be possible to do something special-purpose." Those remarks come very close to describing how Google's Willow chip has actually played out.

How Long Until We See Commercial Quantum Applications?

Google currently believes that it will be able to produce useful commercial quantum applications in the next five years or less. Many quantum scientists believe it will take at least another decade before quantum computers are able to handle world-affecting computations in areas such as climate change, drug discovery, materials science and financial modeling.

Of course, Google is not the only company on this path. There is a great deal of experimentation and collaboration being done with logical qubits. One notable example is Microsoft, which has done exciting work with both Quantinuum's H-2 trapped-ion processor and Atom Computing's neutral-atom processor.

Google acknowledges there are many challenges remaining. While the maximum code distance used in the Willow research was 7, obtaining the necessary error rate for fault tolerance would require a distance-27 logical qubit, which would need almost 1,500 physical qubits to create. For quantum error correction, a higher distance means that an error code can handle more errors before it fails; a larger distance gives the code more layers of checks and balances that can detect and repair errors before they cause problems.

That is just one of the many challenges that must be overcome to achieve fault tolerance. While some might believe Google's timeline is overly optimistic, I believe the company is on track. In another five years, fault tolerance will be a lot closer, and useful commercial quantum applications in some form or another should be quite doable.
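The exponential scaling and the surface-code overhead discussed above can be checked with a few lines of Python. Note that the 2d² - 1 qubit count for a distance-d surface-code patch is a standard textbook estimate, not a figure taken from Google's announcement:

```python
def state_space_ratio(q_new: int, q_old: int) -> int:
    """Each added qubit doubles the state space, so the ratio is 2**(q_new - q_old)."""
    return 2 ** (q_new - q_old)

def surface_code_qubits(d: int) -> int:
    """Common estimate for one distance-d surface-code logical qubit:
    d*d data qubits plus d*d - 1 measurement qubits."""
    return 2 * d * d - 1

# Willow (105 qubits) vs. Sycamore (53 qubits): 2^52 more computational states.
print(state_space_ratio(105, 53))   # 4503599627370496, about 4.5 quadrillion
# Distance 7 was the maximum used in the Willow research.
print(surface_code_qubits(7))       # 97
# Distance 27, cited as needed for fault tolerance: "almost 1,500 physical qubits".
print(surface_code_qubits(27))      # 1457
```

Under this estimate, a distance-27 patch needs 1,457 physical qubits, which matches the article's "almost 1,500" figure.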
  • Save $1,600 on this powerful HP workstation laptop today
    www.digitaltrends.com
    For one of the best laptop deals, head over to B&H Photo Video, where you can buy an HP 15.6-inch ZBook Power G10 Mobile Workstation for a massive $1,600 off the regular price. Previously $3,449, the laptop is down to $1,849, with strictly limited stock remaining. Once it's gone, it's gone. This is a high-end business laptop, and one that's sure to delight many. Here's all you need to know; remember to be quick with your purchase!

Over the years, HP has developed its reputation as one of the best laptop brands, with strong reliability, good customer service, and some good looks. The best HP laptops are worth keeping an eye on. With the HP 15.6-inch ZBook Power G10 Mobile Workstation, you get a 13th-generation Intel Core i9-13900H processor, 32GB of DDR5 RAM, and 1TB of SSD storage, so this is fairly high-end stuff when it comes to business performance.

It also has a 15.6-inch 1440p screen. You get a resolution of 2560 x 1440 on an anti-glare display, so it looks great at all times. While we wouldn't recommend this for gaming, the ZBook Power G10 has an Nvidia RTX A2000 GPU, which is comparable to the Nvidia GeForce RTX 3050, although we'd recommend it most for video editing rather than gaming.

Elsewhere, the ZBook Power G10 has a bunch of great ports, including Thunderbolt 4, USB 3.2 Type-A, and HDMI 2.1 for connecting it to a monitor. There's also Gigabit Ethernet and Wi-Fi 6E, alongside Bluetooth 5.3. For taking video calls, you can enjoy the 5MP IR webcam, which ensures you look a cut above the rest.
This is a good bunch of features to compare with the best laptops. Designed for STEM students, business users, and anyone else with high-end needs, the ZBook Power G10 is a power-hungry beast of a laptop that's built for working hard, with the benefit of fast charging: 50% battery in just 30 minutes.

The HP 15.6-inch ZBook Power G10 Mobile Workstation normally costs $3,449, but right now you can buy it from B&H Photo Video for $1,849, so you're saving a huge $1,600. This is a good investment for anyone upgrading their business laptop equipment right now. Check it out soon, as stock is strictly limited at this price; you don't have long left!
  • AdGuard VPN review: a fast sleeper hit for internet privacy
    www.digitaltrends.com
    AdGuard VPN
MSRP: $11.99

Pros:
- Super-fast downloads worldwide
- Quick, reliable server connections
- Easy-to-use design
- Supports 10 devices
- Custom protocol disguises VPN use

Cons:
- Email support is slow
- Browser extension is slower

AdGuard VPN is the fastest service I've ever tested, and prices are quite competitive with other leading VPNs. While AdGuard is best known for its popular ad blocker and Domain Name System (DNS) services, AdGuard VPN is a free and inexpensive privacy solution that hides your location, encrypts internet traffic, and unlocks geo-blocked content.

I reviewed AdGuard VPN to check how well it stacks up against the best VPNs available. The company claims its custom VPN protocol is faster and better than competing solutions. I put it to the test, checking speed and reliability, customer service, and security to help you decide if it's the streaming and privacy solution you've been looking for.

Tiers and pricing

AdGuard VPN offers the best deals with longer subscriptions. The free version lets you test the features and use it anytime, but there are restrictions. Most notably, internet data is limited to 3GB per month. That won't last long if you're streaming 4K video or torrenting large files. You also get access to fewer servers and protection for only two devices. There are better free VPNs with fewer restrictions.

With a subscription, AdGuard VPN is unlocked so you can use it freely on up to 10 devices at once with no data cap. You can run the VPN full-time if you need anonymity, or use it anytime you want to browse internationally like you're a local. The monthly fee for AdGuard VPN is $11.99, which is about average for the industry. For the best deal on monthly service, check out Mullvad, a robust open-source VPN that costs a little over $5 per month. You get a better deal if you subscribe for longer.
An annual plan costs $47.88 ($4/mo.), and a two-year subscription is just $71.76 ($3/mo.).

Design

AdGuard VPN has a simple design that's easy to use. I installed the app in less than a minute. Its window is somewhat small but roomy enough to show what I need without scrolling. A cartoon ninja stands at the ready behind the green connect button. When the VPN is on, the ninja ducks behind a nearby shrub and a white disconnect button appears. It's a little silly, but it conveys the status of the VPN at a glance.

To the right, AdGuard VPN shows a list of server locations, with the fastest appearing in a separate section at the top. A search bar lets me quickly find particular countries and cities to connect to, and I can bookmark my favorites for quicker access. Tabs along the top of the window open Exclusions, Stats, Support, and Settings. The layout is simple and easy to understand.

Some work- or school-related web apps might expect you to connect from a particular location. Exclusions let me specify apps and websites that can bypass the VPN tunnel, showing my actual location and running at full speed. App settings include toggles for a kill switch that blocks internet access if the VPN unexpectedly disconnects, automatic launch, and automatic connection.

AdGuard VPN also lets me choose from several DNS services. The default is AdGuard DNS, with optional ad-blocking and family filtering. I could also select from third-party services like Google, Cloudflare, and more.

AdGuard created its own VPN protocol that works better with its ad-blocking extension, but you don't need to use the ad blocker to enjoy the speed and protection of AdGuard VPN, and it really is fast. I have a gigabit Ethernet connection with uploads and downloads at a little over 900Mbps.
When I use a VPN, transfer speeds typically drop to around 560Mbps down and 60Mbps up. That's still more bandwidth than I need in most cases, but faster is better. AdGuard's custom VPN protocol minimizes round trips for transfer confirmation, shortening the time it takes to deliver content. I found it to be the fastest VPN I've ever tested.

I connected to a Canadian server that's 1,000 miles away and was surprised to see SpeedTest report 936Mbps down and 91Mbps up. Downloads were as fast as if I weren't using a VPN! New York is just across the border, so when I measured 792Mbps down and 64Mbps up, it wasn't as shocking.

For a bigger challenge, I switched to European servers. Distance takes a toll, and AdGuard VPN isn't immune when connecting to overseas locations. Still, download speeds ranged from 522Mbps to 633Mbps for the UK, France, and Germany. Upload speeds were around 5Mbps. Upload latency was quite good for such distant servers, a bit over 100ms, but download pings varied greatly, from 145ms to 561ms.

Even when VPNs perform this well, servers on the other side of the planet are almost always slow. I connected to a server in Australia and was amazed to see 662Mbps downloads. Latency was also relatively quick at 200ms, but uploads fell to 2Mbps.

I also checked the AdGuard VPN browser extension, assuming I'd get similar results. For some reason, it was much slower. For example, the download speed plunged to 416Mbps when I tested the fastest location again with the extension. That's a 56% penalty for using the extension.

Overall, I was very impressed with the performance. AdGuard VPN lets me browse and stream with excellent speed, hiding my location and displaying foreign content as if I'm a world traveler. An AdGuard VPN subscription comes with AdGuard DNS at no extra cost, which blocks malware, trackers, and ads.
I tested the malware safeguards by visiting Wicar.org. AdGuard DNS successfully blocked all 13 downloads and exploits. However, it's fairly easy to earn a perfect score in spot checks on a well-known antivirus testing website; I found that Microsoft Defender, Windows' built-in solution, did just as well. If you want complete malware protection, you still need to invest in high-quality antivirus software.

Support

AdGuard VPN provides support via email, and sometimes it takes a long time. I tested the response time by asking about the number of servers currently available, rather than the more generally available information about the number of countries and locations. An automated response informed me that AdGuard VPN support was handling a high volume of inquiries and would respond as quickly as possible. It took more than a day to get a reply. If you need quicker answers, you might be able to find solutions in help articles or community forums. For a VPN with more responsive support, I was pleased with Surfshark's 24/7 live chat.

Privacy and security

AdGuard's VPN privacy policy clearly states that it doesn't share or sell your personal information. It also promises not to keep logs of your browsing activity. The company has earned the trust of a large user base with its open-source ad blocker and DNS services, starting in 2009. AdGuard VPN is much newer, launching in 2020, and while the company plans to release it as open-source software, it hasn't done so yet.

Most leading VPNs boast independent audits that verify security best practices and confirm no-logging claims. However, AdGuard VPN hasn't been audited, which means you need to assess how much you trust the company. AdGuard's internal security seems good; I couldn't find any records of data breaches.

Is AdGuard VPN right for you?

In my testing, the AdGuard VPN app surpassed the speeds of even the fastest streaming VPNs, and the results with the Windows app were consistent worldwide.
An AdGuard VPN subscription also includes AdGuard DNS, with simple blocking of malware, trackers, and ads. While an AdGuard VPN subscription is affordable, with an average monthly cost as low as $3, there are better VPN deals with even lower prices.

If you want an all-in-one solution, Surfshark is an excellent VPN that can also protect your computer from malware; you can add the surprisingly effective Surfshark One antivirus for less than a dollar. If you need a VPN to use at school while connected to a public network, AdGuard VPN's custom protocol shouldn't raise any alarms.

AdGuard VPN is one of the best VPNs you can find, but it lacks some special features and has fewer configuration options than VPNs like NordVPN or Proton VPN.
  • Lower by Benjamin Booker Review: Beauty Through Noise
    www.wsj.com
    The blues-inflected musician's new album fuses well-wrought songs with the hiss and crackle of deliberately distorted production.
  • American Manhunt: O.J. Simpson Review: Crime Without Punishment
    www.wsj.com
    Netflix's four-part documentary revisits the sensational case in revelatory if morbid detail, examining the failures of the police and prosecution.
  • AI haters build tarpits to trap and trick AI scrapers that ignore robots.txt
    arstechnica.com
    Attackers explain how an anti-spam defense became an AI weapon. By Ashley Belanger, Jan 28, 2025.

View of an insect dissolving in a carnivorous pitcher plant, which inspired an AI tarpit called Nepenthes. (Photo: Jerry Redfern | LightRocket)

Last summer, Anthropic inspired backlash when its ClaudeBot AI crawler was accused of hammering websites a million or more times a day. And it wasn't the only artificial intelligence company making headlines for supposedly ignoring instructions in robots.txt files to avoid scraping web content on certain sites. Around the same time, Reddit's CEO called out all the AI companies whose crawlers he said were "a pain in the ass to block," despite the tech industry otherwise agreeing to respect "no scraping" robots.txt rules.

Watching the controversy unfold was a software developer whom Ars has granted anonymity to discuss his development of malware (we'll call him Aaron). Shortly after he noticed Facebook's crawler exceeding 30 million hits on his site, Aaron began plotting a new kind of attack on crawlers "clobbering" websites, one that he told Ars he hoped would give "teeth" to robots.txt. Building on an anti-spam cybersecurity tactic known as tarpitting, he created Nepenthes, malicious software named after a carnivorous plant that will "eat just about anything that finds its way inside."

Aaron clearly warns users that Nepenthes is aggressive malware.
It's not to be deployed by site owners uncomfortable with trapping AI crawlers and sending them down an "infinite maze" of static files with no exit links, where they "get stuck" and "thrash around" for months, he tells users. Once trapped, the crawlers can be fed gibberish data, aka Markov babble, which is designed to poison AI models. That's likely an appealing bonus feature for any site owners who, like Aaron, are fed up with paying for AI scraping and just want to watch AI burn.

Tarpits were originally designed to waste spammers' time and resources, but creators like Aaron have now evolved the tactic into an anti-AI weapon. As of this writing, Aaron confirmed that Nepenthes can effectively trap all the major web crawlers. So far, only OpenAI's crawler has managed to escape.

It's unclear how much damage tarpits or other AI attacks can ultimately do. Last May, Laxmi Korada, Microsoft's director of partner technology, published a report detailing how leading AI companies were coping with poisoning, one of the earliest AI defense tactics deployed. He noted that all companies have developed poisoning countermeasures, while OpenAI "has been quite vigilant" and excels at detecting the "first signs of data poisoning attempts." Despite these efforts, he concluded that data poisoning was "a serious threat to machine learning models."
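To make the "Markov babble" idea concrete, here is a minimal, hypothetical sketch of how such gibberish can be generated. This illustrates the general technique only; it is not Nepenthes' actual code. A word-level Markov chain is trained on any seed text, then walked at random to emit statistically plausible nonsense:

```python
import random
from collections import defaultdict

def build_chain(text: str) -> dict:
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def babble(chain: dict, length: int = 50, seed: int = 0) -> str:
    """Walk the chain at random, producing plausible-looking gibberish."""
    rng = random.Random(seed)
    word = rng.choice(list(chain))
    out = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        # Dead end (word never seen with a successor): restart anywhere.
        word = rng.choice(followers) if followers else rng.choice(list(chain))
        out.append(word)
    return " ".join(out)

corpus = "the crawler follows the link and the link follows the crawler"
print(babble(build_chain(corpus), length=12))
```

Because the output preserves local word statistics, it looks like text to a scraper while carrying no meaning, which is what makes it useful as training-data poison.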
And in 2025, tarpitting represents a new threat, potentially increasing the costs of fresh data at a moment when AI companies are heavily investing and competing to innovate quickly while rarely turning significant profits.

"A link to a Nepenthes location from your site will flood out valid URLs within your site's domain name, making it unlikely the crawler will access real content," a Nepenthes explainer reads.

The only AI company that responded to Ars' request to comment was OpenAI, whose spokesperson confirmed that OpenAI is already working on a way to fight tarpitting.

"We're aware of efforts to disrupt AI web crawlers," OpenAI's spokesperson said. "We design our systems to be resilient while respecting robots.txt and standard web practices."

But to Aaron, the fight is not about winning. Instead, it's about resisting the AI industry further decaying the Internet with tech that no one asked for, like chatbots that replace customer service agents or the rise of inaccurate AI search summaries. By releasing Nepenthes, he hopes to do as much damage as possible, perhaps spiking companies' AI training costs, dragging out training efforts, or even accelerating model collapse, with tarpits helping to delay the next wave of enshittification.

"Ultimately, it's like the Internet that I grew up on and loved is long gone," Aaron told Ars. "I'm just fed up, and you know what? Let's fight back, even if it's not successful. Be indigestible. Grow spikes."

Nepenthes instantly inspires another tarpit

Nepenthes was released in mid-January but was instantly popularized beyond Aaron's expectations after tech journalist Cory Doctorow boosted a tech commentator, Jürgen Geuter, praising the novel AI attack method on Mastodon. Very quickly, Aaron was shocked to see engagement with Nepenthes skyrocket.

"That's when I realized, 'oh this is going to be something,'" Aaron told Ars. "I'm kind of shocked by how much it's blown up."

It's hard to tell how widely Nepenthes has been deployed.
Site owners are discouraged from flagging when the malware has been deployed, forcing crawlers to face unknown "consequences" if they ignore robots.txt instructions.

Aaron told Ars that while "a handful" of site owners have reached out and "most people are being quiet about it," his web server logs indicate that people are already deploying the tool. Likely, site owners want to protect their content, deter scraping, or mess with AI companies.

When software developer and hacker Gergely Nagy, who goes by the handle "algernon" online, saw Nepenthes, he was delighted. At that time, Nagy told Ars that nearly all of his server's bandwidth was being "eaten" by AI crawlers.

Already blocking scraping and attempting to poison AI models through a simpler method, Nagy took his defense method further and created his own tarpit, Iocaine. He told Ars the tarpit immediately killed off about 94 percent of bot traffic to his site, which was primarily from AI crawlers. Soon, social media discussion drove users to inquire about Iocaine deployment, including not just individuals but also organizations wanting to take stronger steps to block scraping.

Iocaine takes ideas (not code) from Nepenthes, but it's more intent on using the tarpit to poison AI models. Nagy used a reverse proxy to trap crawlers in an "infinite maze of garbage" in an attempt to slowly poison their data collection as much as possible for daring to ignore robots.txt.

Taking its name from "one of the deadliest poisons known to man" from The Princess Bride, Iocaine is jokingly depicted as the "deadliest poison known to AI." While there's no way of validating that claim, Nagy's motto is that the more poisoning attacks that are out there, "the merrier." He told Ars that his primary reasons for building Iocaine were to help rights holders wall off valuable content and stop AI crawlers from crawling with abandon.

Tarpits aren't perfect weapons against AI

Running malware like Nepenthes can burden servers, too.
Aaron likened the cost of running Nepenthes to running a cheap virtual machine on a Raspberry Pi, and Nagy said that serving crawlers Iocaine costs about the same as serving his website.

But Aaron told Ars that Nepenthes wasting resources is the chief objection he's seen preventing its deployment. Critics fear that deploying Nepenthes widely will not only burden their servers but also increase the costs of powering all that AI crawling for nothing.

"That seems to be what they're worried about more than anything," Aaron told Ars. "The amount of power that AI models require is already astronomical, and I'm making it worse. And my view of that is, OK, so if I do nothing, AI models, they boil the planet. If I switch this on, they boil the planet. How is that my fault?"

Aaron also defends against this criticism by suggesting that a broader impact could slow down AI investment enough to possibly curb some of that energy consumption. Perhaps due to the resistance, AI companies will be pushed to seek permission first to scrape or agree to pay more content creators for training on their data.

"Any time one of these crawlers pulls from my tarpit, it's resources they've consumed and will have to pay hard cash for, but, being bullshit, the money [they] have spent to get it won't be paid back by revenue," Aaron posted, explaining his tactic online. "It effectively raises their costs. And seeing how none of them have turned a profit yet, that's a big problem for them. The investor money will not continue forever without the investors getting paid."

Nagy agrees that the more anti-AI attacks there are, the greater the potential is for them to have an impact. And by releasing Iocaine, Nagy showed that social media chatter about new attacks can inspire new tools within a few days. Marcus Butler, an independent software developer, similarly built his poisoning attack called Quixotic over a few days, he told Ars.
Soon afterward, he received messages from others who built their own versions of his tool.

Butler is not in the camp of wanting to destroy AI. He told Ars that he doesn't think "tools like Quixotic (or Nepenthes) will 'burn AI to the ground.'" Instead, he takes a more measured stance, suggesting that "these tools provide a little protection (a very little protection) against scrapers taking content and, say, reposting it or using it for training purposes."

But for a certain sect of Internet users, every little bit of protection seemingly helps. Geuter linked Ars to a list of tools bent on sabotaging AI. Ultimately, he expects that tools like Nepenthes are "probably not gonna be useful in the long run" because AI companies can likely detect and drop gibberish from training data. But Nepenthes represents a sea change, Geuter told Ars, providing a useful tool for people who "feel helpless" in the face of endless scraping and showing that "the story of there being no alternative or choice is false."

Criticism of tarpits as AI weapons

Critics debating Nepenthes' utility on Hacker News suggested that most AI crawlers could easily avoid tarpits like Nepenthes, with one commenter describing the attack as being "very crawler 101." Aaron said that was his "favorite comment" because if tarpits are considered elementary attacks, he has "2 million lines of access log that show that Google didn't graduate."

But efforts to poison AI or waste AI resources don't just mess with the tech industry. Governments globally are seeking to leverage AI to solve societal problems, and attacks on AI's resilience seemingly threaten to disrupt that progress.

Nathan VanHoudnos is a senior AI security research scientist in the federally funded CERT Division of the Carnegie Mellon University Software Engineering Institute, which partners with academia, industry, law enforcement, and government to "improve the security and resilience of computer systems and networks."
He told Ars that new threats like tarpits seem to replicate a problem that AI companies are already well aware of: "that some of the stuff that you're going to download from the Internet might not be good for you."

"It sounds like these tarpit creators just mainly want to cause a little bit of trouble," VanHoudnos said. "They want to make it a little harder for these folks to get" the "better or different" data "that they're looking for."

VanHoudnos co-authored a paper on "Counter AI" last August, pointing out that attackers like Aaron and Nagy are limited in how much they can mess with AI models. They may have "influence over what training data is collected but may not be able to control how the data are labeled, have access to the trained model, or have access to the AI system," the paper said.

Further, AI companies are increasingly turning to the deep web for unique data, so any efforts to wall off valuable content with tarpits may be coming right when crawling on the surface web starts to slow, VanHoudnos suggested.

But according to VanHoudnos, AI crawlers are also "relatively cheap," and companies may deprioritize fighting against new attacks on crawlers if "there are higher-priority assets" under attack. And tarpitting "does need to be taken seriously because it is a tool in a toolkit throughout the whole life cycle of these systems. There is no silver bullet, but this is an interesting tool in a toolkit," he said.

Offering a choice to abstain from AI training

Aaron told Ars that he never intended Nepenthes to be a major project but that he occasionally puts in work to fix bugs or add new features. He said he'd consider working on integrations for real-time reactions to crawlers if there was enough demand.

Currently, Aaron predicts that Nepenthes might be most attractive to rights holders who want AI companies to pay to scrape their data. And many people seem enthusiastic about using it to reinforce robots.txt.
But "some of the most exciting people are in the 'let it burn' category," Aaron said. These people are drawn to tools like Nepenthes as an act of rebellion against AI making the Internet less useful and enjoyable for users.

Geuter told Ars that he considers Nepenthes "more of a sociopolitical statement than really a technological solution (because the problem it's trying to address isn't purely technical, it's social, political, legal, and needs way bigger levers)."

To Geuter, a computer scientist who has been writing about the social, political, and structural impact of tech for two decades, AI is the "most aggressive" example of "technologies that are not done 'for us' but 'to us.'"

"It feels a bit like the social contract that society and the tech sector/engineering have had (you build useful things, and we're OK with you being well-off) has been canceled from one side," Geuter said. "And that side now wants to have its toy eat the world. People feel threatened and want the threats to stop."

As AI evolves, so do attacks, with one 2021 study showing that increasingly stronger data poisoning attacks, for example, were able to break data sanitization defenses. Whether these attacks can ever do meaningful destruction or not, Geuter sees tarpits as a "powerful symbol" of the resistance that Aaron and Nagy readily joined.

"It's a great sign to see that people are challenging the notion that we all have to do AI now," Geuter said. "Because we don't. It's a choice. A choice that mostly benefits monopolists."

Tarpit creators like Nagy will likely be watching to see if poisoning attacks continue growing in sophistication. On the Iocaine site (which, yes, is protected from scraping by Iocaine), he posted this call to action: "Let's make AI poisoning the norm.
If we all do it, they won't have anything to crawl."

Ashley Belanger, Senior Policy Reporter: Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.
  • Apple chips can be hacked to leak secrets from Gmail, iCloud, and more
    arstechnica.com
    MEET FLOP AND ITS CLOSE RELATIVE, SLAP
Apple chips can be hacked to leak secrets from Gmail, iCloud, and more. Side channel gives unauthenticated remote attackers access they should never have.
Dan Goodin | Jan 28, 2025 3:56 pm

[Image: Apple is introducing three M3 performance tiers at the same time. Credit: Apple]

Apple-designed chips powering Macs, iPhones, and iPads contain two newly discovered vulnerabilities that leak credit card information, locations, and other sensitive data from the Chrome and Safari browsers as they visit sites such as iCloud Calendar, Google Maps, and Proton Mail.

The vulnerabilities, affecting the CPUs in later generations of Apple A- and M-series chipsets, open them to side channel attacks, a class of exploit that infers secrets by measuring manifestations such as timing, sound, and power consumption. Both side channels are the result of the chips' use of speculative execution, a performance optimization that improves speed by predicting the control flow the CPUs should take and following that path, rather than the instruction order in the program.

A new direction

The Apple silicon affected takes speculative execution in new directions. Besides predicting the control flow CPUs should take, it also predicts the data flow, such as which memory address to load from and what value will be returned from memory.

The most powerful of the two side-channel attacks is named FLOP. It exploits a form of speculative execution implemented in the chips' load value predictor (LVP), which predicts the contents of memory when they're not immediately available. By inducing the LVP to forward values from malformed data, an attacker can read memory contents that would normally be off-limits.
The attack can be leveraged to steal a target's location history from Google Maps, inbox content from Proton Mail, and events stored in iCloud Calendar.

SLAP, meanwhile, abuses the load address predictor (LAP). Whereas LVP predicts the values of memory content, LAP predicts the memory locations where instructions' data can be accessed. SLAP forces the LAP to predict the wrong memory addresses. Specifically, the value at an older load instruction's predicted address is forwarded to younger arbitrary instructions. When Safari has one tab open on a targeted website such as Gmail, and another open tab on an attacker site, the latter can access sensitive strings of JavaScript code of the former, making it possible to read email contents.

"There are hardware and software measures to ensure that two open webpages are isolated from each other, preventing one of them from (maliciously) reading the other's contents," the researchers wrote on an informational site describing the attacks and hosting the academic papers for each one. "SLAP and FLOP break these protections, allowing attacker pages to read sensitive login-protected data from target webpages. In our work, we show that this data ranges from location history to credit card information."

There are two reasons FLOP is more powerful than SLAP. The first is that it can read any memory address in the browser process's address space. Second, it works against both Safari and Chrome. SLAP, by contrast, is limited to reading strings belonging to another webpage that are allocated adjacently to the attacker's own strings. Further, it works only against Safari.

The following Apple devices are affected by one or both of the attacks:

All Mac laptops from 2022–present (MacBook Air, MacBook Pro)
All Mac desktops from 2023–present (Mac Mini, iMac, Mac Studio, Mac Pro)
All iPad Pro, Air, and Mini models from September 2021–present (Pro 6th and 7th gen., Air 6th gen., Mini 6th gen.)
All iPhones from September 2021–present (all 13, 14, 15, and 16 models, SE 3rd gen.)

Attacking LVP with FLOP

After reverse-engineering the LVP, which was introduced in the M3 and A17 generations, the researchers found that it behaved unexpectedly. "When it sees the same data value being repeatedly returned from memory for the same load instruction, it will try to predict the load's outcome the next time the instruction is executed, even if the memory accessed by the load now contains a completely different value!" the researchers explained. "Therefore, using the LVP, we can trick the CPU into computing on incorrect data values." They continued:

If the LVP guesses wrong, the CPU can perform arbitrary computations on incorrect data under speculative execution. This can cause critical checks in program logic for memory safety to be bypassed, opening attack surfaces for leaking secrets stored in memory. We demonstrate the LVP's dangers by orchestrating these attacks on both the Safari and Chrome web browsers in the form of arbitrary memory read primitives, recovering location history, calendar events, and credit card information.

FLOP requires a target to be logged in to a site such as Gmail or iCloud in one tab and the attacker site in another for a duration of five to 10 minutes. When the target uses Safari, FLOP sends the browser training data in the form of JavaScript to determine the computations needed. With those computations in hand, the attacker can then run code reserved for one data structure on another data structure. The result is a means to read chosen 64-bit addresses.

When a target moves the mouse pointer anywhere on the attacker webpage, FLOP opens the URL of the target page address in the same space allocated for the attacker site.
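The stale-value behavior the researchers describe can be modeled with a toy simulation. This is purely conceptual: the ToyLVP class and its confidence threshold are invented for illustration and do not reflect Apple's real predictor design, which operates in hardware and re-checks predictions after the speculative window.

```python
# Toy model of a load value predictor (LVP), for illustration only.
# Real hardware eventually detects the misprediction and rolls back,
# but the dangerous window is that dependent computation has already
# run on the stale value under speculation.

class ToyLVP:
    def __init__(self, threshold: int = 3):
        self.threshold = threshold   # repeats needed before predicting
        self.history = {}            # load instruction address -> (value, streak)

    def load(self, pc, memory, addr):
        """Perform a load; return (value forwarded, whether it was predicted)."""
        actual = memory[addr]
        value, streak = self.history.get(pc, (None, 0))
        if streak >= self.threshold:
            # Predictor is confident: forward the remembered value
            # speculatively, even if memory now holds something else.
            used, predicted = value, True
        else:
            used, predicted = actual, False
        # Update predictor state from the actual memory contents.
        if actual == value:
            self.history[pc] = (value, streak + 1)
        else:
            self.history[pc] = (actual, 1)
        return used, predicted

lvp = ToyLVP()
mem = {0x100: 42}
for _ in range(4):
    lvp.load(pc=0x5000, memory=mem, addr=0x100)  # train: same value repeats
mem[0x100] = 7                                   # the data then changes...
used, predicted = lvp.load(pc=0x5000, memory=mem, addr=0x100)
# ...but the predictor still forwards the stale 42 under speculation.
```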
To ensure that the data from the target site contains specific secrets of value to the attacker, FLOP relies on behavior in Apple's WebKit browser engine that expands its heap at certain addresses and aligns memory addresses of data structures to multiples of 16 bytes. Overall, this reduces the entropy enough to brute-force guess 16-bit search spaces.

[Image: Illustration of FLOP attack recovering data from Google Maps Timeline (top), a Proton Mail inbox (middle), and iCloud Calendar (bottom). Credit: Kim et al.]

When a target browses with Chrome, FLOP targets internal data structures the browser uses to call WebAssembly functions. These structures first must vet the signature of each function. FLOP abuses the LVP in a way that allows the attacker to run functions with the wrong argument, for instance, a memory pointer rather than an integer. The end result is a mechanism for reading chosen memory addresses.

To enforce site isolation, Chrome allows two or more webpages to share address space only if their extended top-level domain and the prefix before this extension (for instance, www.square.com) are identical. This restriction prevents one Chrome process from rendering URLs such as attacker.square.com and target.square.com, or attacker.org and target.org. Chrome further restricts roughly 15,000 domains included in the public suffix list from sharing address space.

To bypass these rules, FLOP must meet three conditions:

1. It cannot target any domain specified in the list, such that attacker.site.tld can share an address space with target.site.tld.
2. The webpage must allow users to host their own JavaScript and WebAssembly on attacker.site.tld.
3. The target.site.tld must render secrets.

Here, the researchers show how such an attack can steal credit card information stored on a user-created Square storefront such as storename.square.site. The attackers host malicious code on their own account located at attacker.square.site.
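The site-isolation rule described above can be sketched as a same-site check. This is a simplified illustration, not Chrome's actual implementation: the tiny PUBLIC_SUFFIXES set stands in for the real public suffix list of roughly 15,000 entries, and real matching handles wildcard and exception rules this sketch omits.

```python
# Simplified sketch of Chrome's site-isolation rule: two hosts may
# share an address space only if their registrable domain (eTLD+1)
# matches, and domains on the public suffix list never share.
# PUBLIC_SUFFIXES is a toy stand-in for the real list.

PUBLIC_SUFFIXES = {"com", "org", "site", "github.io", "blogspot.com"}

def registrable_domain(host: str) -> str:
    """Return eTLD+1: the longest matching public suffix plus one label."""
    labels = host.lower().split(".")
    for i in range(len(labels)):
        suffix = ".".join(labels[i:])
        if suffix in PUBLIC_SUFFIXES:
            if i == 0:
                return host.lower()   # the host *is* a public suffix
            return ".".join(labels[i - 1:])
    return host.lower()

def may_share_address_space(a: str, b: str) -> bool:
    ra, rb = registrable_domain(a), registrable_domain(b)
    if ra in PUBLIC_SUFFIXES or rb in PUBLIC_SUFFIXES:
        return False                  # listed domains never share
    return ra == rb

# square.site is not on this toy suffix list, so user subdomains of it
# fall under one registrable domain and may be co-rendered:
may_share_address_space("attacker.square.site", "target.square.site")
# github.io *is* listed, so user sites hosted there stay isolated:
may_share_address_space("attacker.github.io", "target.github.io")
```

Under this model, attacker.square.site and target.square.site resolve to the same registrable domain, which is the loophole the Square storefront attack exploits, while subdomains of a listed suffix stay in separate address spaces.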
When both are open, attacker.square.site inserts malicious JavaScript and WebAssembly into it. The researchers explained:

This allows the attacker storefront to be co-rendered in Chrome with other storefront domains by calling window.open with their URLs, as demonstrated by prior work. One such domain is the customer accounts page, which shows the target user's saved credit card information and address if they are authenticated into the target storefront. As such, we recover the page's data.

[Image: Left: UI elements from Square's customer account page for a storefront. Right: Recovered last four credit card number digits, expiration date, and billing address via FLOP-Control. Credit: Kim et al.]

SLAPping LAP silly

SLAP abuses the LAP feature found in newer Apple silicon to perform a similar data-theft attack. By forcing LAP to predict the wrong memory address, SLAP can perform attacker-chosen computations on data stored in separate Safari processes. The researchers demonstrate how an unprivileged remote attacker can then recover secrets stored in Gmail, Amazon, and Reddit when the target is authenticated.

[Image: Top: Email subject and sender name shown as part of Gmail's browser DOM. Bottom: Recovered strings from this page. Credit: Kim et al.]

[Image: Top left: A listing for coffee pods from Amazon's Buy Again page. Bottom left: Recovered item name from Amazon. Top right: A comment on a Reddit post. Bottom right: The recovered text. Credit: Kim et al.]

"The LAP can issue loads to addresses that have never been accessed architecturally and transiently forward the values to younger instructions in an unprecedentedly large window," the researchers wrote. "We demonstrate that, despite their benefits to performance, LAPs open new attack surfaces that are exploitable in the real world by an adversary.
That is, they allow broad out-of-bounds reads, disrupt control flow under speculation, disclose the ASLR slide, and even compromise the security of Safari."

The researchers said that they suspect chips from other manufacturers also use LVP and LAP and may be vulnerable to similar attacks. They also said they don't know if browsers such as Firefox are affected because they weren't tested in the research.

An academic report for FLOP is scheduled to appear at the 2025 USENIX Security Symposium. The SLAP research will be presented at the 2025 IEEE Symposium on Security and Privacy. The researchers behind both papers are:

Jason Kim, Georgia Institute of Technology
Jalen Chuang, Georgia Institute of Technology
Daniel Genkin, Georgia Institute of Technology
Yuval Yarom, Ruhr University Bochum

The researchers published a list of mitigations they believe will address the vulnerabilities allowing both the FLOP and SLAP attacks. They said that Apple officials have indicated privately to them that they plan to release patches.

In an email, an Apple representative declined to say if any such plans exist. "We want to thank the researchers for their collaboration as this proof of concept advances our understanding of these types of threats," the spokesperson wrote. "Based on our analysis, we do not believe this issue poses an immediate risk to our users."

Dan Goodin, Senior Security Editor: Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him on Mastodon and Bluesky. Contact him on Signal at DanArs.82.
  • Experimental XB-1 aircraft goes supersonic for the first time
    www.newscientist.com
    Technology | Boom Supersonic's XB-1 aircraft broke the sound barrier during three test runs, a step toward the possible return of supersonic commercial flights. 28 January 2025

[Image: The XB-1 supersonic aircraft. Credit: Boom Supersonic]

The experimental XB-1 aircraft, made by US company Boom Supersonic, flew faster than the speed of sound on 28 January. The achievement is the first time any civil aircraft has gone supersonic over the continental US and another step toward the possible return of supersonic commercial aviation.

"This jet really does have a lot of the enabling technologies that are going to enable us to build a supersonic airliner for the masses," said Greg Krauland, former chief engineer for Boom Supersonic, during a live stream of the test flight.

At the Mojave Air & Space Port in California, Boom Supersonic's chief test pilot Tristan "Geppetto" Brandenburg took the XB-1 on its twelfth successful test flight and its first supersonic one. The sleek white prototype, with a blue-and-yellow tail assembly, broke the sound barrier on the first pass in the test airspace, reaching a speed of about Mach 1.11. Then Brandenburg flew back around for two more supersonic runs before returning to land.

The only aircraft currently able to reach supersonic speeds are military fighter jets and bombers. Although the fabled commercial airliner Concorde made transatlantic flights for several decades starting in the 1970s, it retired in 2003 due to multiple challenges, including high fuel costs and a deadly accident in 2000 that killed all 109 people on board.

The success of the XB-1 could herald a return for supersonic commercial flight. The test flights are meant to inform the design of a planned Overture airliner that Boom Supersonic says would cruise at Mach 1.7 and carry up to 80 passengers.
The company plans to start producing these airliners this year and begin carrying passengers on them in 2029, and airlines like United and American have already placed orders.

Other supersonic aircraft are also in the works, including from multinational company Dawn Aerospace and US space agency NASA. Fresh off the milestone XB-1 flight, Brandenburg teased a future demonstration that also involves NASA, possibly hinting at a future joint flight with both the XB-1 and NASA's X-59 experimental aircraft. The X-59 is designed to minimise the shock wave that normally accompanies supersonic flight in order to create a sonic "thump" rather than a disruptive sonic boom.

"We're working with NASA on something that I'm pretty excited about," said Brandenburg.
  • 'What happened with DeepSeek is actually super bullish': Point72 founder Steve Cohen on AI, Trump's impact, and giving up trading
    www.businessinsider.com
    Point72 founder and New York Mets owner Steve Cohen spoke at iConnections' Miami conference Tuesday. Cohen said Chinese company DeepSeek's recent breakthroughs are good for the AI industry overall. "It advances the move to artificial superintelligence. And that's coming, it's coming quick," he said.

Billionaire Steve Cohen isn't worried that the US has lost any kind of AI race with China just because of DeepSeek's recent breakthroughs.

The Chinese AI startup has vaporized hundreds of billions of market value from some of the biggest names in the S&P 500 with its open-source models. Still, Cohen, whose firm, the $37 billion hedge fund Point72, has a $1.5 billion fund dedicated to AI named Turion, believes "what happened with DeepSeek is actually super bullish" for the industry.

"It advances the move to artificial superintelligence. And that's coming, it's coming quick," said Cohen, speaking at the iConnections conference in Miami.

Bumps in the road for the companies on the path to superintelligence are just that, he said: bumps in the road.

"There's going to be a lot of winners here, and it's going to be episodic. It's not going to go in a straight line," he said, according to a recording of his talk obtained by Business Insider.

While he's optimistic about AI's potential, Cohen said the overall market could slow down due to President Donald Trump's immigration and trade policies.

Cohen, who has supported former New Jersey Gov. Chris Christie in the past, said Trump's focus on "unleashing America" and shedding regulations has "a lot to like." However, he believes the tariff and immigration policies proposed by the administration would slow growth this year.

"Tariffs are a tax, and that's going to slow consumer spending," he said.
He thinks the economy will grow 2.5% in 2025's first half but slow in the second half to around 1.5%, and the Federal Reserve will struggle to hit its 2% inflation target thanks to unemployment remaining low due to a severe drop in immigration.

"I would expect the market to top over the next couple of months if it hasn't already topped," he said.

Wherever the market ends up this year, though, Cohen won't be trading it, at least not for his firm's investors. While he's regarded as one of the greatest stockpickers to ever live, Cohen decided to step away from trading last year to focus on running his two companies, Point72 and Major League Baseball's New York Mets.

"I describe it as being immersed in a video game, and it's so immersive that you just forget what's going on around you," he said of trading, and without it, he's able to focus more on the people at his companies.

"I'm 68 and had this vision of being 70, still behind screens. I was like, 'That doesn't make sense,'" Cohen said.