• TOWARDSAI.NET
    TAI #137: DeepSeek r1 Ignites Debate: Efficiency vs. Scale and China vs. US in the AI Race
Author(s): Towards AI Editorial Team. Originally published on Towards AI.

What happened this week in AI, by Louie

This week's AI discourse centered on DeepSeek's r1 release, which sparked a heated debate about its implications for OpenAI, GPUs, and the broader industry. Meanwhile, Google quietly rolled out an improved version of its own reasoning model, Gemini Flash 2.0 Thinking, improving its AIME benchmark score to 73.3% (from ~64% in December). OpenAI's announcement of its planned $500B Stargate data center project, a collaboration with SoftBank and Oracle, painted a contrasting picture: while DeepSeek refined efficiency, OpenAI appears to be doubling down on scale.

We have often covered DeepSeek's model releases and technical innovations over the past year, and last week I outlined r1's reinforcement learning (RL)-driven reasoning, 30x lower API costs than OpenAI's o1, and successful distillation into smaller models. This week, DeepSeek's models went viral, its chatbot leaped to the top of the app store, and reactions oscillated between "OpenAI is obsolete" and "DeepSeek's training costs are faked." In particular, many people cottoned on to DeepSeek's impressive training costs (just $5.6M of direct compute for v3's final training run, announced in December) and r1's lower inference prices vs. OpenAI's o1. This led many to question whether huge multi-billion-dollar training clusters are still needed and whether the US has lost its AI lead.

We think much of the sudden reaction is overblown, and it's entertaining that the r1 price reduction hits the media while the invention and consequences of reasoning models themselves have still gone largely unreported. DeepSeek has a very impressive research team with a productive culture and structure (including high vertical integration and fewer silos between teams), but we think the US still has more leading AI researchers and companies.
The main difference is that the best AI researchers in the US work for companies that are not GPU-poor, and their expertise has been prioritized toward scaling quickly rather than first-principles improvements and tweaks to LLM architectures and methods. In China, the best researchers instead flocked to DeepSeek, which is still GPU-poor (relatively speaking, due to sanctions) and has been focusing on finding next-generation methods to improve training and inference efficiency. Gains from their 10+ breakthroughs over the last two years (all publicly shared, many of them already included in the v2 release over 8 months ago) added up to what looks like a cost-efficiency advantage vs. US labs. However, OpenAI and Anthropic reportedly have 70%+ gross margins, while DeepSeek's CEO has said they price close to cost, so v3 is likely not 10x more efficient than 4o, nor r1 30x more efficient than o1, as the prices would imply. Nevertheless, it is still significant that such a capable model is now available open-source and that it could be trained and served at such an affordable price.

Why should you care?

We think it's great to see new LLM techniques, efficiency gains, and such a strong open-source reasoning model. Hopefully, the release will pressure OpenAI to also show its o1 reasoning tokens, reduce prices, and release the much stronger o3! We also see huge potential for the open-source community to build on top of these models, in particular by reinforcement fine-tuning them for new domains.

However, we don't think this is the end of building larger training clusters. Scaling laws still hold: all else equal, the more compute we put in, the more capable the models we get out. Algorithmic and technique efficiency gains on top of this just mean we get more out of our GPU clusters; they don't mean we won't get even more from larger training runs. More compute still stacks capability on top of all other improvements, so there is no loss of incentive to have the biggest cluster.
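The "scaling laws still hold" point can be sketched numerically with a Chinchilla-style parametric loss curve. The coefficients below are roughly those reported by Hoffmann et al. (2022) for generic pre-training, not numbers from DeepSeek or OpenAI; the sketch just shows that a bigger run still yields lower predicted loss, so efficiency gains stack with scale rather than replacing it.

```python
# Chinchilla-style loss model: L(N, D) = E + A / N**alpha + B / D**beta
# Coefficients are illustrative (roughly the Hoffmann et al., 2022 fit),
# not fit to any particular lab's models.
E, A, B = 1.69, 406.4, 410.7
alpha, beta = 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted pre-training loss for N parameters and D training tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

small = loss(7e9, 1.4e12)    # ~7B params, 1.4T tokens
big   = loss(70e9, 14e12)    # 10x the params, 10x the tokens

# More compute still helps: the bigger run has strictly lower predicted loss.
assert big < small

# An algorithmic efficiency gain (say, 2x cheaper training) lets you afford a
# larger run at the same budget; it shifts the curve, it doesn't flatten it.
```

Under this kind of curve, a lab with a cheaper training recipe simply reaches any given loss with fewer GPUs, which is exactly why the incentive to own the biggest cluster survives the efficiency gains.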
It is no surprise to see OpenAI pushing ahead with its $500bn Stargate data center plan! The main news of the past four months is that we now also have new test-time compute scaling laws, yet another vector along which to scale: both during training, via the RL process and synthetic data generation, and at inference time.

Lost somewhat in the noise over pricing is a potentially much more significant aspect of r1 we noted this week: despite being trained via rewards for solving math and LeetCode problems, it has also demonstrated significant improvements in creative writing. The r1 model now tops the EQ-Bench leaderboard for Creative Writing, with large gains over V3. We have also heard from many people who find Gemini Flash-Thinking 2.0 better than Gemini Pro 2.0 for creative writing tasks. This raises the question of just how far the generalization potential of this new paradigm of reasoning LLMs can take us.

Hottest News

1. OpenAI Launches Operator, an AI Agent That Performs Tasks Autonomously

OpenAI launched a research preview of Operator, a general-purpose AI agent that can take control of a web browser and independently perform specific actions. Operator is powered by a new model called a Computer-Using Agent (CUA). Combining GPT-4o's vision capabilities with advanced reasoning through reinforcement learning, CUA is trained to interact with the buttons, menus, and text fields people see on a screen.

2. Google Releases Update to Gemini 2.0 Flash Thinking Model

Google quietly released another update to its own reasoning model, Gemini 2.0 Flash Thinking, first released in late December. Similar to DeepSeek-R1, and unlike OpenAI's o1, the Flash Thinking model displays its reasoning process. The model is currently available for free while in its experimental stage. It climbed to a score of 73.3% on AIME (vs. ~64% in December) and 74.2% on the GPQA Diamond science benchmark (vs. ~66% in December and 58.6% for the non-reasoning Flash 2.0 model).

3. Anthropic Introduces Citations To Reduce AI Errors

Anthropic unveiled a new feature for its developer API called Citations, which lets Claude ground its answers in source documents. It provides detailed references to the exact sentences and passages used to generate responses, leading to more verifiable, trustworthy outputs. Citations is available only for Claude 3.5 Sonnet and Claude 3.5 Haiku and may incur charges depending on the length and number of the source documents.

4. Hugging Face Shrinks AI Vision Model SmolVLM to Phone-Friendly Size

Hugging Face introduced vision-language models that run on devices as small as smartphones while outperforming predecessors that required massive data centers. The company's new SmolVLM-256M model, requiring less than one gigabyte of GPU memory, surpasses the performance of its Idefics 80B model from just 17 months ago, a system 300 times larger.

5. OpenAI Teams Up With SoftBank and Oracle on $500B Data Center Project

OpenAI announced it is teaming up with Japanese conglomerate SoftBank and with Oracle, among others, to build multiple data centers for AI in the U.S. The joint venture, the Stargate Project, intends to invest $500 billion over the next four years to build new AI infrastructure for OpenAI in the United States.

6. Google Invests Further $1Bn in OpenAI Rival Anthropic

Google is reportedly investing over $1 billion in Anthropic. This new investment is separate from the company's funding round of nearly $2 billion earlier this month, led by Lightspeed Venture Partners, which bumped the company's valuation to about $60 billion.

Five 5-minute reads/videos to keep you learning

1. Building Effective Agents

This post combines everything Anthropic has learned from working with customers and building agents, and shares practical advice for developers on building effective agents. It covers when and how to use agents, the workflow, and more.

2. Inside DeepSeek-R1: The Amazing Model that Matches GPT-o1 on Reasoning at a Fraction of the Cost

One dominant reasoning thesis is that big models are necessary to achieve reasoning. DeepSeek-R1 challenges that thesis by matching the performance of GPT-o1 at a fraction of the compute cost. This article explores the technical details of the DeepSeek-R1 architecture and training process, highlighting key innovations and contributions.

3. Agents Are All You Need vs. Agents Are Not Enough: A Dueling Perspective on AI's Future

The rapid evolution of AI has sparked a compelling debate: are autonomous agents sufficient to tackle complex tasks, or do they require integration within broader ecosystems to achieve optimal performance? As industry leaders and researchers share insights, the divide between these perspectives has grown more pronounced. This article presents arguments for both sides and offers a middle ground.

4. 10 FAQs on AI Agents: Decoding Google's Whitepaper in Simple Terms

The future of AI agents holds exciting advances, and we've only scratched the surface of what is possible. This article explores AI agents by diving into Google's Agents whitepaper and addressing the ten most common questions about them.

5. Image Segmentation Made Easy: A Guide to Ilastik and EasIlastik for Non-Experts

Tools like Ilastik and EasIlastik empower users to perform sophisticated image segmentation without writing a single line of code. This article explores what makes them so powerful, walks through how to use them, and shows how they can simplify image segmentation tasks, no matter your level of experience.

6. Why Everyone in AI Is Freaking Out About DeepSeek

Only a handful of people knew about DeepSeek a few days ago. Yet, thanks to the release of DeepSeek-R1, it has arguably been the most discussed company in Silicon Valley over the last few days. This article explains what has led to this popularity.

Repositories & Tools

Open R1 is a fully open reproduction of DeepSeek-R1.
PaSa is an advanced paper search agent powered by LLMs that can autonomously make a series of decisions.

Top Papers of The Week

1. DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning

DeepSeek-R1-Zero and DeepSeek-R1 are reasoning models that perform comparably to OpenAI-o1-1217 on reasoning tasks. DeepSeek-R1-Zero is trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT), while DeepSeek-R1 incorporates multi-stage training and cold-start data before RL. Both models are open-sourced, along with six dense models (1.5B, 7B, 8B, 14B, 32B, and 70B) distilled from DeepSeek-R1 based on Qwen and Llama.

2. Humanity's Last Exam

Humanity's Last Exam (HLE) is a multi-modal benchmark designed to be the final closed-ended academic benchmark with broad subject coverage. HLE is developed by subject-matter experts and comprises 3,000 multiple-choice and short-answer questions across dozens of subjects, including mathematics, humanities, and the natural sciences. Each question has a known, unambiguous, and easily verifiable solution that cannot be quickly answered via Internet retrieval. State-of-the-art LLMs demonstrate low accuracy and calibration on HLE.

3. Evolving Deeper LLM Thinking

This paper explores an evolutionary search strategy for scaling inference-time compute in LLMs. It proposes a new approach, Mind Evolution, that uses a language model to generate, recombine, and refine candidate responses. Controlling for inference cost, Mind Evolution significantly outperforms other inference strategies, such as Best-of-N and Sequential Revision, in natural language planning tasks.

4. Agent-R: Training Language Model Agents to Reflect via Iterative Self-Training

This paper proposes an iterative self-training framework, Agent-R, that enables language agents to reflect on the fly. It leverages Monte Carlo Tree Search (MCTS) to construct training samples that recover correct trajectories from erroneous ones, and introduces a model-guided critique construction mechanism: the actor model identifies the first error step in a failed trajectory, which is then spliced with the adjacent correct path that shares the same parent node in the tree.

5. Reasoning Language Models: A Blueprint

This paper proposes a comprehensive blueprint that organizes reasoning language model (RLM) components into a modular framework, based on a survey and analysis of all RLM works. It incorporates diverse reasoning structures, reasoning strategies, RL concepts, supervision schemes, and other related concepts, and provides detailed mathematical formulations and algorithmic specifications to simplify RLM implementation.

Quick Links

1. Meta AI releases Llama Stack 0.1.0, the first stable release of a unified platform designed to simplify building and deploying generative AI applications. The platform offers backward-compatible upgrades, automated provider verification, and a consistent developer experience across local, cloud, and edge environments. It addresses the complexity of infrastructure, essential capabilities, and flexibility in AI development.

2. Perplexity launched Sonar, an API service that allows enterprises and developers to integrate the startup's generative AI search tools into their applications. Perplexity currently offers two tiers for developers: a cheaper and faster base version, Sonar, and a pricier version, Sonar Pro, which is better for tough questions.

Who's Hiring in AI

Developer and Technical Communications Lead @Anthropic (Multiple US Locations/Hybrid)
AI Algorithm Intern @INTEL (Poland/Hybrid)
Software Developer 3 @Oracle (Austin, TX, United States)
Data Scientist @Meta (Seattle, WA, USA)
Junior Software Engineer @Re-Leased (Napier, New Zealand)
Designated Technical Support Engineer @Glean (Palo Alto, CA, USA)
Gen AI Engineer | LLMOps @NEORIS (Spain)

Interested in sharing a job opportunity here? Contact [emailprotected].

Think a friend would enjoy this too? Share the newsletter and let them join the conversation. Join thousands of data leaders on the AI newsletter. Join over 80,000 subscribers and keep up to date with the latest developments in AI. From research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor.

Published via Towards AI
  • TOWARDSAI.NET
    Introduction to Machine Learning
January 28, 2025
Author(s): Carlos da Costa. Originally published on Towards AI.

Lay the foundation for your machine learning journey with this comprehensive introduction.

This member-only story is on us. Upgrade to access all of Medium.

Photo by Arseny Togulev on Unsplash

When we hear about machine learning, our minds often jump to exciting technologies like ChatGPT, Gemini, and other generative AI tools. While these applications are impressive, they all share the same foundational principles of machine learning. To truly understand and harness the power of these innovations, it's essential to build a strong foundation in the basics. In this blog post, we'll introduce the fundamental concepts of machine learning, providing a beginner-friendly introduction for anyone eager to explore the world of machine learning and artificial intelligence.

In this article, we will cover:

What is machine learning?
Training and testing sets
Types of machine learning
Challenges in machine learning

Download the machine learning cheat sheet and roadmap below! "Unlock the world of machine learning with this comprehensive beginner's guide! Featuring both a detailed cheat sheet…" (daviddacosta.gumroad.com)

In simple terms, machine learning is the science of teaching a computer to learn from data without being explicitly programmed.

Image by the author, created with napkin.ai

Consider a program designed to predict whether it will rain. In traditional programming, we would manually create rules to evaluate weather patterns. For example, we might check temperature, wind speed, cloud… Read the full blog for free on Medium.

Published via Towards AI
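The contrast the teaser draws, hand-written rules versus rules learned from data, can be made concrete with a tiny sketch. The rain-prediction task, thresholds, and training examples below are all made up for illustration and are not from the original article:

```python
# Traditional programming: a human writes the rule by hand.
def rule_based_predict(humidity: float, pressure: float) -> bool:
    # Hand-chosen thresholds, based on the programmer's intuition.
    return humidity > 70 and pressure < 1005

# Machine learning: the rule (here, a single humidity threshold) is
# *learned* from labeled examples instead of being hard-coded.
def learn_threshold(examples: list[tuple[float, bool]]) -> float:
    """Pick the humidity threshold that classifies the training data best."""
    candidates = sorted(h for h, _ in examples)

    def accuracy(t: float) -> int:
        return sum((h > t) == rained for h, rained in examples)

    return max(candidates, key=accuracy)

# Made-up training data: (humidity %, did it rain?)
train = [(30.0, False), (45.0, False), (60.0, False), (75.0, True), (90.0, True)]

threshold = learn_threshold(train)          # learned from data: 60.0
predict = lambda humidity: humidity > threshold

print(predict(85.0))   # True: high humidity, the learned rule predicts rain
```

The learned threshold comes entirely from the examples; feed the same code different weather history and it adapts without anyone rewriting the rule, which is the "without being explicitly programmed" idea in miniature.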
  • WWW.DENOFGEEK.COM
The Rings of Power Season 2 Viewership Data Isn't Great News for the Show's Five-Season Plan
A new 2024 year-end report from streaming analytics company Luminate raised more than a few eyebrows when it was released earlier this month. Among its many revelations (via Deadline) is the less-than-stellar data concerning major franchise streaming series released last year. According to the report, all Marvel and Star Wars series underperformed last year in terms of total minutes watched, including Echo and Agatha All Along, as well as the cancelled The Acolyte, when compared to previous streaming series in their respective franchises. Also add to the list: The Rings of Power season 2, which saw a 60% drop in total minutes watched compared to season 1, which itself reportedly struggled to retain viewers all the way through to its finale in 2022. While it's true that season 2 was always going to struggle to reach the meteoric numbers of season 1's early episodes, when anticipation and interest in the blockbuster series were at their highest, this is still a concerning downturn for an Amazon show that is incredibly expensive to make and has struggled to win over Lord of the Rings fans at large.

Let's be clear here: Amazon is currently full steam ahead on The Rings of Power season 3. The show is still one of the most-watched series on Prime Video, with Amazon studio head Jennifer Salke saying in October that season 2 had been watched by over 55 million viewers (although it's unclear how the company defines a "viewer"). Salke also said at the time that she expected season 2 to catch up to the first season, which had over 150 million viewers on the service. The second season remained in the top five of Nielsen's list of most-watched original streaming series during its run, with the three-episode premiere topping the list in its first week (although with a 19% drop compared to season 1's massive premiere). It was Prime Video's third-highest opening week in the U.S., behind the Fallout and The Boys season 4 premieres, according to THR.

Season 3 has also been in the works since last year. "The answer is yes, we're very excited, but we can't say anything other than we're working on season 3," co-showrunner Patrick McKay confirmed when Total Film asked for a status update in December. "We have a story we think is really strong, and we're hoping to turn it around as fast as possible."

The Rings of Power is planned as a five-season series with an estimated price tag of $1 billion. According to a report from THR in September, Amazon is still committed to that plan. Yet it's hard not to wonder what could happen if season 3 continues the downward trend in how much time the audience spends actually watching the series. With such a high production cost on the studio's budget sheet, could the plan eventually change if the number of people watching, and how long they're watching, becomes untenable?

"This desire to paint the show as anything less than a success… it's not reflective of any conversation I'm having internally," Salke told THR in 2022 in response to a similarly concerning viewership report stating that less than 50% of viewers had finished season 1. At the time, Salke promised a second season with more dramatic story turns, and overall, the story did go over better with viewers, with a 49% audience score on Rotten Tomatoes versus season 1's 38%. An improvement, but still not stellar for a $1 billion series set in one of the most beloved fantasy worlds in pop culture.

The season 2 finale set the stage for the wars to come, with Sauron now in control of the forces of Mordor and the nine rings of power for Men, and the Elves regrouping and planning to take the fight back to the Dark Lord. This all sounds like the setup for an excellent third season, and, if viewers show up, fourth and fifth seasons, too.

Lord of the Rings: The Rings of Power is streaming now on Prime Video.
  • NEWS.XBOX.COM
    MLB The Show 25: Paul Skenes, Elly De La Cruz, and Gunnar Henderson Are Your Cover Athletes
Summary

A triple play of talent: MLB The Show 25 features three cover athletes for the first time ever with Paul Skenes, Elly De La Cruz, and Gunnar Henderson.
MLB The Show 25 launches March 18, 2025, for Xbox Series X|S; early access begins March 14.
Check out the Gameplay Trailer next Tuesday, February 4, at 9:00 am PT on San Diego Studio and MLB The Show social channels.

In 2025, MLB The Show is celebrating its monumental 20th anniversary, a milestone that honors two decades of baseball history, innovation, and unforgettable memories for baseball fans around the world.

To mark this special occasion for San Diego Studio and PlayStation Studios, MLB The Show 25 proudly features not one, nor two, but three cover athletes for the first time ever: Paul Skenes, Elly De La Cruz, and Gunnar Henderson. These rising stars join a rich history of legendary cover athletes, symbolizing the future of baseball as we celebrate 20 incredible years of the franchise. Here's some more information about each of our cover athletes:

Paul Skenes

Paul Skenes' journey began in high school and continued throughout college, where he cemented his status as one of the most talented young pitchers in baseball by winning a national championship before being selected as the No. 1 overall pick by the Pittsburgh Pirates in the 2023 MLB Draft. The following season, Skenes made his MLB debut and became the first player in MLB history to start the MLB All-Star Game just one year after being drafted. He capped off an incredible 2024 season by earning National League Rookie of the Year honors and being named to the All-MLB First Team. His story and skillset align perfectly with Road to the Show, where you can chart your own path to greatness.

Elly De La Cruz

Elly De La Cruz's meteoric rise from the minors to the majors perfectly captures the excitement of working up to a big-league debut and becoming a star. Since making his Major League debut in 2023 with the Cincinnati Reds, Elly has consistently proven to be one of the most electrifying players on the field. He became one of the youngest players in MLB history to hit for the cycle and is renowned for his blazing speed on the bases and his cannon-like arm, further cementing his status as one of the league's most promising talents. The 2024 All-Star showcases an impressive five-tool skillset and a dynamic switch-hitting approach, exactly what MLB The Show fans dream of when developing or acquiring their starting shortstop in Franchise mode.

Gunnar Henderson

Rounding out this triple play of talent is the 2023 American League Rookie of the Year, Gunnar Henderson. Selected 42nd overall in the 2019 MLB Draft, Gunnar quickly ascended through the Orioles' farm system, making his MLB debut late in the 2022 season. Heading into 2023, he was named the #1 prospect on MLB's Top 100 list. In his first full season in the majors, Gunnar became a cornerstone of the Orioles' young core, helping lead the team to its first postseason appearance in seven years. His outstanding performance earned him AL Rookie of the Year honors and a Silver Slugger Award. Gunnar's star continued to rise in 2024, as he was selected to start in the MLB All-Star Game and compete in the Home Run Derby. His MLB journey resonates deeply with MLB The Show fans who strive to dominate head-to-head competition in their pursuit of greatness on the diamond.

Play Ball Starting March 18, 2025, on Xbox Series X|S

We are also thrilled to announce that MLB The Show 25 will launch on March 18, 2025, on Xbox Series X|S. Early access begins on March 14, 2025, for anyone who purchases the Digital Deluxe Edition on the Microsoft Store for Xbox. Pre-orders for all editions (Standard Edition and Digital Deluxe Edition) will open on February 4 at 6:00 am PST on TheShow.com and the Microsoft Store for Xbox.

We'll also have a first look at our first Gameplay Trailer next Tuesday, February 4, at 9:00 am PT on San Diego Studio and MLB The Show social channels. You can also look forward to even more weekly feature reveals, including Feature Trailer Deep Dives featuring MLB Network host Robert Flores, and the return of Feature Premieres from the development team at San Diego Studio.

For all the fans, make sure to sign up for the MLB The Show Scouting Report to stay updated on all the latest features, updates, and legends coming to MLB The Show 25, while unlocking amazing rewards along the way. Starting in April 2025, subscribers will receive an exclusive pack every month through December 2025 (internet connection required), packed with incredible in-game rewards. Plus, the Golden Ticket Sweepstakes will return in 2025, with winners selected every month for amazing prizes like Packs, Stubs, clothing, and autographed items from Elly De La Cruz, Gunnar Henderson, and Paul Skenes, along with even more surprises we can't wait to reveal. Don't miss out: sign up today on TheShow.com!

While you're there, don't forget to also set up your MLB The Show account and prepare for Opening Day. MLB The Show 25 allows you to easily move from platform to platform and keep access to your entire inventory of cards (learn more here). Head over to https://account.theshow.com/ and create your MLB The Show account. After creating it, log in with your platform account, link the two, and you are all set.

For all the latest information on MLB The Show, be sure to head over to TheShow.com, sign up for The Scouting Report, and follow our X, Facebook, and Instagram accounts.
  • 9TO5MAC.COM
    Five new Apple products are launching early this year, heres whats coming
The first month of 2025 is almost over, moving us closer to the first Apple product launches of the year, and several strong ones are coming. Here's a look at five new products expected to arrive in the next few months.

M4 MacBook Air

Apple's most popular Mac is about to get its next upgrade. Here's what to expect from the new MacBook Air:

M4 chip
16GB RAM standard
Nano-texture display option
12MP Center Stage camera
Support for two external displays with the lid open
Likely improved battery performance thanks to the M4
Same design with 13-inch and 15-inch sizes

Overall, it should be a solid update for the MacBook Air. As for timing, signs point to a February or early March release, around when the M3 MacBook Air launched in early 2024.

iPhone SE 4

Apple's next iPhone will be the one most people should probably buy. It's the new iPhone SE 4, which is expected to debut in March. The new iPhone SE will feature:

an iPhone 14-style design, with OLED and no Home button
Face ID and a notch
the A18 chip just added to the iPhone 16
Apple Intelligence support, which is otherwise exclusive to 16 and 15 Pro models
8GB of RAM
48MP main camera, likely similar to the iPhone 16's
USB-C port
and the first-ever Apple-designed 5G modem

All of this at a compelling, budget-friendly price point, likely under $499.

HomePad

The product I'm personally most excited about is Apple's forthcoming HomePad smart display. Mark Gurman detailed the forthcoming product extensively in a recent Bloomberg piece; it should bring fresh energy to Apple's Home product lineup.
Expected features, per Gurman, include:

Focus is on Siri, communication, and Home control
Runs Safari, Music, Notes, and several other Apple apps, but no App Store
Device is touch-capable, but will mostly be operated by voice through Apple Intelligence's new App Intents
New OS is a blend of watchOS and the iOS StandBy mode; dynamic UI shifts based on user distance
Size is roughly two iPhones side by side, with about a 6-inch screen
Device has speakers, a FaceTime camera, and a battery
Apple is working on wall attachments, plus a speaker base for desks, tables, kitchens, and nightstands
Heavy security focus, as well as a video/audio intercom feature that works with other home devices
Home Screen is customizable with the classic Apple widgets and Home controls
Taps into video doorbells and video cameras, with support for security alerts

I can't wait to see Apple's renewed push into Home products come to fruition. Gurman originally said the HomePad was coming in March, but later revised expectations by saying it could be delayed a bit.

11th-generation iPad

After a barren 2023 for the iPad, Apple was originally expected to update its entire iPad lineup in 2024. We did get new iPad Pro, iPad Air, and iPad mini models, but the base iPad was ultimately left unchanged. Very soon, though, the iPad will be updated to an 11th-generation device. We don't know much about the new iPad, other than that it will get a faster A-series chip, likely one with 8GB of RAM that supports Apple Intelligence. It will potentially feature Apple's new Wi-Fi and Bluetooth networking chip too. Mostly, it sounds like a standard iterative update for Apple's most affordable iPad.

M3 iPad Air

The base iPad won't be the only tablet launching soon. Despite Apple just refreshing the iPad Air last May with the M2 chip, rumors indicate an M3 upgrade is coming before long. Not much else is known about the M3 iPad Air except that it should support some updated keyboard accessories from Apple.
Otherwise, it's likely just a spec bump for the same 11-inch and 13-inch sizes we have already.

Wildcard: Apple Watch SE 3

Finally, there's a possible sixth product that could arrive soon: an Apple Watch SE 3. The last Apple Watch SE arrived in 2022, and leading up to Apple's September 2024 event, rumors pointed to a launch for the SE 3, but it never materialized. Previous Apple Watch SE models have debuted in September, but there are two reasons to believe the new SE might arrive this spring instead:

if the device was indeed targeting September 2024 but just missed it, Apple has little reason to wait a whole extra year
an Apple Watch SE launch could pair nicely with the iPhone SE 4's release

Apple's upcoming products: wrap-up

Apple's plans could always change, but if these five or more products do arrive in the early part of the year, it will make for an especially strong start to 2025 for the company. All leading up to what's sure to be an exciting WWDC and fall product launch season.

Which of Apple's products coming soon are you most excited for? Let us know in the comments.

Best iPhone accessories

Add 9to5Mac to your Google News feed. FTC: We use income-earning auto affiliate links. More.

You're reading 9to5Mac, experts who break news about Apple and its surrounding ecosystem, day after day. Be sure to check out our homepage for all the latest news, and follow 9to5Mac on Twitter, Facebook, and LinkedIn to stay in the loop. Don't know where to start? Check out our exclusive stories, reviews, how-tos, and subscribe to our YouTube channel.
  • 9TO5MAC.COM
Apple just published a meditative video tour of Silo's massive sets
Silo just wrapped up its second season on Apple TV+, but if you weren't ready to leave silos 17 & 18 just yet, Apple just shared a new way to explore the show's dystopian world.

Silo set tour video runs almost 24 minutes

Apple TV's YouTube account today published a unique video designed to draw you back into the world of Silo following its season two conclusion. Per the video's description:

"Experience an unprecedented look at the sets of Silo with this meditative tour of silos 17 and 18."

The nearly 24-minute tour takes you slowly, room by room, through the massive sets on which Silo is filmed.

Although no word has yet been shared on an expected release date for Silo season 3, we do know that Apple TV+ has renewed the show for both seasons 3 and 4. Those next two seasons are expected to bring the series to completion, wrapping up the full story of the three novels Silo is based on.

Silo seasons 1 and 2 are available to watch now on Apple TV+ with an active subscription.

What do you think of the new Silo set tour video? Do you want to see more content like this from Apple TV's YouTube account? Let us know in the comments.

Best Apple TV and Home accessories
  • FUTURISM.COM
    Trump Responds After DeepSeek Humiliates His Splashy AI Announcement
A "wake-up call." Deep Trouble. President Donald Trump has responded to the rapid rise of the Chinese startup DeepSeek, whose recently released AI model has him and his Silicon Valley pals looking like a bunch of chumps. "The release of DeepSeek AI from a Chinese company should be a wake-up call for our industries that we need to be laser-focused on competing to win," Trump said Monday at a GOP event in Florida. It's a relatively measured take from Trump, considering his usual, though occasionally wavering, hawkishness on China, complete with a metaphorical stern glance at the domestic tech sector. The Republican president also added that he viewed the model's low cost as a "positive development." "Instead of spending billions and billions, you'll spend less and you'll come up with hopefully the same solution," Trump said. Stars Malign. Released last week, DeepSeek's open-source R1 model rivals the West's best at a fraction of the cost. It was developed using older Nvidia AI chips, purportedly for under $6 million. Despite these limitations, R1 matches up to leading chatbots like OpenAI's o1 model and, on certain benchmarks, even surpasses them. For Trump and his tech allies, themselves often recent converts to his political enclave, the timing couldn't have been worse. The new administration had just announced its Stargate deal, which would raise $500 billion of private capital toward building AI infrastructure in the US. Its backers included OpenAI, Oracle, SoftBank, and the Emirati state-run investment firm MGX. That staggering sum is emblematic of the outrageous amounts of capital and processing power that industry talking heads have been insisting are necessary to develop large AI models. Thus far, the preferred way of making AI more powerful has been through scaling: leveraging more data and adding more AI chips.
In other words, making the AI models bigger, which isn't very efficient, money- or energy-wise. All Caught Up. With DeepSeek riposting without anywhere near the same amount of resources, Stargate's half-trillion-dollar price tag now looks like a ridiculous monument to the tech industry's arrogance. And generally, the stock market agrees. The buzz stirred by the Chinese AI model wiped out over $1 trillion from leading US tech stocks, or about two Stargates' worth. Roughly $600 billion of those losses came from Nvidia. That said, big names like OpenAI CEO Sam Altman and Microsoft CEO Satya Nadella have tried to keep a cool head amidst the chaos, praising DeepSeek's achievements. And Trump's own AI and crypto czar David Sacks opined on X that R1 "shows that the AI race will be very competitive," stressing that the US "can't be complacent."
  • FUTURISM.COM
    Energy Companies Stocks Plummet as DeepSeek Shows AI Doesn't Need Entire Coal Plants to Cheat on Homework
China-based AI startup DeepSeek has been absolutely liquidating the AI tech bubble since trading opened Monday. Basically, the company's powerful new AI model, known as R1, seems to demonstrate that filling schools with Anne Frank chatbots doesn't need to be the massive energy sink the AI industry has long insisted it is. That's rattled tech stocks in AI-adjacent industries, with the semiconductor company Broadcom down over 17.5 percent over two days, and Nvidia continuing to fluctuate after shedding nearly $600 billion in one day, dragging the NASDAQ down by an astonishing 3 percent. Now joining them are energy and utility corporations, as investors reel from DeepSeek's more efficient model seeming to use considerably less power than OpenAI's to achieve similar results. Essentially, fossil fuel outfits had been banking on huge new datacenters needing tons of energy. "Natural-gas producers EQT and Antero Resources each declined more than 9 percent. Pipeline giants Kinder Morgan and Williams Cos. ended 9.3 percent and 8.4 percent lower, respectively," the Wall Street Journal reported. "Nuclear-plant owner Constellation Energy and Vistra, which runs one of America's largest fleets of gas-fueled power plants as well as solar farms, were some of the top performing stocks in the S&P 500 last year.
On Monday, they dropped 21 percent and 28 percent, respectively." The timing of DeepSeek's reveal was astonishingly unfortunate, coming to prominence right after Donald Trump announced a $500 billion government-led AI venture, with the newly minted president musing that the awesome potential of AI might require power plants hooked up directly to datacenters. Even worse timing (which is saying something, under the circumstances) might have been Chevron announcing, after DeepSeek's reveal and the subsequent AI stock implosion, a huge new partnership to build natural gas plants in the United States with the express purpose of running energy-hungry new datacenters. In a more rational world, DeepSeek's efficiency, which was achieved through an open-source development model, would be a universal cause for celebration. Unfortunately for our new tech oligarchy, the announcement flies directly in the face of the tech sector's preferred narrative of AI scaling: the idea that simply adding more computing power will lead to better and better AI, even if the environmental impact is grim. Investors have taken this financially convenient hypothesis and inflated the market valuations of startups, energy conglomerates, and big tech companies eager to spill untold lakes of oil, coal, and uranium to stay ahead of China. "Thus far OpenAI and its peer scaling labs have sought to convince the public and policymakers that scaling is the best way to reach so-called [artificial general intelligence]," AI reporter Karen Hao wrote on X, formerly Twitter. "This has always been more of an argument based in business than in science." As such, tech critics are enjoying a bit of schadenfreude, given that the entire American model for AI development hinged on the tech's ability to replace nearly all knowledge workers in the US, who account for about 12 percent of the US labor force.
Now that DeepSeek claims to do OpenAI's job more efficiently than it can, the American for-profit tech corporation is getting a taste of its own medicine. "OpenAI has been burning through staggering sums of cash to keep up its scaling paradigm and has yet to figure out how to balance its checkbooks," Hao continued, "and it turns out it didn't need to spend so much cash."
  • SCREENCRUSH.COM
    Blockbuster Video Making a Comeback ... As a Restaurant?
Big news, '90s kids: Blockbuster may be coming back! But not quite in the way you'd expect. According to a post on Instagram, Blockbuster Video could be reincarnated as a nightclub, bar, restaurant, and amusement park. Yes, you heard that right: you could be experiencing a Blockbuster-themed amusement park in the near future. The post revealed that the owner of the Blockbuster trademark filed an application to use the brand in several new ventures. A publicly viewable application confirmed reports of a Blockbuster comeback and lists "night clubs; amusement centers; entertainment services in the nature of an amusement center attraction" as well as "bar and restaurant services; snack bar services" as potential uses of the brand in its new iteration. This isn't the first time Blockbuster has tried to expand its brand. In the '90s, Blockbuster operated an indoor theme park called Block Party, where, according to Park Rovers, guests could enjoy a city-street theme with high-tech attractions designed for 18-to-45-year-olds, including a motion simulator ride and a high-tech maze. The venture was tested in Albuquerque and Indianapolis under the leadership of ex-Disney executives Bill Burns and Fred Brooks. According to ReviewTyme on YouTube, the adult theme parks were marketed as a place where "grown-ups go to kid around." ReviewTyme's video noted that Block Party had not only games and various attractions but also a giant play center for the young at heart. Ultimately, the Block Party theme parks were a flop and didn't revitalize the brand as they'd hoped; instead, they became a hot spot for rowdy 20-somethings, according to ReviewTyme. Blockbuster filed for bankruptcy in 2010 and closed most of its stores as movie watchers transitioned to streaming. One Blockbuster store remains in Bend, Ore., owned by Debbie and Ken Tisher.
  • WEWORKREMOTELY.COM
    Chameleon: Senior Frontend Engineer
We are looking for a Senior Frontend Engineer who considers themselves a Product Engineer, excels in a fast-paced remote environment, is enthusiastic about building quality software, and enjoys tackling a diverse range of problems.
    Highlights
    - We are a remote-first, Series A, VC-backed software company with ~40 team members distributed across the Americas and Europe.
    - We are looking for a Frontend Engineer with 4+ years of SaaS experience, living in the US, Canada, or Brazil, to join our product team.
    - The salary range for this role is $120-180k per annum (the offer will be based on your seniority, equity, and geography).
    Essential skills needed for this role
    - 4+ years working full-time as an Engineer and 2+ years of React experience.
    - Fluency and comfort in core web technologies (JavaScript, HTML, CSS) and common libs/frameworks (React, TailwindCSS, Vite, TanStack Query, etc.).
    - A test-driven mentality during your code production process. Engineers work closely with QA, but we expect our Engineers to write tests prior to the QA process.
    - Familiarity with best practices around UX, accessibility, frontend performance, and feature-flagging.
    - Experience with building up and from a component library.
    Other requirements
    - A home office, a stable high-speed internet connection, and the ability to work independently in a remote environment (we'll send you a new M3 MacBook Air with 16GB RAM).
    - You are geographically located in the US, Canada, or Brazil and likely enjoy many aspects of working remotely. Note: even if you are willing to work these hours, we unfortunately cannot consider your application for this role.
    - Fluency (written and verbal) in English.
    Responsibilities as an Engineer at Chameleon
    - Product Engineering: you'll be evolving and maintaining our codebases, including our Dashboard, where our customers manage their usage of Chameleon, their Experiences, audiences, etc.; our Experience Editor, a browser extension used by our customers to create seamless, multi-purpose Experiences for their end-users; and chmln.js, our JS library loaded in our customers' web apps, responsible for loading and displaying Chameleon Experiences to many end-users.
    - Project Management: you will collaborate with our Product Team and keep a tight feedback loop with them, discussing details, providing feedback, and helping define and shape the specifications of the features and projects you'll be working on, while ensuring alignment with technical best practices.
    - Project Leadership: we value and encourage input and action beyond just the technical aspects. From specification to release, you'll be responsible for keeping a tight loop with the rest of the team, rapidly reaching out to your team to solve blockers, and ensuring a smooth rollout of new features for our customers.
    Engineering culture & team
    Our engineering team consists of ~10 Engineers, 3 QAs, 1 PM, and 2 designers. Learn more about our team, culture, and vision on our About page. We're an async-first company. But what does that mean?
    - Our recurring meeting cadence is low and we default to async discussions (via Slack threads, Linear ticket comments, Loom videos, etc.).
    - We value and encourage self-management. Trust is a key element of our success as individuals.
    - We encourage proactive communication, collaboration, and action on blockers, with messages in public channels so we have visibility if someone needs help.
    - Engineers use Tuple for pairing, to work together on projects/features.
    - Each person adjusts their work schedule according to what works best for them, considering work <> life balance.
    - No daily meetings: we do not have daily standup meetings. Instead, we offer optional office-hours time slots through the week, and also encourage ad-hoc Tuple pairing sessions.
    - Everyone records a ~5-minute Loom video at the end of each week to recap what they've been working on. The entire team has visibility of work in progress, can chime in with questions/comments, and provide feedback.
    - We have a weekly Show & Tell meeting for exchanging knowledge, learnings, and questions, showcasing work in progress among ourselves, or just hanging out and bonding.
    The product team works in small pods focused on a specific feature/product (Quality, UX, Better, Bets, etc.). Check out the full description on our website here.
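    The posting's mention of feature-flagging can be illustrated with a minimal sketch in plain TypeScript. Everything here (the `Flags` type, the `isEnabled` helper, the flag names) is hypothetical for illustration, not Chameleon's actual API:

    ```typescript
    // Minimal feature-flag sketch. All names are hypothetical illustrations.
    type Flags = Record<string, boolean>;

    // Return the flag's value if it is set, otherwise the fallback.
    // `??` only falls through on null/undefined, so an explicit `false`
    // stored in the flag map is respected.
    function isEnabled(flags: Flags, name: string, fallback = false): boolean {
      return flags[name] ?? fallback;
    }

    const flags: Flags = { newEditor: true, darkMode: false };

    console.log(isEnabled(flags, "newEditor")); // explicitly on
    console.log(isEnabled(flags, "darkMode"));  // explicitly off
    console.log(isEnabled(flags, "unknown"));   // falls back to the default
    ```

    In a test-first workflow like the one the posting describes, the assertions for a helper like this would be written before the helper itself.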