• Corinne Busche, Dragon Age: The Veilguard director, leaves BioWare and EA after 18-year stint
    www.gamedeveloper.com
    Justin Carter, Contributing Editor · January 21, 2025 · 1 Min Read · Image via BioWare/EA.
    At a Glance: After helping 'right the ship' at BioWare, Busche has joined another studio that had 'an opportunity I couldn't turn down.'
    Dragon Age: The Veilguard director Corinne Busche has departed BioWare and EA. In a statement to Eurogamer last week, she explained that she was "presented with an opportunity I couldn't turn down."
    Busche first joined EA in 2006 as a designer and later worked on The Sims 3's Into the Future DLC. In 2019 she moved to BioWare as a lead systems designer and gradually worked her way up to game director on Veilguard, which released last October.
    "Righting the ship" at BioWare
    In her statement, Busche said her exit was voluntary and that she left having done "what I set out to do at BioWare, to come in and help right the ship. At the heart of it, this was about my own fulfillment." Her comments reflect Veilguard's origins as a live-service game that, after several years of development, was converted into a single-player affair as BioWare re-prioritized Dragon Age and Mass Effect following Anthem's collapse in 2021.
    "The chance to return [Dragon Age] to a proper quality single player RPG was the privilege of a lifetime," she continued. "It was hard fought, as games with such tumultuous dev cycles rarely end up shipping, and even more rarely turn out great. We, as a team, did it. And it was hard. It took a toll on me. BioWare still has a lot of work to do culturally, but I do believe they are on the right footing now."
    Speaking to her next role, Busche affirmed she would remain "in the CRPG space and upholding the traditions of great characters."
  • Trump says he's open to Musk or Ellison buying TikTok
    www.theverge.com
    President Donald Trump says he'd be open to his buddies Elon Musk or Larry Ellison buying TikTok.
    "Larry, let's negotiate in front of the media," Trump said at a press conference with the Oracle co-founder, SoftBank CEO Masa Son, and OpenAI CEO Sam Altman to announce a $500 billion artificial intelligence infrastructure investment. "What I'm thinking about saying to somebody is, buy it, and give half to the United States of America. Half, and we'll give you the permit. And they'll have a great partner, the United States."
    "Sounds like a good deal to me, Mr. President," Ellison said.
    It's still not entirely clear how all of this would work, or how the US could legally operate a speech platform without violating the First Amendment. But it's one of the earliest examples of how Silicon Valley's coziness with Trump could manifest over the next four years. Trump signed an executive order on Monday instructing his administration not to enforce the law on service providers covered by the forced divestiture bill (which include Oracle, Apple, and Google) for 75 days. But legal experts say the action provides hardly any legal cover for those companies to violate federal law and risk $850 billion in penalties. Even so, Oracle has appeared to rely on Trump's assurances to help TikTok run in the US after the January 19th sale deadline, though the company has not yet commented on it directly. TikTok's China-based parent company ByteDance still has other offers on the table, including from billionaire Frank McCourt's Project Liberty and now, apparently, from YouTube creator MrBeast, whose investor group is receiving legal counsel from a team that includes the brother of Trump's attorney general pick.
    As he was leaving the briefing, a reporter asked Trump if he has TikTok on his phone. "No, but I think I might put it there," Trump responded. "I think I'll get it right now."
  • Microsoft is letting OpenAI get its own AI compute now
    www.theverge.com
    Microsoft and OpenAI announced Tuesday that they have adjusted their partnership so that OpenAI can access competitors' compute. "The new agreement includes changes to the exclusivity on new capacity, moving to a model where Microsoft has a right of first refusal (ROFR)," Microsoft says. "To further support OpenAI, Microsoft has approved OpenAI's ability to build additional capacity, primarily for research and training of models."
    The foundation of their relationship (which runs through 2030) stays pretty much the same: Microsoft keeps its exclusive rights to OpenAI's tech for products like Copilot, and OpenAI's API remains exclusive to Azure. They'll maintain their two-way revenue-sharing setup (it's been reported that Microsoft gets 20 percent of OpenAI's revenue). Prior to today's change, OpenAI was locked into using Microsoft's Azure cloud infrastructure exclusively for its computing needs. The news follows the announcement of a joint venture between Arm, Microsoft, Nvidia, Oracle, and OpenAI to build a system of data centers in the U.S. called Stargate.
    The models OpenAI hopes to build and the user base it's looking to serve require billions of dollars in compute. It has been previously reported that some OpenAI shareholders felt Microsoft wasn't moving fast enough to supply OpenAI with computing power, which is why the startup partnered with Oracle back in June (with Microsoft's blessing) for the necessary compute.
    There's been a lot of buzz about Microsoft and OpenAI facing relationship woes after OpenAI CEO Sam Altman was briefly ousted from the company, causing a lot of very public drama. The New York Times reported that the relationship has grown increasingly strained due to financial pressures at OpenAI, concerns about stability, and growing friction between employees at both companies.
    Last March, Microsoft hired Inflection CEO Mustafa Suleyman to lead its consumer AI efforts, along with most of Inflection's staff, in a $650 million deal. According to The New York Times report, this move particularly angered some OpenAI leadership, including Altman.
    OpenAI's deal with Microsoft also has an unusual escape clause: if OpenAI creates artificial general intelligence (AGI), it could close off Microsoft's access to some of its most powerful models developed after that point. AGI, reportedly, is defined as a system capable of generating more than $100 billion in profits. This was originally meant to keep such powerful AI from being commercialized, but now OpenAI is reportedly considering dropping this provision, likely to secure more Microsoft funding.
  • TAI #136: DeepSeek-R1 Challenges OpenAI-o1 With ~30x Cheaper Open-Source Reasoning Model
    towardsai.net
    Author(s): Towards AI Editorial Team. Originally published on Towards AI.
    What happened this week in AI by Louie
    This week, the LLM race was blown wide open with DeepSeek's open-source release of R1. Performance is close to o1 on most benchmarks. R1 is built on top of DeepSeek's V3 model, and its API output token prices are roughly 30x lower than o1's. It's available under the MIT license, supporting commercial use and modifications. DeepSeek also disclosed many of its methods and experiments in its paper, in stark contrast to the secrecy surrounding reasoning techniques at AI labs in the U.S.
    R1 wasn't the only huge LLM release from China this week. Two new LLM competitors hit the ground running with very strong models. MiniMax-01, a 456bn-parameter Mixture of Experts model, challenges Google's Gemini models for SoTA long-context capabilities, offering a 4-million-token input context thanks to its new Lightning Attention (hybrid) architecture. Kimi k1.5, on the other hand, is another new reasoning model that challenges o1 on multimodal capabilities.
    DeepSeek's release included three different models/model families:
    DeepSeek-R1-Zero was an experiment that applied reinforcement learning (RL) directly to a base language model (V3) without any prior supervised fine-tuning. In essence, they attempted to teach the model to reason purely through trial and error, providing it with rewards for correct answers and well-formatted responses. This is somewhat analogous to how AlphaZero mastered games like Go and chess, learning solely through self-play and a reward signal based on winning or losing. The results were very impressive on many benchmarks; however, the model fell short in some fields, and its output was often messy and hard to read.
    To address the limitations of R1-Zero and enhance its reasoning abilities further, the DeepSeek team introduced R1, which incorporated a cold start of human-like reasoning data before applying reinforcement learning. This involved creating a small dataset of examples demonstrating desired reasoning patterns and output formats. This was followed by a multi-stage process: first, reasoning-oriented RL was applied, focusing on tasks with clear solutions, like math and coding. Then they generated a new batch of high-quality data samples for fine-tuning, created by filtering model outputs during the RL phase. Finally, they applied a last round of reinforcement learning, this time focusing on general helpfulness and harmlessness in addition to reasoning.
    Across key benchmarks like AIME 2024, Codeforces, GPQA Diamond, and MATH-500, DeepSeek-R1 consistently performs on par with OpenAI's o1 (79.8 vs. 79.2, 96.3 vs. 96.6, 71.5 vs. 75.7, and 97.3 vs. 96.4, respectively). They also got very similar performance on the SWE-bench Verified coding challenge (49.2 vs. 48.9).
    The final piece of DeepSeek's work involved distilling the advanced reasoning capabilities of R1 into smaller, cheaper, dense models (Llama and Qwen series). Using the larger R1 model as a teacher, they fine-tuned several smaller models (ranging from 1.5B to 70B parameters) on the high-quality data curated from the R1 training process. The smaller distilled models significantly outperformed other models of similar sizes and even rivaled much larger models on reasoning benchmarks. DeepSeek-R1 outputs distilled into the tiny Qwen-1.5B even beat 4o on some math and code benchmarks!
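    The newsletter describes this distillation step only at a high level. As a rough, hypothetical sketch of what distillation via supervised fine-tuning on teacher-generated reasoning traces can look like in practice (the student model name and the toy trace below are placeholders, not DeepSeek's actual pipeline or data), one might set it up along these lines with Hugging Face transformers:

```python
# Hypothetical sketch of distillation-by-SFT: fine-tune a small "student" causal LM
# on (prompt, reasoning-trace) pairs generated by a larger "teacher" reasoning model.
# Model name and the toy dataset are placeholders; this is not DeepSeek's training code.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

student_name = "Qwen/Qwen2.5-1.5B"  # placeholder small student model
tokenizer = AutoTokenizer.from_pretrained(student_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(student_name)

# Toy stand-in for teacher-curated traces: prompt plus the teacher's chain of thought and answer.
traces = [
    {"prompt": "What is 17 * 24?",
     "completion": "<think>17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408</think> 408"},
]

def to_features(example):
    # Concatenate prompt and teacher completion into one causal-LM training sequence.
    text = example["prompt"] + "\n" + example["completion"] + tokenizer.eos_token
    return tokenizer(text, truncation=True, max_length=1024)

dataset = Dataset.from_list(traces).map(to_features, remove_columns=["prompt", "completion"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="distilled-student", per_device_train_batch_size=1,
                           num_train_epochs=1, learning_rate=1e-5),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal-LM loss
)
trainer.train()
```

    Note that this sketch uses plain supervised fine-tuning on the teacher's curated outputs; no reinforcement learning is applied at the student stage here.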
    Why should you care?
    DeepSeek-R1's release is significant for several reasons. First, its open-source nature and competitive performance at a fraction of the cost of o1 democratize access to advanced reasoning capabilities. The API costs of DeepSeek-R1 per million tokens are currently $0.14 for cached inputs, $0.55 for non-cached inputs, and $2.19 for outputs. In contrast, the API costs for o1 are $7.50, $15, and $60 respectively: about a 30x difference in costs (see the back-of-the-envelope comparison below). Moreover, the open model weights open up huge opportunities for adapting and fine-tuning these models for different domains and industries. The open release of its training methods also provides a blueprint for many others to follow. One surprise from the paper was that simpler techniques for enabling reasoning abilities worked better than some more complex options. We think there is a huge area for exploring and experimenting with these techniques now that scaled reinforcement learning for LLMs has been unlocked!
    The huge success shown by distilling big reasoning models into much smaller non-reasoning models also suggests we will get another wave of rapid improvement and cost reduction across the LLM spectrum.
    The fact that a Chinese company is leading this charge also adds a geopolitical dimension, particularly given that DeepSeek has managed to achieve this despite GPU export restrictions and a far smaller budget than Western AI labs.
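    To make the pricing gap concrete, here is a quick back-of-the-envelope comparison using the per-million-token prices quoted above. The workload mix is an invented example for illustration, not a figure from the newsletter:

```python
# Rough API cost comparison from the per-million-token prices quoted above:
# DeepSeek-R1: $0.55 (non-cached input) / $2.19 (output); o1: $15 (input) / $60 (output).

def api_cost(input_m, output_m, price_in, price_out):
    """Total USD cost for a workload measured in millions of tokens."""
    return input_m * price_in + output_m * price_out

# Hypothetical monthly workload: 10M input tokens, 2M output tokens.
r1_cost = api_cost(10, 2, price_in=0.55, price_out=2.19)    # -> $9.88
o1_cost = api_cost(10, 2, price_in=15.00, price_out=60.00)  # -> $270.00

print(f"DeepSeek-R1: ${r1_cost:.2f} | o1: ${o1_cost:.2f} | ratio ~{o1_cost / r1_cost:.0f}x")
```

    The exact multiple depends on the input/output mix and on how much input is served from cache, but for non-cached inputs and outputs alone the ratio is already around 27x, in line with the roughly 30x figure above.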
    Introducing Our Brand New 8-hour Generative AI Primer Course
    A programming language-agnostic 1-day LLM Bootcamp designed for developers. 95% of developers I meet are only scratching the surface of what LLMs can do. When working with LLMs, you are CONSTANTLY making decisions such as open-source vs. closed-source, how to fit LLMs into your use case, whether no-code solutions are good enough for your workflow, the extent to which to consider the limitations of LLMs, and so on. And the biggest gap we see on top of all this is whether you are using LLMs to their full capacity, even with chat interfaces like ChatGPT or APIs for models like Gemini. The question is: are you?
    This certification course is specifically designed to cut through the noise, help you ask the right questions, and show you exactly how to find answers. LLMs are moving so fast, with updates being released almost every day; what you need is an intuitive framework, and just like LLMs, you need enough context to know what developments are relevant to you and your use case so you can make the most out of this transformative technology.
    In just 8 hours, through lessons, videos, exercises, quizzes, and hands-on projects, you'll:
    Dive deep into the psyche of LLMs: how they work, how to make them work better, and how to train them for tasks you hate doing.
    Work with leading AI models and integrate them into your workflows seamlessly.
    Build your own no-code/low-code prototype that brings your ideas to life.
    You'll finish before you even realize it, and by tomorrow, you'll already be AI-proofed. Secure your spot now!
    Hottest News
    1. OpenAI Released Scheduled Tasks in ChatGPT
    OpenAI has introduced scheduled tasks in ChatGPT for Plus, Pro, and Team plans. These allow automated prompts and notifications on the web, iOS, Android, and macOS. Users can assign tasks like daily updates or reminders and receive notifications via push or email. Windows support will follow in Q1. Currently, a limit of 10 active tasks is enforced.
    2. Chinese AI Company MiniMax Releases New Models
    Chinese AI company MiniMax, an Alibaba- and Tencent-backed startup, debuted three new models. MiniMax-Text-01 is a text-only model, while MiniMax-VL-01 can understand images and text. T2A-01-HD, meanwhile, generates audio, specifically speech. MiniMax claims that MiniMax-Text-01 performs better than models such as Gemini 2.0 Flash and that MiniMax-VL-01 rivals Claude 3.5 Sonnet.
    3. Kimi Launches New SOTA Multimodal Model
    Kimi developer Moonshot AI introduced the new Kimi k1.5 multimodal thinking model. Updates include long-context extension, improved policy optimization, and multimodality. Its report shows the model's SOTA short-CoT performance outperforming GPT-4o and Claude 3.5 Sonnet on AIME, MATH-500, and LiveCodeBench by a large margin.
    4. Alibaba Slashes Prices on LLMs by Up to 85% As China's AI Rivalry Heats Up
    Alibaba Cloud announced an 85% price reduction on its Qwen-VL visual language model. The move demonstrates how competition among China's technology giants to win more business for their nascent artificial intelligence products is intensifying.
    5. Google Is Forming a New Team To Build AI That Can Simulate the Physical World
    Google is forming a new team led by Tim Brooks under DeepMind to build AI models for simulating the physical world, collaborating with the Gemini, Veo, and Genie teams on world models. These models aid in video generation, multimodal data, and interactive environments.
    6. Mistral Signs Deal With AFP To Offer Up-to-Date Answers in Le Chat
    Mistral has announced a content deal with newswire Agence France-Presse (AFP) to improve the accuracy of answers in Le Chat, Mistral's chatbot. Le Chat will be able to tap into AFP's stories, around 2,300 per day in six languages, and query AFP's entire archive dating back to 1983.
    7. President Trump Repeals Biden's AI Executive Order
    President Donald Trump revoked a 2023 executive order signed by former President Joe Biden that sought to reduce the potential risks AI poses to consumers, workers, and national security. During his campaign, Trump promised policies to support AI development rooted in free speech and human flourishing.
    Five 5-minute reads/videos to keep you learning
    1. Retrieval-Augmented Generation (RAG) vs. Cache-Augmented Generation (CAG): A Deep Dive Into Faster, Smarter Knowledge Integration
    Retrieval-augmented generation (RAG) and cache-augmented generation (CAG) are two methodologies for generating more context-aware responses from LLMs. This article provides an extensive, step-by-step guide on both approaches, dives into their workflows, compares their advantages and drawbacks, and offers an implementation guide for CAG.
    2. Why AI Language Models Choke On Too Much Text
    GPUs revolutionized AI by enabling massive parallel processing, leading to transformer models scaling rapidly. Despite advancements, transformers remain inefficient with long contexts due to quadratic compute costs. This article discusses why this happens and shares some approaches to solving this problem.
    3. Simplifying Alignment: From RLHF To Direct Preference Optimization (DPO)
    This article explores how Direct Preference Optimization (DPO) simplifies aligning large language models with human preferences compared to Reinforcement Learning from Human Feedback (RLHF). It breaks down the math and highlights why DPO might be the smarter, easier way forward.
    4. Mastering Data Scaling: The Only Guide You'll Ever Need (Straight From My Journey)
    Data scaling is a crucial step in ensuring optimal model function. It prepares datasets for machine learning models. This article discusses why scaling is important, its types, and how and when to apply it.
    5. Takes On Alignment Faking in Large Language Models
    Researchers revealed that Claude 3 Opus fakes alignment with training objectives to avoid behavioral modification, a phenomenon labeled "alignment faking." This author shares their take on the results.
    Repositories & Tools
    The micro diffusion repository demonstrates the training of large-scale diffusion models from scratch on a minimal budget.
    LocalAI is a free, open-source alternative to OpenAI, Claude, and others.
    Maxun lets you train a robot in 2 minutes and scrape the web on auto-pilot.
    Agentless is an agentless approach to automatically solving software development problems.
    CopilotKit provides React UI and infrastructure for AI copilots, in-app AI agents, AI chatbots, and more.
    Top Papers of The Week
    1. LlamaV-o1: Rethinking Step-by-step Visual Reasoning in LLMs
    LlamaV-o1 redefines step-by-step visual reasoning in large language models by introducing a benchmark with eight challenge categories and a metric for granular evaluation. The multimodal model, trained through multi-step curriculum learning, surpasses existing models like Llava-CoT by 3.8% in performance across six benchmarks and runs five times faster during inference.
    2. KaLM-Embedding: Superior Training Data Brings A Stronger Embedding Model
    Researchers developed KaLM-Embedding, a multilingual embedding model using high-quality, diverse training data. Techniques like persona-based synthetic data, ranking consistency filtering, and semi-homogeneous task batch sampling enhance its performance. The model excels in multilingual embedding tasks, outperforming others of similar size on the MTEB benchmark.
    3. Titans: Learning to Memorize at Test Time
    This paper introduces a new family of architectures called Titans, based on a new neural long-term memory module. The module learns to memorize historical context and helps attention attend to the current context while utilizing long-past information. Experimental results show that Titans are more effective than Transformers and recent modern linear recurrent models.
    4. Transformer²: Self-adaptive LLMs
    This paper introduces Transformer² (Transformer-squared), a framework that adapts LLMs for unseen tasks in real time by selectively adjusting only the singular components of their weight matrices. During inference, Transformer² employs a dispatch system to identify the task properties, and then task-specific expert vectors, trained using reinforcement learning, are dynamically mixed to obtain targeted behavior for the incoming prompt. It outperforms approaches such as LoRA with fewer parameters.
    Quick Links
    1. Six charts about AI revenue. OpenAI captures approximately 62.5% of consumer AI spending. xAI's revenue jumped from $5M to $100M, while OpenAI soared from $200M to $5B. Sapphire Ventures reports 28 AI-native companies exceeding $25MM in ARR, predicting substantial growth for AI-native startups in the coming year.
    2. DeepSeek-R1 achieves performance comparable to OpenAI's o1 system across mathematics, coding, and general reasoning tasks, cementing its place as a leading competitor.
    DeepSeek has open-sourced DeepSeek-R1-Zero and DeepSeek-R1, along with six smaller distilled models.
    Who's Hiring in AI
    Applied AI Engineer, Applied Science @Mistral AI (Paris, France)
    Cambridge Internship in ML Model Optimization @Microsoft Corporation (Cambridge, United Kingdom)
    Machine Learning Software Engineering Undergraduate Intern @INTEL (Santa Clara, CA, USA)
    Tech Consulting AI LLM Developer Manager @Accenture (Multiple Locations)
    Full-Stack Developer (React + Python + Azure) @Solvd (Remote)
    GenAI/Machine Learning Technical Project Manager @Deloitte (Multiple US Locations)
    Interested in sharing a job opportunity here? Contact [emailprotected].
    If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor. Published via Towards AI
  • Sonic the Hedgehog 4 Release Date Announced
    www.ign.com
    Sonic the Hedgehog 4 is set to hit theaters in March 2027.
    According to Variety, Paramount has scheduled the next Sonic movie to hit the big screen on March 19, 2027, giving us two years until we see the blue blur back in action. No further details on the next Sonic movie have been released beyond the date.
    This seems like a no-brainer after the most recent film in the series, Sonic the Hedgehog 3, made $218 million at the domestic box office and over $420 million worldwide. It is officially the highest-grossing Sonic film in the franchise after the first film recorded a healthy $148 million, especially given the controversy surrounding the original Sonic design, which was later changed heavily in post-production.
    Sonic the Hedgehog 3 also has the honor of being the second highest-grossing video game movie of all time in North America, behind only the animated Super Mario Bros. Movie, once again continuing the Nintendo and Sega rivalry on the big screen.
    The live-action Sonic franchise has grown steadily over the years and now includes three feature films as well as a Knuckles streaming TV show spinoff. Based on the hit Sega video game franchise, the films follow Sonic (voiced by Ben Schwartz) as he takes down his nemesis Dr. Robotnik, played by Jim Carrey. Each new film has introduced more of the Sonic cast, including Tails (Colleen O'Shaughnessey) and Knuckles (Idris Elba), with the most recent film finally introducing Shadow the Hedgehog (Keanu Reeves).
    Sonic 3 has already revealed the next character to join the franchise, though we won't spoil that here. You can instead read our new characters guide at your own peril. Be sure to also read our Sonic 3 review here.
    Matt Kim is IGN's Senior Features Editor.
  • Netflix Announces Yet Another Price Hike as it Adds a Record Number of New Subscribers
    www.ign.com
    Netflix just had a record quarter of new subscriber growth, crossing the 300 million subscriber milestone. To celebrate the milestone, the company has announced yet another price hike on most of its plans in the U.S., Canada, Portugal, and Argentina.
    In today's full-year 2024 earnings, Netflix reported it ended the fiscal year with 302 million paid subscribers, adding a quarterly record 19 million in Q4 and up a record 41 million for the full year. This is the last quarter that Netflix will report subscriber growth, though the company claims it will "continue to announce paid memberships as we cross key milestones."
    Accompanying these records, however, was a tiny asterisk buried in the letter to shareholders. Netflix is raising its prices yet again, just a little over a year after its last price increase in 2023. It also raised prices in 2022, and on average raised them roughly $1 to $2 a year before that, going all the way back to its first price hike in 2014.
    "As we continue to invest in programming and deliver more value for our members, we will occasionally ask our members to pay a little more so that we can re-invest to further improve Netflix," the company said in its shareholder letter. "To that end, we are adjusting prices today across most plans in the US, Canada, Portugal and Argentina (which was already factored into the 2025 guidance we provided in October 2024)."
    Notably, the letter did not clarify exactly what these price hikes would entail. Both The Wall Street Journal and Bloomberg are reporting that the ad-supported tier is going from $6.99 to $7.99 per month, standard ad-free will rise from $15.49 to $17.99 per month, and the premium tier will go from $22.99 to $24.99.
    The company has also announced a new "extra member with ads" plan, which per Bloomberg and WSJ lets individuals on an ad-supported plan add someone outside their household to an existing plan for an additional fee. Previously, extra members were limited to standard or premium plans.
    Overall, Netflix's revenue for the quarter was up 16% year-over-year to $10.2 billion, and its annual revenue was also up by the same percentage to $39 billion. The company is forecasting between 12% and 14% year-over-year growth in 2025.
    Rebekah Valentine is a senior reporter for IGN. You can find her posting on BlueSky @duckvalentine.bsky.social. Got a story tip? Send it to rvalentine@ign.com.
  • Will LEGO and Twilight Fans Crossover?
    www.denofgeek.com
    Hold on tight, spider-monkey.
    That's right, LEGO is teaming up with Lionsgate to build a set based on the ever-so-popular (or infamous, depending on who you talk to) The Twilight Saga. One of the highest-grossing film franchises of all time, Twilight (2008) and its subsequent sequels have remained relevant in the pop culture eye despite the film's 20th anniversary looming.
    More and more over the last few years, LEGO has based sets off popular brands and intellectual properties, focusing more on IP-driven products than ever before. Some of the group's most significant licenses include blockbuster franchises such as Star Wars, Harry Potter, Marvel, and Super Mario. But the Danish toymaker hasn't shied away from more niche exports, with sets inspired by lesser-known brands such as Tron, Trolls, and Horizon leading the pack. Even compared to those, is Twilight too outside the realm of normalcy for LEGO?
    The LEGO set, 21354: The Cullen House, was selected as the 62nd Ideas project to be made into an official product, releasing in early February for devotees across the globe. LEGO Ideas is a website where builders can pitch their own concepts for future sets, and fans can vote on the ideas they'd like to see made. Once a project hits 10,000 supporters, LEGO then officially considers it for production, with only a handful physically being made. There is obviously some level of demand for a LEGO Twilight set, but recent trends suggest LEGO could be overestimating its potential reach.
    Do Twilight fans and LEGO fans have a significant crossover? Moreover, will Twilight fans be willing to fork over $219.99 for a LEGO set of their favorite movie?
    It's clear Twilight aficionados still love their soapy teen romance with a vampiric twist. The film has experienced resurgences on streaming, and edits of iconic couples from the motion picture occupy a particular corner of TikTok, a la some popular Tumblr accounts circa 2010. Twilight fans are also no stranger to mass merchandising. From makeup lines like ColourPop to vinyl figures from Funko, there is no shortage of swag to buy from the saga. A LEGO set appears to be the logical next step.
    However, some recent LEGO offerings suggest the Twilight set might have an uphill battle when it comes to breaking through with the two brands' respective audiences. In late 2021, LEGO released 10291: Queer Eye - The Fab 5 Loft. Based on the long-running Netflix reality series of the same name, the Queer Eye set replicated the gang's iconic loft and some specific transformations from the series. While the product itself was expertly designed, it didn't sell. In fact, the Fab 5 Loft was a shelf-warmer despite heavy discounts from most retailers. The set is worth less now than its retail price, despite retiring at the end of 2022.
    Another peculiar LEGO release occurred in the spring of 2023, when the group extended its reach into K-pop. Another Ideas set, this time 21339: BTS Dynamite, witnessed a similar fate to Queer Eye before it. Despite being tied to one of the most prominent music groups in the world, the BTS set didn't sell. Even significant clearance couldn't move this set.
    Both Queer Eye and BTS are examples of LEGO overestimating how much crossover between fanbases there actually is. Yes, Queer Eye is a beloved reality show, and BTS is a cultural phenomenon, but their fans aren't necessarily LEGO fans. In turn, these supporters clearly didn't want to shell out the $99.99 retail price for both of these set releases, even if the product was nicely designed.
    LEGO Twilight could be in danger of suffering the same fate. The Cullen House set is even more expensive than the aforementioned kits, ringing in at a grand total of $219.99. The price isn't unfair, seeing as the set contains over 2,000 pieces, but it's a pretty penny to pay for an item that'll take up a significant amount of space. Perhaps Twilight supporters will turn out in droves for the upcoming release of 21354, but recent LEGO history suggests it could end up as another dust collector.
    You can preorder the LEGO Ideas 21354: Twilight - The Cullen House online here.
  • Max Medical Drama The Pitt Is 24's Real-Time Heir Apparent
    www.denofgeek.com
    Dr. Michael "Robby" Rabinavitch has to pee. The senior attending at Pittsburgh Trauma Medical Hospital, played by Noah Wyle, makes this announcement midway through the third episode of the Max medical drama The Pitt. Before he can get to the bathroom, however, he first has to oversee two simultaneous open-heart surgeries. Then he must speak with a mother and father about their son's fentanyl overdose. Amid all that chaos he resolves to keep an eye on a fourth-year medical student at the learning hospital who just lost his first patient.
    By the time Robby finally unzips in front of a urinal, he's interrupted yet again by Dr. Frank Langdon (Patrick Ball), who needs a "cardiovert unstable AFib in North 1." The addendum that "the systolic's 90" is enough for Robby to abandon his urination mission altogether and rush out of the bathroom to more chaos. Things move pretty quickly in the emergency department at Pittsburgh Trauma Medical Hospital, which Robby calls "The Pitt." And that goes not only for the show's characters, but its audience as well. Because like the great Fox espionage thriller 24 before it, The Pitt operates in real time, or close to it.
    There is no ticking clock at the bottom of the screen on The Pitt, but each of the show's 15 episodes is designed to roughly correspond to an hour spent during a 15-hour shift in the titular Pitt. The series follows a weekly release model, and the three episodes released thus far are titled "7:00 A.M.," "8:00 A.M.," and "9:00 A.M." None of these episodes are precisely 60 minutes long, but they come close enough in the 51-53-minute range.
    The real-time format is far from The Pitt's only selling point. Starring ER's Wyle and created by former ER writer R. Scott Gemmill, the series is essentially a heightened version of the long-running NBC medical classic with an added prestige streaming sheen. In fact, The Pitt studio Warner Bros. Television was hit with a lawsuit from the Michael Crichton estate, which alleged that the show was a reworking of an ER spinoff. The Pitt is also one of the first post-Covid hospital dramas to use the pandemic as a thematic grounding rod. Dr. Robby still clearly bears the mental and physical scars of being a healthcare worker during that traumatic time.
    Still, it's the real-time aspect that gives The Pitt its greatest competitive advantage over comparable TV properties. The pace of the show is quite simply relentless. Very rarely do Robby and his fellow doctors exit one scene to enter into a calmer one. It's early in the day in Western Pennsylvania, and the emergencies continue to mount up. At one point the socially awkward former veterans affairs doctor Melissa "Mel" King (Taylor Dearden) steps outside for a breather only to be immediately met by a van tearing into the parking lot to dump off a gunshot wound victim.
    How realistic this confluence of emergencies is for one hospital is up for debate, though for what it's worth, many real-life healthcare professionals have commended the show for its accurate depiction of the doctor-patient dynamic and the deployment of proper medical jargon (see: "cardiovert unstable AFib"). But what's not debatable about The Pitt's breakneck approach is that it works. In fact, through its first few episodes it's already working better than the pioneer of the real-time TV season format.
    24 was not the first TV series or movie to utilize a running clock as a plot device. It was, however, the first to make the connection that there are 24 hours in a day and TV seasons of hour-long dramas tend to run in the 20-25-episode range, creating the opportunity for the mythical day-long televised experience (with commercials, of course). The idea for 24's format was so strong that it preceded the plot of the show itself. Creators Joel Surnow and Robert Cochran originally intended for their series to document the dramatic 24-hour period in advance of a wedding before shifting to an action concept featuring the head of the Counter Terrorist Unit racing against the clock to save his daughter.
    The thing about 24, though, is that 24 hours is truly So. Many. Hours. The writers and producers of the series would invariably run into this realization time and time again. Of the show's eight proper seasons (outside of the TV movie 24: Redemption and spinoff 24: Live Another Day), only season 4 really attempted to follow one concurrent terrorist plot over the span of a full day, to narratively messy results. The other seven seasons took the far more sensible approach of breaking their 24 hours into two distinct storytelling blocks: one in which Jack Bauer (Kiefer Sutherland) saved the day, and another in which he uncovered the real big bad behind the terrorist conspiracy.
    At only 15 hours, The Pitt appears to have learned the right lessons from its real-time progenitor just as it's learned the right lessons from its medical drama forefathers. Viewers don't need to see a running clock to feel the tension, nor do they need a full 24-hour period to get the bit. Viewers need only consistent escalation and movement. Thus far, The Pitt has found a way to work with its gimmick rather than become beholden to it. By the time episode 4 opens at 10:00 A.M. Eastern Time, Dr. Michael "Robby" Rabinavitch will still have to pee. The tension of whether he makes it to the bathroom before the next emergency will have the same dramatic weight as Jack Bauer defusing a bomb.
    The first three episodes of The Pitt are available to stream on Max now. New episodes premiere Thursdays at 9 p.m. ET on Max.
  • Apple Watch Ultra 3: Three new features are coming later this year
    9to5mac.com
    Apple Watch Ultra, outside of a very nice new titanium black finish, hasn't been meaningfully upgraded since 2023. But later this year that's going to change when the Apple Watch Ultra 3 arrives. Here are three new features coming to the Apple Watch Ultra 3.
    High blood pressure detection
    New health features are one thing you can consistently expect from Apple Watch upgrades. According to Mark Gurman, this year Apple is planning to introduce high blood pressure detection:
    "The blood-pressure feature is designed to work in a similar way to Apple's sleep apnea detector. It won't give users specific readings such as diastolic or systolic levels but it will inform them that they may be in a state of hypertension."
    Gurman notes that this project has been years in the making, and has faced delays before, but a launch this year is currently planned.
    Satellite messaging sans iPhone
    Apple Watch Ultra has always been geared toward adventurers and explorers, despite the product gaining plenty of fans with more casual needs. With the Ultra 3, Apple will add a new capability designed especially for those who enjoy going off the grid.
    Messaging over satellite is coming to the Apple Watch Ultra 3. Here's Gurman again:
    "The technology will let smartwatch users send off-the-grid text messages via Globalstar Inc.'s fleet of satellites when they don't have a cellular or Wi-Fi connection."
    iPhone users have had this technology for a few years, and it got even better with a critical iOS 18 update. The Apple Watch, however, has never gotten its own satellite connectivity. With the new Ultra 3 it will be easier than ever for users to leave their iPhone behind to truly embrace the outdoors.
    5G cellular for the first time
    This third upgrade also focuses on connectivity. Whereas previous Apple Watch Ultra models have been limited to 4G LTE connectivity, Apple plans to bring 5G to the Ultra 3 this year.
    This upgrade will reportedly take the form of 5G RedCap, a less power-hungry version of 5G. RedCap can't hit the same max speeds as your iPhone's 5G, but it will ensure your Watch doesn't take a big battery hit either.
    Apple Watch Ultra 3 wrap-up
    The new Apple Watch Ultra 3 isn't expected to arrive until the fall, so there's still plenty of time for additional feature leaks to happen. Perhaps we'll get even better battery performance this time around, for example. But for now at least, it sounds like Apple is doubling down on the Ultra 3 as the ultimate Watch for adventurers.
    What new features do you want to see in the Apple Watch Ultra 3? Let us know in the comments.
  • Silo season 3 will solve the show's darkness problem, says creator
    9to5mac.com
    Apple TV+ just wrapped up Silo season 2, but the series has already been picked up for seasons 3 and 4. Now, the show's creator has revealed a big visual change to expect in the next season.
    Showrunner on Silo season 3: there will be sunshine
    Modern TV shows can at times be poorly lit. Call it the Game of Thrones problem, since some of that show's most expensive and anticipated episodes famously were too dark for lots of viewers to see what was happening on-screen, despite the big budgets.
    Silo's first two seasons have similarly been pretty dark, with season 2 being especially problematic. But Silo's creator and showrunner, Graham Yost, shared in a new interview that things are changing in season 3. **minor spoilers ahead**
    Yost was asked if that season 2 finale cliffhanger was any indication of changes of scenery coming next season. He confirmed it was. Matt Webb Mitovich writes at TVLine with Yost's reply:
    "So yes, we will be outdoors, and we will be in the world, and there will be sunshine," Yost said, speaking to the parts of Season 3 that will be brighter.
    "That said. We do go back to Silo 17 in Season 3," he noted. "And remember, they've got a very big power issue there, so they don't have a lot of light."
    So there's hope for season 3 yet. The new season is expected to bounce between book 2 in the Silo trilogy (which follows the characters introduced upon season 2's end) and book 3, interweaving the two stories similar to how this past season followed separate silos.
    Thus, good chunks of the show should be much brighter and easier to see than ever. But we'll still be due for some dark Silo 17 scenes too.
    Did you have trouble seeing Silo season 2 because it was too dark? Let us know in the comments.