EA will use Flexion to bring its mobile games to third-party app stores (www.gamedeveloper.com; Justin Carter, Contributing Editor; February 25, 2025)

Image via EA.

At a glance: Third-party app stores are slowly gaining more attention, and EA is one of the biggest publishers to recognize them.

EA is partnering with marketing company Flexion to bring its mobile games to third-party app stores in the near future. Under this deal, select titles like EA Sports FC Mobile and The Sims Mobile will come to the Amazon Appstore, Samsung Galaxy Store, Xiaomi GetApps, and ONE Store. Flexion's self-described distribution services let it "reach new players from alternative stores, significantly reducing the upfront cost or work from its partners to boost revenue and activate new audiences."

In its third-quarter financials for 2024-2025, EA reported a "double digit increase" in new players and engagement year-over-year for EA Sports FC Mobile specifically. However, the overall mobile division made $276 million in net revenue, down 7 percent year-over-year. Releasing its games on more platforms may help close that gap or raise revenue during the remaining fiscal year, or for 2025-2026.

"We're delighted EA has chosen to work with us and can't wait to launch their hit mobile games," wrote CEO Jens Lauritzson. "Developers make better margins, higher revenue and reach bigger audiences with broader market channels, like the alternative app stores. Using Flexion's expert third-party services to do this reduces upfront financial and manpower costs and therefore risk."

The rise of alternative app stores

Within the past year, storefronts beyond Apple's App Store and Google Play have been getting more recognition in mobile markets. In 2024, developer Kabam Entertainment teamed with Flexion to take Marvel: Contest of Champions to alternative stores, including ONE Store Korea.

A UK bill was passed the same year, aiming to prevent larger tech platforms like Apple and Google from giving preferential treatment to their own products. The bill followed the European Union's Digital Markets Act (DMA), which forced Apple to allow third-party app stores on iOS in the EU.

Flexion will launch EA's first mobile game on third-party stores later this year. In 2022, Game Developer spoke with ZiMad's Vladimir Romanov about what these alternative storefronts can provide developers.
-
8BitDo's Ultimate 2 controller gets an upgrade to next-generation anti-drift sticks (www.theverge.com)

8BitDo has released an upgraded version of its Ultimate controller, now available to preorder through Amazon for $59.99 in purple, black, and white color options. The new 8BitDo Ultimate 2 features a similar asymmetrical stick layout to 8BitDo's original Ultimate controller that launched in 2022, but adds additional buttons, interactive LED lighting, and tunneling magneto-resistance (TMR) joysticks that are even more durable than Hall effect sticks.

We still don't know if Nintendo will switch to Hall effect joysticks for the Switch 2, but companies like GuliKit have already moved away from them in favor of TMR. The technology has long been used in hard drives to boost storage capacities. For controllers, it allows for joysticks that draw less power, which can improve battery life, while nearly eliminating the risk of joystick drift that plagued the Nintendo Switch and other modern controllers before magnetic Hall effect technology was adopted.

The 8BitDo Ultimate 2 features the RGB Fire Ring LED joystick effects first introduced on its smaller wired Ultimate C controller. Image: 8BitDo

8BitDo is also bringing over the RGB Fire Ring lighting effects first introduced on its smaller wired Ultimate C Xbox controller. As the branding implies, both joysticks feature a ring of color-changing LEDs around their base, with several lighting modes that react to button presses (including the triggers) or the direction the joysticks are being pushed.

A newly added switch changes the behavior of the Ultimate 2's triggers between a long or short pull. Image: 8BitDo

The Ultimate 2's triggers still use Hall effect sensors for improved accuracy and reliability, but 8BitDo has introduced a switch that lets you swap their behavior between longer-draw triggers ideal for racing games and short-pull tactile triggers for quicker responses while playing first-person shooters. And like the budget-minded 8BitDo Ultimate 2C, the Ultimate 2 has an extra pair of customizable shoulder buttons on the back.

The controller connects to 8BitDo's Ultimate Software V2, which is also available as a mobile app, allowing buttons to be remapped and the sensitivity of joysticks and triggers to be adjusted. It supports motion controls for games that allow them, and a charging dock is still included.

Connectivity options include Bluetooth, a wired USB-C connection, or a low-lag 2.4GHz wireless connection using an included USB-C dongle. But like the cheaper Ultimate 2C, 8BitDo has only made its new Ultimate 2 controller compatible with PCs running Windows 10 and later or Android devices running Android 9.0 and newer.

Versions of the new Ultimate 2 compatible with the Xbox, iOS, or Nintendo Switch (and presumably the Switch 2) haven't been announced yet, but 8BitDo previously released additional versions of the original Ultimate controller with alternate compatibility.
-
Technicolor is winding down operations (www.theverge.com)

Technicolor Group, the French VFX giant that owns some of Hollywood's most in-demand post-production houses, appears to be on the brink of collapse, putting thousands of jobs at risk.

Variety reports that Technicolor has begun winding down operations after failing to secure a new round of investment necessary to keep the entire international outfit running. In a message sent to employees on Monday, Technicolor CEO Caroline Parot claimed that COVID-19-era setbacks and the 2023 writers strike were two sources of the "severe cash flow pressures" the company has been struggling to deal with.

Parot also said Technicolor, which operates in the U.S., Canada, Europe, India, and Australia, must "face reality," and explained that the company has petitioned the Paris Commercial Court to initiate receivership proceedings.

"In each country, the appropriate framework for orderly protection and way forward is currently being put in place to allow, when possible, to remain in business continuity," Parot said. "This decision was not taken lightly; every possible path to preserve our legacy and secure the future of our teams will be thoroughly explored to offer a chance to each of its activity to be pursued with new investors."

Parot's latest message to employees came days after workers in the US received WARN notices informing them of the potential for imminent mass layoffs, and Technicolor's pivot to receivership in France gelled with the company's recent move in the UK to file for administration.

Technicolor Group, which owns Moving Picture Company (Dune, Spider-Man: No Way Home), The Mill (Detective Pikachu, Severance), Mikros Animation (Teenage Mutant Ninja Turtles: Mutant Mayhem, Orion and the Dark), and Technicolor Games (Mass Effect: Legendary Edition), is no stranger to financial woes. The company was spun off from Vantiva SA (formerly known as Technicolor SA) in 2020 after the latter filed for Chapter 15 bankruptcy and underwent a large-scale restructuring. Parot also pointed to Technicolor's separation from Vantiva as a factor that contributed to its current "difficult operational situation."
-
Convergence Releases Proxy Lite: A Mini, Open-Weights Version of Proxy Assistant Performing Pretty Well on UI Navigation Tasks (www.marktechpost.com)

In today's digital landscape, automating interactions with web content remains a nuanced challenge. Many existing solutions are resource-intensive and tailored for narrowly defined tasks, which limits their broader applicability. Developers often face the dual challenge of balancing computational efficiency with the need for a model that can generalize well across diverse websites. Traditional systems, heavily reliant on prompt prediction, often lack the reflective reasoning required for the unpredictable nature of web environments. Additionally, proprietary models typically restrict access to detailed inner workings, making it difficult for researchers and practitioners in the open-source community to build on state-of-the-art methods. These persistent issues underline the importance of developing an automation tool that is both efficient and accessible.

Convergence has introduced Proxy Lite: a mini, open-weights version of its well-regarded Proxy assistant. This 3B-parameter Vision-Language Model is designed to extend sophisticated web automation capabilities to the open-source community. Rather than promising extraordinary feats, Proxy Lite aims to offer a balanced approach that marries efficiency with reliability. Its architecture builds on a solid foundation, allowing it to perform a variety of web-based tasks without imposing heavy computational demands.

What makes Proxy Lite notable is its transparent design and open-weights approach. This encourages the community to explore, modify, and improve upon its framework. With an integrated system for Vision-Language Model (VLM) and browser interactions, Proxy Lite allows for nuanced control over browser tasks. The model's configuration supports practical applications ranging from routine data extraction to more complex navigational tasks, all while keeping resource usage in check.

Technical Aspects and Their Benefits

At its core, Proxy Lite leverages a 3B-parameter model built on the Qwen2.5-VL-3B-Instruct foundation. This choice reflects a commitment to balancing performance with efficiency. The model employs a three-phase process to generate responses:

Observation: The model first examines the current state of the web page, confirming, for instance, that an overlay or privacy banner has been dismissed.

Thinking: It then methodically determines the next course of action, weighing the various possibilities based on the context.

Tool Call: Finally, it issues a precise command to execute the selected action within the browser.

This structured approach not only improves task reliability but also facilitates the model's ability to generalize across different types of web interactions. By mirroring human-like reasoning processes, Proxy Lite manages to strike a balance between simplicity and sophistication. Moreover, its design supports straightforward integration into both command-line interfaces and Streamlit applications, making deployment accessible even for those with modest technical resources.
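To make that loop concrete, here is a minimal sketch of how an observe-think-act cycle like this might be driven. Everything in it, the `vlm` and `browser` interfaces, the JSON action schema, and the function names, is an illustrative assumption rather than Convergence's actual API:

```python
# Hypothetical sketch of an observe -> think -> act loop for a web agent.
# The `vlm` and `browser` objects and the JSON action schema are assumed
# interfaces for illustration, not Convergence's actual API.
import json

def run_episode(vlm, browser, task: str, max_steps: int = 20):
    """Drive a browser with a VLM, one three-phase response per step."""
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        # Phase 1 (Observation): show the model the current page state.
        history.append({
            "role": "user",
            "content": f"Current page:\n{browser.screenshot_and_dom()}",
        })
        # Phases 2-3 (Thinking + Tool Call): the model returns its reasoning
        # and one action, e.g. {"thinking": "...", "tool": "click",
        # "args": {"selector": "#accept-cookies"}}.
        reply = json.loads(vlm.generate(history))
        history.append({"role": "assistant", "content": json.dumps(reply)})
        if reply["tool"] == "done":          # task finished
            return reply["args"].get("result")
        browser.execute(reply["tool"], reply["args"])  # act in the browser
    return None  # step budget exhausted
```

In Proxy Lite's case, the observation and thinking phases would appear as text alongside the tool call; the sketch collapses them into one JSON object purely to keep the control flow visible.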
Performance Insights and Practical Evaluations

Proxy Lite has been carefully evaluated using the WebVoyager benchmark, a comprehensive set of tasks designed to test web automation capabilities. The model achieved an overall score of 72.4%, a strong performance indicator given its open-weights nature. Detailed performance statistics across various websites reveal its thoughtful design:

Allrecipes: Achieving an 87.8% success rate with an average of 10.3 message exchanges, it demonstrates effectiveness in content-rich environments.

Amazon: A 70.0% success rate here highlights the model's ability to navigate more complex, dynamic e-commerce platforms.

Notable high-profile sites: With success rates in the low 80s on platforms such as Apple and GitHub, Proxy Lite consistently shows reliable behavior on diverse sites.

Google services: While some areas, such as Google Flights, yield lower success metrics, the overall performance remains competitive considering the model's scope.

These findings reflect a balanced performance, with Proxy Lite efficiently managing tasks without the overhead typically associated with larger, proprietary models. The comprehensive evaluation not only underscores its current utility but also points to potential enhancements through community-driven refinements.

Conclusion

Proxy Lite emerges as a thoughtfully designed tool in the field of web automation. By addressing key challenges, such as resource constraints, generalization, and transparency, it offers a practical solution for automating routine online tasks. Its open-weights approach and modular design invite collaboration and ongoing development, providing a valuable resource for both academic research and commercial projects.

Check out the technical details and model. All credit for this research goes to the researchers of this project.
-
TAI #141: Claude 3.7 Sonnet; Software Dev Focus in Anthropic's First Thinking Model (towardsai.net)

Author(s): Towards AI Editorial Team. Originally published on Towards AI.

What happened this week in AI by Louie

Anthropic's Claude 3.7 Sonnet reasoning model stole the show this week. This is partly due to how quickly you can test and see the model's coding talent with native code rendering in Claude's Artifacts feature. This was also a positive week for open-source reasoning LLMs, with Alibaba's QwQ-Max-Preview now available and set for an imminent open-weights release. Meanwhile, Prime Intellect released SYNTHETIC-1, the largest open reasoning dataset yet, comprising 2 million verified math, coding, and science reasoning traces from DeepSeek-R1, alongside a 7B model fine-tuned on this data. This dataset will be valuable for fine-tuning reasoning models and customizing them to specific domains. Meanwhile, OpenAI disclosed that ChatGPT has hit 400 million weekly active users, which we calculate now covers 7.2% of global internet users!

Claude 3.7 Sonnet's headline feature is its extended thinking mode, where the model now explicitly shows multi-step reasoning before finalizing answers. Anthropic noted that it focused its reinforcement learning training on real-world code problems rather than math problems and competition code (a slight dig at OpenAI's o3 Codeforces focus). This focus shows in very impressive SWE-bench agentic coding capabilities, while math benchmarks lag other leading reasoning models. Claude 3.7 Sonnet scores 62.3% on SWE-bench without thinking mode (70.3% with a scaffold on a subset of problems), significantly ahead of OpenAI o1 at 48.9% and o3-mini high at 49.3%, and in the ballpark of OpenAI o3's reported 71.7% score. On AIME 2024 math problems, Claude 3.7 Sonnet scores 61.3% pass@1 (80.0% with parallel scaling), a big jump from Claude 3.5 Sonnet (New)'s 16.0%, but behind Grok-3's 83.9% (93.3% with parallel scaling) and OpenAI o3's 96.7% with parallel scaling.

Looking at GPQA Diamond, Claude 3.7 Sonnet with extended thinking achieves 78.2% pass@1 (or 84.8% with up to 64k tokens, including parallel scaling of multiple samples). This outperforms OpenAI o1 at 75.7% (78.3% with parallel scaling) and OpenAI o3-mini at 79.7%. However, Grok-3's thinking mode wins here with 80.2% pass@1 (84.6% with parallel scaling), and OpenAI's unreleased o3 still leads at 87.7% with parallel scaling. In non-thinking modes, Claude 3.7 Sonnet scores 68.0%, below Grok-3's non-thinking score of 75% but outperforming Gemini 2.0 Pro at 65% and OpenAI's GPT-4o at 51%.

Claude 3.7 Sonnet keeps the same price as 3.5 even in thinking mode: $3 per million input tokens and $15 per million output tokens. This is a positive surprise, given that both OpenAI and DeepSeek charge a large premium for their reasoning models, which we think is justified by higher compute and memory usage and the lower inference batch sizes possible when the average output generation length is longer (even with the same architecture). Via the API, Claude 3.7 Sonnet users also get extra flexibility to control the budget for thinking: you can tell Claude to think for no more than N tokens (up to 128k), allowing you to decide your own trade-offs between latency, cost, and capability.
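A minimal sketch of what that budget control might look like with the Anthropic Python SDK follows; the model string, token numbers, and prompt are assumptions for illustration (check Anthropic's documentation for current values), and the thinking budget has to sit below max_tokens, since thinking tokens count against the overall output cap:

```python
# Sketch: capping Claude 3.7 Sonnet's extended thinking via the API.
# The model string, token numbers, and prompt are illustrative; check
# Anthropic's documentation for current values.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",
    max_tokens=16000,  # overall output cap; must exceed the thinking budget
    thinking={
        "type": "enabled",
        "budget_tokens": 8000,  # "think for no more than N tokens"
    },
    messages=[{"role": "user", "content": "How many primes are below 1000?"}],
)

# The reply interleaves visible "thinking" blocks with final "text" blocks.
for block in response.content:
    if block.type == "thinking":
        print("[thinking]", block.thinking[:200], "...")
    elif block.type == "text":
        print(block.text)
```

Dialing budget_tokens down trades capability for latency and cost, which is exactly the trade-off the flat per-token pricing leaves in your hands.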
Anthropic recently revealed that 37% of Claude chatbot queries are software-related, and this doesn't even count the heavy use of Claude APIs for coding copilots and agents such as Cursor. Perhaps Anthropic is beginning to see this as a point of differentiation against OpenAI, Google Gemini, and xAI; in any case, beyond Claude 3.7's focused improvement on software-related benchmarks, Anthropic also released its own coding agent. Claude Code is now in beta as a research preview. Designed to operate directly in the terminal, it integrates with developers' workflows without additional servers or complex setups. Claude Code can edit files, fix bugs, execute tests, search through git history, resolve merge conflicts, and create commits, all through natural language commands.

Why should you care?

As we noted last week, reasoning models with test-time compute scaling have become the new battleground for SOTA in LLMs. Now, Anthropic has also joined the party (following OpenAI, DeepSeek, Alibaba, Google DeepMind, and xAI). Potentially in part due to this competition, Anthropic has also taken a more user-friendly approach, both in its subscription chatbot, where 3.7's thinking mode can simply be toggled (similar to Grok-3), and in its API, where it provides the most flexibility yet for directly controlling thinking token usage and offers flat pricing versus the base model. It will take some practice to test how well this thinking token control works and how much benefit you really get from pushing to the highest settings.

On our side, early testing of 3.7 shows some impressive capabilities, particularly in front-end code with mini apps and websites. We think this will only accelerate the momentum in LLM-native coding, or what Andrej Karpathy calls "vibes coding," where even experienced developers can quickly learn new software skills and languages top-down, starting with natural language and an LLM-generated project. More on this soon, with the imminent release of our LLM-native Python course for coding novices!

Hottest News

1. Anthropic Introduced Claude 3.7 Sonnet, Its Hybrid Reasoning Model

Anthropic announced Claude 3.7 Sonnet, its most intelligent and first hybrid reasoning model. It can produce near-instant responses or extended, step-by-step thinking that is visible to the user. API users also have fine-grained control over how long the model can think. Anthropic also introduced a limited research preview of Claude Code, a command-line tool for agentic coding.

2. Prime Intellect Released SYNTHETIC-1, the Largest Open Reasoning Dataset

Prime Intellect has introduced SYNTHETIC-1, an open-source dataset designed to provide verified reasoning traces in math, coding, and science. Built with the support of DeepSeek-R1, this dataset consists of 1.4 million structured tasks and verifiers. The objective of SYNTHETIC-1 is to improve reasoning models by supplying them with well-organized, reliable data, addressing the shortcomings of existing resources.

3. Arc Institute Developed Evo 2, the Largest AI Model for Biology

Arc Institute developed Evo 2, trained on the DNA of over 100,000 species across the entire tree of life. The model can write whole chromosomes and small genomes from scratch. It can also make sense of existing DNA, including hard-to-interpret non-coding gene variants linked to disease.

4. Alibaba Unveils QwQ-Max-Preview

Alibaba launched QwQ-Max-Preview, a new reasoning model in the Qwen family of AI models. The model is built on Qwen 2.5 Max and specializes in mathematics and coding tasks. The model is in its preview stage, and the company is expected to announce its full version soon.
5. OpenAI Introduced the SWE-Lancer Benchmark

SWE-Lancer introduces a benchmark with over 1,400 freelance software engineering tasks from Upwork, collectively valued at $1 million. It evaluates model performance on tasks ranging from $50 bug fixes to $32,000 feature implementations, through tests verified by engineers. Frontier models struggle with most tasks.

6. Perplexity AI Open-Sources a Post-Trained DeepSeek-R1 With Censorship Removed

Perplexity AI introduced the R1 1776 model, a post-trained DeepSeek-R1 designed to eliminate Chinese Communist Party censorship while maintaining high reasoning abilities. Rigorous multilingual evaluations using human annotators and LLM judges confirmed that decensoring did not impact the model's core reasoning capabilities. The model performed on par with the base R1 model across various sensitive topics.

Five 5-minute reads/videos to keep you learning

1. Andrej Karpathy's Early Access Review of Grok 3

In this article, Andrej Karpathy evaluated Grok 3, noting its strong performance in thinking tasks, comparable to OpenAI's models. Despite issues with humor and ethical sensitivity, it surpassed DeepSeek-R1 and Gemini 2.0 in some areas.

2. DeepSeek vs. ChatGPT: A Detailed Architectural and Functional Breakdown

This article compares two leading LLMs, ChatGPT and DeepSeek, focusing on architectural design, training methodologies, performance, and limitations. While ChatGPT is more generic and can handle a broader variety of tasks, DeepSeek is a more feasible alternative for tightly focused applications.

3. Build Your LLM Engineer Portfolio (Part 2): A 3-Month Roadmap

This step-by-step guide is designed to help you build, refine, and showcase a RAG portfolio to kickstart your career. It provides a comprehensive plan for your three-month journey: essential preparation steps to establish a solid foundation for your projects, seven impactful projects that will enhance your expertise and help you stand out, and strategies for deploying and presenting your portfolio for maximum exposure.

4. 1 Billion Classifications

This blog explains how to calculate cost and latency for large-scale classification and embedding. It analyzes different model architectures, benchmarks costs across hardware choices, and provides a clear framework for optimizing your own setup.

5. Insights on Crosscoder Model Diffing

Crosscoder-based model diffing is a promising method for isolating differences between two models with a single SAE training run. In this note, Anthropic discusses a few unexpected observations from applying this technique to real models. This will be helpful for researchers working actively in this space.

Repositories & Tools

AIBrix is an open-source initiative that provides building blocks to construct scalable GenAI inference infrastructure.

FlashMLA is an MLA decoding kernel for Hopper GPUs, optimized for variable-length sequence serving.

Mastra is a TypeScript framework that helps you build AI applications and features.

Top Papers of The Week

1. Accelerating Scientific Breakthroughs With an AI Co-Scientist

This paper introduces AI co-scientist, a multi-agent AI system built with Gemini 2.0. The agent acts as a virtual scientific collaborator to help scientists generate novel hypotheses and research proposals. It can potentially help accelerate the clock speed of scientific and biomedical discoveries.
2. Qwen2.5-VL Technical Report

Qwen2.5-VL advances visual recognition with enhanced object localization, robust document parsing, and long-video comprehension. It accurately extracts structured data and analyzes charts. Featuring dynamic resolution processing and Window Attention, Qwen2.5-VL reduces computational overhead.

3. MoBA: Mixture of Block Attention for Long-Context LLMs

This paper introduces Mixture of Block Attention (MoBA), an innovative approach that applies the principles of Mixture of Experts (MoE) to the attention mechanism. The architecture demonstrates superior performance on long-context tasks while seamlessly transitioning between full and sparse attention, enhancing efficiency without compromising performance.

4. Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention

NSA, a natively trainable sparse attention mechanism, improves efficiency in long-context modeling by integrating algorithmic innovations with hardware-aligned optimizations. It achieves substantial speedups and maintains model performance across benchmarks. NSA's dynamic hierarchical strategy combines coarse-grained token compression with fine-grained selection, outperforming Full Attention on 64k-length sequences during decoding, forward propagation, and backward propagation.

5. CodeI/O: Condensing Reasoning Patterns via Code Input-Output Prediction

CodeI/O enhances reasoning in language models by transforming code into input-output prediction formats, exposing models to reasoning patterns like logic flow planning and decision tree traversal. This method improves performance across multiple reasoning tasks.

Quick Links

1. OpenAI's COO recently shared some key updates in a post on X. ChatGPT has surpassed 400 million weekly active users. He also hinted at the upcoming GPT-4.5 and GPT-5 releases for both the chat interface and the API. Additionally, free users will have unlimited access to GPT-5 and enhanced agent capabilities.

2. Meta announced LlamaCon, its first generative AI developer conference, focusing on open-source AI developments. Despite competitive pressure from DeepSeek, Meta plans to release new Llama models with reasoning and multimodal capabilities.

3. Together AI secured $305 million in Series B funding, led by General Catalyst and joined by notable investors including NVIDIA and Salesforce Ventures. This investment accelerates the company's AI Acceleration Cloud for open-source and enterprise AI application development.

Who's Hiring in AI

Gen AI Developer @Syncreon Consulting (Toronto, Canada)
Junior Data & AI Engineer, Emirati @Ghobash Group (Dubai, UAE)
AI Software Engineer (Mid/Senior) @SmartDev (Hanoi, Vietnam)
Global Data Office Intern, Summer 2025 @Visa (CA, USA)
PDM Engineering Data Administrator @Bosch Group (Burnsville, MN, USA)
Software Engineer NLU @Cognigy (Remote)

Published via Towards AI
-
Words Matter: Are Language Barriers Driving Quiet Failures in AI? (towardsai.net)

Author(s): Kris Naleszkiewicz. Originally published on Towards AI.

The AI revolution is upon us, transforming how we work, live, and interact with the world. Yup. We know. We've all heard. The media loves to cover spectacular successes and failures.

But what about the quiet failures? The stalled projects. The initiatives that never quite get off the ground. Not because the technology doesn't work, but because something more human gets in the way.

Is this a familiar situation? You're discussing an AI solution with a client. They're excited. You're excited. The initial meetings go great. "Let's bring in more stakeholders!" they say. Soon, you've got the infrastructure team involved, five additional use cases under consideration, and another business executive at the table. The energy is palpable. Everyone sees the potential.

And then the fighting starts. Or maybe not fighting exactly, but something more subtle. Resistance. Friction. Suddenly, a project that had everyone thrilled hits a wall. Why? Everyone was excited. Everyone saw the value. What changed?

The answer might surprise you: it's language. Not Python or Java or SQL, but the everyday language we use to talk about AI. We have no shortage of technical challenges, but the most unexpected roadblocks to AI adoption often stem from a fundamental shift in how we need to work together. AI isn't just the new electricity of our age; it's forcing unprecedented collaboration between groups that previously operated in comfortable silos.

When straightforward terms like "performance," "explainability," and "risk" carry such different meanings across teams, it's no wonder some AI projects struggle to gain traction. These concepts form the foundation for discussing, evaluating, and implementing AI systems, but their meanings shift depending on who's using them. This linguistic flexibility isn't just a communication challenge; it's a window into deeper questions about professional identity, authority, and the changing nature of expertise in an AI-augmented workplace. As we introduce increasingly complex technical terminology around AI, these fundamental translation gaps only widen, creating invisible barriers that technical solutions alone cannot address.

Setting the Stage

We have all heard "AI is the new electricity," but what that comparison misses is that when electricity transformed manufacturing, it didn't just change how things were powered; it fundamentally restructured how people worked together. The same thing is happening with AI, but more broadly. Electricity mainly required engineers and operators to collaborate. AI? It's forcing everyone to work together in unprecedented ways.

AI Enthusiasm Slipping Away. Image generated in DALL-E by author.

Data scientists need domain experts to understand the problems they're solving. Business leaders need technical teams to understand the possibilities and limitations. Front-line workers need to collaborate with both groups to ensure solutions work in the real world. And here's the kicker: none of these groups are particularly good at talking to each other. Not because they don't want to, but because they've never had to, at least not at this depth.

When Silos Crumble

Think about traditional technology implementations. You had clear handoffs: business teams defined requirements, technical teams built solutions, and users learned to adapt.
Everyone stayed in their lane and spoke their own language, and things mostly worked out.

AI doesn't play that game. When data scientists build a model, they need to understand the business context, not just surface-level requirements. When business teams deploy AI solutions, they need to understand more than just features and benefits; they need to grasp concepts like model drift and edge cases. And users? They're not just learning new interfaces; they're learning to collaborate with AI systems in ways that fundamentally change how they work. This isn't just cross-functional collaboration; it's forced interdependence. And it's causing friction in unexpected places.

LendAssist: An Illustrative Example

Let's introduce LendAssist, an LLM-based mortgage lending assistant that we will use to illustrate this new reality. On paper, it's straightforward: an AI system designed to streamline mortgage lending decisions, reduce processing time, and improve accuracy. LendAssist's struggles highlight a critical challenge in AI adoption: seemingly straightforward terms can have radically different meanings for different stakeholders, leading to miscommunication and misunderstanding.

What constitutes "performance" for the data scientist building the product might be completely different for the loan officer working with the product or the customer interacting with it. Similarly, "explainability" can have varying levels of depth and complexity depending on the audience. And "risk" can encompass a variety of issues and concerns, from technical failures to ethical dilemmas and job displacement. In the following sections, we'll explore these three key areas where language barriers arise.

The Expertise Paradox in AI Adoption

Before we dive into specific challenges with LendAssist, let's discuss the expertise paradox, a fundamental tension that underlies them all. When LendAssist was first introduced, something unexpected happened. The most resistance didn't come from technophobes or change-resistant employees; it came from the experienced loan officers and underwriters. The experts whose knowledge the system was designed to augment became its biggest skeptics.

Why? The rapid rise of AI presents a unique challenge for experts in traditional fields. It's like suddenly finding yourself in a world where the rules of the game have changed, and your hard-earned expertise might not translate as seamlessly as you'd hoped. This expertise paradox is a psychological and organizational hurdle that often gets overlooked in the excitement of AI adoption: traditional tech leaders feel threatened by the need to start over as learners; subject matter experts struggle with AI systems that challenge their domain expertise; and there is tension between deep knowledge of traditional systems and the need to adapt to AI-driven approaches.

Organizations often face a delicate balancing act. They need to leverage their existing experts' valuable experience while embracing AI's transformative potential. This creates tension and uncertainty as teams grapple with integrating traditional knowledge with AI capabilities. Through my work with AI implementations, I've noticed a consistent pattern in how experts respond to this challenge. It typically manifests as three competing pressures I've started mapping out to help teams understand what's happening.

Maintaining Credibility: "I still know what I'm doing"

Experts feel intense pressure to demonstrate that their knowledge remains relevant and valuable.
I've watched seasoned loan officers, for instance, struggle to show how their years of experience still matter when an AI system seems to make decisions in milliseconds.

Embracing Change: "I need to adapt to AI"

At the same time, these experts recognize they need to evolve. This isn't just about learning new tools; it's about fundamentally rethinking how they apply their expertise. I've seen loan officers transform from decision-makers to decision interpreters, but this shift rarely comes easily.

Preserving Value: "My experience matters"

Perhaps most importantly, experts need to find ways to show how their experience enhances AI capabilities rather than being replaced by them. The most successful transitions I've observed happen when experts can clearly see how their knowledge makes the AI better, not obsolete.

The key to successful AI adoption is finding a balance between these three corners. Experts need to acknowledge the limitations of their existing knowledge, embrace the learning process, and find ways to leverage AI to enhance their expertise rather than viewing it as a threat. Despite these challenges, there are inspiring examples of experts successfully navigating the expertise paradox. These individuals embrace AI as a tool to augment their expertise and guide others in adapting to AI-driven approaches.

GenAI Rollouts by Maturity. (McKinsey, 2025)

This could explain a puzzling trend in AI adoption. A McKinsey survey completed in November 2024, published in January 2025, and discussed in "Superagency in the Workplace: Empowering People to Unlock AI's Full Potential" found that while one-quarter of executives have defined a GenAI roadmap, just over half remain stuck in the "draft being refined" stage. The technical capabilities exist, but organizations struggle with the human side of implementation. As technology continues evolving at breakneck speed, roadmaps must be built to evolve, but we should recognize that many of the barriers aren't technical at all. Invisible psychological and organizational traps repeatedly derail even the most promising AI initiatives.

Performance: A Multifaceted Challenge

The data science team is ecstatic. LendAssist's new fraud detection model boasts a 98% accuracy rate in their meticulously crafted testing environment. Champagne corks pop, high-fives are exchanged, and LinkedIn posts are drafted. But the celebration is short-lived. The operations team pushes back, overwhelmed by a 30% increase in false positives that clog their workflows. Meanwhile, the IT infrastructure team grapples with the model's insatiable appetite for computing resources. And the business leaders? They're left wondering why those key performance indicators (KPIs) haven't budged an inch.

Welcome to the performance paradox of AI adoption, where impressive technical achievements often clash with the messy realities of real-world implementation. Performance in AI is a chameleon, adapting its meaning depending on who's using the word. To truly understand this multifaceted challenge, we need to dissect performance through the lens of different stakeholders:

Business performance: The language of executives and shareholders, focused on the bottom line. Does LendAssist increase revenue? Does it reduce costs? Does it improve customer satisfaction and retention? Does it boost market share?

Technical performance: The domain of data scientists and engineers, focused on metrics and algorithms. How accurate is LendAssist's risk assessment model? What are its precision and recall?
How does it compare to traditional credit scoring methods on AUC and F1-score?

Operational performance: The realm of IT and operations teams, concerned with utilization, efficiency, and scalability. How fast does LendAssist process loan applications? How much computing power does it consume? Can it handle peak loads without crashing? How easily does it integrate with existing systems?

Human performance: The often-overlooked dimension, focused on the impact of AI on human workers. Does LendAssist make loan officers more productive? Does it reduce errors and improve decision-making? Does it enhance job satisfaction, or create anxiety and resistance?

But performance challenges are just the beginning. When different groups can't even agree on what good performance means, how do they explain their decisions to each other or, more importantly, to customers? This brings us to an even thornier challenge: the crisis of explainability.

Explainability: The Black Box Dilemma

A loan officer sits across from a client who's just been denied a mortgage by LendAssist. The client, understandably bewildered, asks, "Why?" The loan officer, with 20 years of experience explaining such decisions, finds herself staring blankly at the screen, unable to provide a clear answer. This isn't just about a declined mortgage; it's about a fundamental shift in professional authority, a moment where human expertise collides with the opacity of AI.

Explainable AI (XAI) is no longer a luxury; it's required to maintain trust, ensure responsible AI development, and navigate the evolving landscape of professional expertise. However, explainability itself has layers of understanding for different stakeholders, too.

Technical explainability. Challenge: "Our model shows high feature importance for these variables..." This might satisfy data scientists, but it leaves business users and clients in the dark. How does LendAssist's technical team explain the model's risk assessment to the data science team in a way that is technically sound and understandable?

Process explainability. Challenge: "But how does this translate to our existing underwriting workflow?" Integrating AI into established processes requires explaining how it interacts with human decision-making. How does the data science team explain LendAssist's integration into the loan approval process to the loan officers and underwriters, clarifying how it augments their existing expertise?

Decision explainability. Challenge: "How do we explain this to the customer?" Building trust with clients requires clear, understandable explanations of AI-driven decisions. How do loan officers explain LendAssist's loan denial decision to the client in a way that is transparent and empathetic, without resorting to technical jargon?

Impact explainability. Challenge: "What does this mean for our business and regulatory compliance?" Understanding the broader implications of AI decisions is crucial for responsible adoption. How do executives explain LendAssist's impact on loan origination volume, risk mitigation, and compliance to stakeholders and regulators in a way that is informative and persuasive?

Explainability isn't just about understanding; it's about authority. When professionals can't explain why decisions are made in their own domain, they lose not just control but their role as knowledge authorities.
This can lead to resistance, fear of obsolescence, and difficulty integrating AI into existing workflows.

Risk: Navigating Uncertainty

The CTO champions LendAssist as the future of lending, painting a picture of streamlined workflows and data-driven decisions. The compliance team, however, sees looming regulatory disasters, haunted by visions of biased algorithms and data breaches. Middle managers envision organizational chaos, with confused employees and disrupted workflows. Loan officers, on the front lines of client interaction, fear professional extinction, replaced by an emotionless algorithm that spits out loan approvals and denials with cold, hard efficiency. The same technology, but radically different risk landscapes. These surface-level conflicts mask a deeper pattern that reveals how organizations and individuals process the fundamental changes AI brings.

The Hidden Psychology of Risk When Talking About AI

We can break down this complex risk perception into four distinct levels:

Level 1: "What if it doesn't work?" (Technical risk). This is the most immediate and obvious concern. Will LendAssist's AI models be accurate and reliable? Will the system be secure against cyberattacks? Will it comply with relevant regulations? But beneath these technical anxieties lies a deeper fear: losing control over familiar processes. When compliance officers obsess over LendAssist's error rates, they are often expressing anxiety about shifting from rule-based to probability-based decision-making. They're grappling with the uncertainty inherent in AI systems, where outcomes aren't always predictable or easily explained.

Level 2: "What if it works too well?" (Operational risk). This is where things get interesting. As AI proves its capabilities, concerns shift from technical failures to operational disruptions. How will LendAssist impact the daily work of loan officers and underwriters? Will it disrupt existing processes and create confusion? Will it lead to job losses? But the real fear here is more personal: will AI erode the value of human skills and experience? When loan officers worry about LendAssist processing applications too quickly, they're asking, "Will speed make my experience irrelevant?" They're grappling with the potential for AI to diminish their role and authority in the lending process.

Level 3: "What if it works differently than we expect?" (Strategic risk). This level delves into the broader implications of AI adoption. Will LendAssist have unintended consequences? Will it disrupt the competitive landscape? Will it create new ethical dilemmas? But the underlying fear is about professional identity. When managers resist LendAssist's recommendations, they are often protecting their identity as decision-makers more than questioning the AI's judgment. They're grappling with the potential for AI to redefine their roles and responsibilities, challenging their authority and expertise.

Level 4: "What if it changes who we are?" (Identity risk). This is the deepest and most existential level of risk perception. Will LendAssist fundamentally change how we work and interact with each other? Will it alter our understanding of expertise and professional identity? Will it reshape our values and beliefs about the role of technology in our lives? This is where the fear of obsolescence truly takes hold. When senior underwriters label LendAssist "too risky," they are expressing fear about transitioning from decision-makers to decision-validators.
They're grappling with the potential for AI to transform their sense of self-worth and professional purpose.

The way technical and identity risks become intertwined makes AI risk assessment particularly challenging. When a loan officer says, "LendAssist's risk models aren't reliable enough," they might be expressing fear of losing their ability to make judgment calls, or anxiety about their role in the organization changing. The more organizations focus on addressing technical risks, the more they might inadvertently amplify identity risks by suggesting that human judgment is secondary to AI capabilities. As AI systems like LendAssist become more capable, they don't just present technical risks; they force us to reconsider what it means to be an expert in an AI-augmented world.

These layered challenges might seem insurmountable when viewed through a purely technical lens. After all, how do you solve a technical problem when the real issue lies in professional identity? How do you address performance concerns when different stakeholders define success in fundamentally different ways? What I've found is that acknowledging these language barriers is the first crucial step toward overcoming them. When we recognize that resistance to AI adoption often stems from communication gaps rather than technological limitations, we open up new paths forward.

The Path Forward: A Practical Perspective

Once you recognize these language barriers, they become surprisingly manageable. We're not just dealing with technical challenges; we're dealing with translation challenges. We need to become multilingual in the different ways our stakeholders talk about and understand AI. The organizations I've seen succeed with AI adoption aren't just technically sophisticated; they're linguistically sophisticated. They create a shared vocabulary that respects different perspectives. They recognize expertise transitions as a core part of implementation and build bridges between technical and professional languages. They value communication skills as much as technical skills.

Conclusion

This isn't just another factor to consider in AI adoption; it's often the factor determining "go or no go" decisions. The good news? While technical challenges typically require significant resources, language barriers can be addressed through awareness and intentional communication. We're all figuring this out together, but recognizing how language shapes AI adoption has been one of the most potent insights. It's changing how I approach projects, how I work with stakeholders, and, most importantly, how I help organizations navigate the fundamental changes AI brings to professional expertise. The choice isn't between technical excellence and human understanding; it's about building bridges between them. And sometimes, those bridges start with something as simple as recognizing that we might mean different things when we say "performance," "explainability," or "risk."

Further reading and citations

Why AI Projects Fail and How They Can Succeed (www.rand.org): "By some estimates, more than 80 percent of AI projects fail. That's twice the rate of failure of information technology..."

Keep Your AI Projects on Track (hbr.org): "AI, and especially its newest star, generative AI, is today a central theme in corporate boardrooms, leadership..."

Superagency in the Workplace: Empowering People to Unlock AI's Full Potential (www.mckinsey.com): "Almost all companies invest in AI, but just 1% believe they are at maturity. Our new report looks at how AI is being..."
Published via Towards AI
-
Warner Bros. Cancels Wonder Woman Game, Closes Three Studios (www.ign.com)

Warner Bros. is canceling its planned Wonder Woman game and shutting down three studios: Monolith Productions, Player First Games, and WB San Diego, according to Bloomberg reporter Jason Schreier.

Schreier broke the news on Bluesky today, followed by the release of a full report on Bloomberg. Shortly after Schreier's post, WB confirmed the shutdowns to Kotaku in a statement:

"We have had to make some very difficult decisions to structure our development studios and investments around building the best games possible with our key franchises: Harry Potter, Mortal Kombat, DC and Game of Thrones. After careful consideration, we are closing three of our development studios: Monolith Productions, Player First Games and Warner Bros. Games San Diego. This is a strategic change in direction and not a reflection of these teams or the talent that consists within them.

"The development of Monolith's Wonder Woman videogame will not move forward. Our hope was to give players and fans the highest quality experience possible for the iconic character, and unfortunately this is no longer possible within our strategic priorities. This is another tough decision, as we recognize Monolith's storied history of delivering epic fan experiences through amazing games. We greatly admire the passion of the three teams and thank every employee for their contributions. As difficult as today is, we remain focused on and excited about getting back to producing high-quality games for our passionate fans, developed by our world class studios, and getting our Games business back to profitability and growth in 2025 and beyond."

Earlier this year, another Bloomberg report suggested Wonder Woman was in trouble after rebooting and switching directors in early 2024. This came amid larger struggles at the company's gaming division, including layoffs at Rocksteady, the lukewarm reception to Suicide Squad: Kill the Justice League, and the shutdown of MultiVersus. And even more recently, WB Games has appeared to undergo a restructuring of sorts, as long-time games head David Haddad announced his departure from the company and rumors circulated that the division might be sold off. Specifically, this move represents a blow to WB's DC universe-connected gaming efforts. Notably, just yesterday, James Gunn and Peter Safran said in a presentation that it would be "a couple of years" before the first DCU video game.

With this closure, the games industry loses three incredibly storied studios. Monolith Productions, which had been working on Wonder Woman, was founded in 1994 and acquired by WB in 2004. It's best known for Middle-earth: Shadow of Mordor and its sequel, Shadow of War, the former of which pioneered the lauded Nemesis system that WB successfully patented in 2021.

Player First Games, a newer studio established in 2019, was responsible for MultiVersus. The game was well received critically and saw launch success, but underperformed relative to WB's expectations. WB San Diego, similarly, is a newer studio established in 2019 with a focus on mobile, free-to-play games.

These shutdowns continue a trend, going back roughly three years, of increasing games industry layoffs, project cancellations, and studio closures. In 2023 alone, it's estimated that over 10,000 game developers were laid off.
That number shot up to over 14,000 in 2024. And while 2025 has seen numerous closures as well, the exact number of people impacted is hazier, because fewer companies are reporting their layoffs and shutdowns or the specific numbers affected.

Rebekah Valentine is a senior reporter for IGN. You can find her posting on BlueSky @duckvalentine.bsky.social. Got a story tip? Send it to rvalentine@ign.com.
-
Paradise's Seventh Episode is the Most Upsetting Episode of TV You'll Watch All Year (www.ign.com)

This article contains spoilers for Paradise on Hulu.

Hulu's new hit show Paradise is usually a pulpy good time. Sure, there's drama in the Dan Fogelman joint, and plenty of tears and family angst (per the showrunner's M.O. from This Is Us). There's even some action and intrigue, thanks to Sterling K. Brown's upright Secret Service agent trying to unravel the mystery of who killed the President, played by James Marsden. It's all good fun, at least until the seventh episode, "The Day," which is the most upsetting episode of TV you'll watch this year.

Is that a bold pronouncement two months into 2025? Surely. But while the first six episodes are a good time at the ol' boob tube, Episode 7 will punch you in the gut and bring you to tears for nearly the entire hour. It's an unrelenting Grand Guignol of emotion that advances multiple plot points and sets up the season finale, all while you watch the world end.

And the focus of "The Day," or at least what's so gut-wrenching about it, isn't the disaster itself, which we only glimpse in news footage, phone calls, and broadcasts. Instead, the episode presents what happened inside the White House in near real-time, and it is horrifying.

In the series, we meet Agent Xavier Collins (Brown), who has been tasked as the lead Secret Service agent guarding President Cal Bradford (Marsden). While it initially seems like Bradford is a young ex-president in the mode of JFK, lazing away his days after two terms in office drinking and womanizing, there's actually a lot more going on, which becomes clear after Xavier discovers Bradford's dead body on his bedroom floor.

In fact, up until his death, Bradford was still President of about 25,000 people, what remains of the human race after an extinction-level disaster, who now live in the fake suburban community of Paradise under a mountain in Colorado. Over the course of the season, through flashbacks, we slowly discover what may have led to this disaster, as well as how Paradise was built. Most of it is down to billionaire Samantha Redmond (Julianne Nicholson), who decided to prepare for a coming climate disaster by building this underground community, a little slice of heaven inspired by the dying wish of her youngest son. Again: This Is Us, Dan Fogelman, etc., you get the gist. It's all very soap-opera weepy up until this point.

And yes, this is a show about climate change, something that isn't, as they used to say, advertised on the tin. Redmond encounters an unruly scientist in an earlier episode who warns her that there are about 10 years until a devastating climate-fueled disaster destroys the world: "Not slowly, but all at once."

That's what we get to see happen in Episode 7. Thanks to the melting of the polar ice caps, a volcano explodes in Antarctica, instantly melting a huge chunk of the ice and causing a tsunami hundreds of feet tall to crash over the Earth, destroying everything in its wake. The whole situation certainly seems Hollywood, The Day After Tomorrow, on the surface, and it is. A climate disaster of the kind scientists actually describe (in essence, unlivable levels of heat over several years) wouldn't be nearly as cinematic as what happens in the series.

Part of the neat narrative trick that Fogelman and company have worked into this series is that we already know what is supposed to happen.
In an earlier episode, Cal asks Xavier to walk him through the protocol for the disaster, code-named Versailles, likely a nod to the French royal family's hasty exit from Versailles during the French Revolution. As Xavier talks him through the 20-minute plan, they joke about Die Hard, chat about their families, and Cal strongly hints to Xavier that his wife should stop taking business trips. Unfortunately, Xavier doesn't really know the severity of the situation. So despite Cal's pointed looks trying to indicate how bad things are going to get, Xavier merely demurs that he can never tell his wife what to do, so of course she's going to keep going out of town.

The signs are all there that this is going to be an emotional disaster, as well as a physical one. We also know that in the present, Xavier's wife is dead and he's solo-parenting his two kids in Paradise. We also have an inkling that Cal was drinking himself into a stupor every night before his untimely murder, thanks to whatever happened on the day they left the White House. But again, given how pulpy everything has been up until now, there's no reason to think this episode will be an emotionally harrowing hour that leaves you physically and emotionally drained.

It is, though. And the reason is that everything starts to go wrong immediately, in big and small ways. Instead of the smooth roll-out presented in Cal and Xavier's late-night walk, the sudden tsunami that destroys Australia (we get constant updates on which major cities are wiped off the face of the Earth as the episode continues) sends the White House into a frenzy. We watch as Cal wrestles with lying to the American people, ultimately telling the truth about the disaster in a final broadcast, urging everyone to spend time with the ones they care about before they die. We see that he does not escape cleanly, as the people left behind in the White House realize the President and staff are exiting without them, leaving them to their deaths, at least those who aren't shot down by the Secret Service first.

Of course, Xavier's wife is out of town, and watching the usually unflappable agent fall apart as he tries repeatedly to get through to her on the jammed phone lines, working with the President and Joint Chiefs of Staff to get her a route to one of the planes to Colorado, is heartbreaking, as is the way he keeps avoiding his children's questions about their mother. And the final punch, that Cal always knew she wasn't going to make it, leads to Xavier flipping out on the tarmac in a way that will tear your heart out.

Adding to that is a trickle-down effect where Xavier, who is dealing with the fact that he may not get his wife out in time, is doling out the same lies to the President's long-time secretary, who is not on the to-be-saved list. Once she realizes she won't make it out, all she asks of Xavier is that they save her special-needs son (we don't see him in the episode), leading to a moment right before the President flees where the action pauses on her. "Mr. President," she says, "my sister is here with Edward, they're at the gate. So I'll go get him." She turns to leave and pick up her son, but he was never getting taken along. And the level of betrayal at that moment is more crushing than any tsunami destruction footage.

That footage is brutal, though. The one time we get to see what's happening in the world at large is through a news report from Jakarta, where a reporter is broadcasting from the tallest building in the city.
They should be safe there but, of course, aren't. As the tsunami approaches, the crash of the waves causes a repeated, off-tempo sonic boom that will make you jump in your seat. When the reporter realizes the wave is going to overtake them, the broadcast cuts out. It cuts back to a silent newsroom, and we watch the staff outside the Oval Office, similarly silent and in tears, the full weight of what's happening hitting them for the first time, both literally and figuratively.

In the middle of this, Marsden gives the performance of a lifetime as President Bradford, initially acting strong and choosing a blue tie over a yellow one because it's more calming, and ultimately heartsick over getting to leave when everyone else is being left to die. There are classic drama moments, like when he asks a White House janitor why he's still cleaning the building with everything that's going on, but even these more trope-y pieces of drama work because Marsden's Bradford is so anguished over his role in not saving more people when he could.

Is "The Day" manipulative? Sure. But it's manipulative to prove a point: to show the weight of a climate disaster that is coming in the real world so slowly we think it won't happen in our lifetimes, when it more than likely will. Thanks to the real-time structure, the performances, the direction, and the unrelenting nature of the hour, there's no way you can come out of it anything but grief-stricken about this potential future of the Earth.

In next week's season finale, Paradise should be back to the pulp and fun that characterized the first six episodes. But for one hour, Hulu's sci-fi show makes us live through the last day of the human species. It's shattering and will leave you in despair. Here's hoping, and dreading, that the team will find something else as traumatic and emotionally urgent for the already-announced Season 2. It's a lot. But if you can handle it, it's also a must-watch.
-
Kathleen Kennedy's Legacy Is More Than Just Star Wars
www.denofgeek.com

It's worth pointing out, however, that the mass politicization of online fandom in general, in a post-Twitter, post-Joe Rogan world, has been trending toward performative outrage for years. It is perhaps inevitable these days that Marvel fans who grew up loving Anthony Mackie in Captain America: The Winter Soldier 11 years ago are now helping shape the opinion, within 45 minutes, that him starring in Brave New World is outrageous, actually.

Kennedy walked into this arena with eyes wide open, to run a factory in an era where franchises were not meant to be nurtured and protected but exploited and strip-mined. Still, even within those confines, she produced a few good Star Wars movies and at least one terrific television series in Andor, which took more risks with its IP than almost anything produced under the Marvel umbrella, or among competitors who have similarly tried to make superhero movies or Lord of the Rings shared-universe expansions.

And, again, Kennedy had enjoyed a remarkable career well before that contentious galaxy far, far away.

A California lifer, Kennedy wasn't even 25 when she got her foot in the door as an assistant for John Milius, the director of films like The Wind and the Lion (1975), the guy who thought of giving Quint that iconic speech in Jaws, and the executive producer of Steven Spielberg's 1941. That epic comedy would be one of Spielberg's few missteps, but on the production he noticed Kennedy's ability to pitch great ideas despite nominally being a secretary. He would quickly make her his associate on Raiders of the Lost Ark. By the time the sequel, Indiana Jones and the Temple of Doom, came around three years later, she was a junior producer on the movie.

In the interim, she co-founded Spielberg's production company Amblin Entertainment, which led to her executive producing 1980s favorites like The Goonies, Who Framed Roger Rabbit, and the Back to the Future trilogy. She also became one of Spielberg's most trusted personal producers on films that include Empire of the Sun, Jurassic Park, and Munich. She likewise worked outside of Spielberg's orbit in the 1990s and 2000s as a producer on films that run the gamut from Twister and The Sixth Sense to Seabiscuit and The Curious Case of Benjamin Button.

The reason Kennedy got the Lucasfilm gig after the Disney purchase is that she came up in the same Hollywood where filmmakers like Lucas and Spielberg made the cultural touchstones Disney so eagerly builds on. She was there at the laying of the foundations, producing two-thirds of the original Indiana Jones flicks. She earned her place in film history by being a smart, resourceful filmmaker who contributed to movies that weren't only commercially viable but, in many instances, built to last as classics through the decades that followed.