WWW.WIRED.COM
Google Pixel 9a Review: Still the Best Smartphone

It might not look like a classic Pixel phone anymore, but this Android is still the best smartphone bargain.
APPLEINSIDER.COM
How to use profiles to change how Terminal windows look in macOS

The Mac Terminal app is your window into macOS's UNIX underpinnings. Here's how to customize the appearance of Terminal in macOS.

The UNIX operating system standard goes back decades — to 1969, in fact, when its development began at Bell Labs; it was later rewritten in the C programming language. There have been countless variants of UNIX over the years, and the rights to what is now known as UNIX System V have changed hands several times since AT&T sold them in the 1990s.

Continue reading on AppleInsider.
ARCHINECT.COM
Will Princeton stand by academic freedom or fold to funding threats?

Princeton University president Christopher L. Eisgruber has earned praise for his stance on academic freedom and on protecting funding opportunities from the political pressures now being placed on universities by the Trump Administration. Facing Title VI lawsuits and the loss of $210 million in grant funding, he tells NPR: “We make our decisions at Princeton based on our values and our principles… we’re going to stand strong for our values. We believe it’s important to defend academic freedom, and that’s not something that can be compromised.”

Eisgruber repeated his position in an interview with the NYT's The Daily podcast this week. Princeton is joined by other Ivies facing threatened grant cuts: $1 billion (Cornell), $510 million (Brown), and $175 million (UPenn). Harvard has said it will borrow $750 million, and Columbia largely capitulated last month to demands tied to $400 million.
GAMINGBOLT.COM
Endless Legend 2 Enters PC Early Access This Summer

Amplitude Studios’ Endless Legend 2 will enter early access on PC this summer. The developer has released a new trailer highlighting the setting and new features, including changing land masses and dialogue systems.

Like its predecessor, Endless Legend 2 is a 4X strategy game in which players select a faction and coexist with (or compete against) the others. The world is nearing its end due to “cataclysmic events,” and you can decide whether to forge friendships, rule through power, or uncover the land’s mysteries. Each faction has different traits, units, and skills, and you can either control them directly in battles or auto-resolve fights for faster pacing. Heroes also return, offering unique traits to help your empire and armies.

As for the dynamically shifting terrain, it can expose new resources, but beware: some previously isolated factions may gain footholds near your cities. Amplitude hasn’t confirmed how much content will be available when early access launches, so stay tuned for more details.
VENTUREBEAT.COM
Writer unveils ‘AI HQ’ platform, betting on agents to transform enterprise work

Enterprise AI company Writer unveiled a new platform today that it claims will help businesses finally bridge the gap between AI’s theoretical potential and real-world results. The product, called “AI HQ,” represents a significant shift toward autonomous AI systems that can execute complex workflows across organizations.

“This is not another hype train, but a massive change coming to enterprise software,” said May Habib, Writer’s CEO and co-founder, at a press conference announcing the product. “The vast majority of the enterprise has not gotten meaningful results from generative AI, and it’s been two years. There has never before been such a gap between what the tech is capable of and what the enterprise results have been.”

AI HQ is Writer’s answer to this problem — a platform for building, activating, and supervising AI “agents” that can perform sequences of tasks traditionally requiring human intervention. These agents can make decisions, reason through problems, and take actions across different systems with little human oversight.

How Writer’s AI agents move beyond chatbots to deliver real business value

The announcement comes as many enterprises reevaluate their AI strategies. According to Habib, most AI implementations have failed to deliver substantial value, with businesses struggling to move beyond basic generative AI use cases. “Process mapping is the new prompt engineering,” Habib said, highlighting how the company’s approach has evolved beyond simply crafting the right text prompts to designing entire workflows for AI systems.

AI HQ consists of three main components: a development environment called Agent Builder, where IT and business teams collaboratively create agents; Writer Home, which provides access to over 100 pre-built agents for specific industries and functions; and observability tools for monitoring and governing agent behavior at scale.

During a product demonstration, Writer executives showed how customers are already using these technologies. In one example, an investment management firm uses Writer’s agents to automatically generate fund reports and personalized market commentary by pulling data from Snowflake, SEC filings, and real-time web searches. Another demonstration showed a marketing workflow where an agent could analyze a strategy brief, create a project in Adobe Workfront, generate content, find or create supporting images, and prepare the material for legal review.

Enterprise AI that actually works: How Writer’s autonomous agents tackle complex business workflows

Writer’s pivot to agent-based AI reflects broader market trends. While many companies initially focused on using large language models for text generation and chat functions, businesses are increasingly exploring how AI can automate complex processes. “Ten percent of the headcount is going to be enough,” Habib told Forbes in a recent interview about the potential workforce impact of agent technologies. This dramatic assertion underscores the transformative potential — and potential disruption — these technologies may bring to knowledge work.

Anna Griffin, Chief Marketing Officer at cybersecurity firm Commvault and an early adopter of Writer’s agent technology, spoke during the press conference about the value of connecting previously siloed systems.
“What if I could connect our Salesforce, Gainsight, Optimizely? What if I could pull together enough of the insights across these systems that we could actually work to create an experience for our customer that is seamless?” Griffin said. Her advice for others: “Think about the hardest, gnarliest problem your industry has, and start thinking about how agentic AI is going to solve that.”

The future of AI learning: Writer’s self-evolving models remember mistakes and learn without retraining

The event also featured a presentation from Waseem AlShikh, Writer’s co-founder and CTO, who unveiled research into “self-evolving models” — AI systems that can learn from their mistakes over time without additional training. “If we expect AI to behave more like a human, we need it to learn more like a human,” AlShikh explained. He demonstrated how traditional AI models repeatedly make the same errors when faced with a maze challenge, while self-evolving models remember past failures and find better solutions. (A toy sketch of this idea appears at the end of this article.)

“This unique architecture means that over time, as the model is used, it gains knowledge — a model that gets smarter the more you engage with it,” AlShikh said. Writer expects to have self-evolving models in pilot by the end of the year.

Inside Writer’s $1.9 billion valuation: How enterprise AI adoption is driving explosive growth

Writer’s aggressive expansion comes after raising $200 million in Series C funding last November, which valued the company at $1.9 billion. The funding round was co-led by Premji Invest, Radical Ventures, and ICONIQ Growth, with participation from major enterprise players including Salesforce Ventures, Adobe Ventures, and IBM Ventures.

The company has seen impressive growth, with a reported 160% net retention rate, meaning customers typically expand their contracts by 60% on average after initial adoption. According to a Forbes report published today, some clients have grown from initial contracts of $200,000-$300,000 to spending approximately $1 million each.

Writer’s approach differs from competitors like OpenAI and Anthropic, which have raised billions but focus more on developing general-purpose AI models. Instead, Writer has developed its own models — named Palmyra — specifically designed for enterprise use cases. “We trained our own models even though everyone advised against it,” AlShikh told Forbes. This strategy has allowed Writer to create AI that’s more secure for enterprise deployment, as client data is retrieved from dedicated servers and isn’t used to train models, mitigating concerns about sensitive information leaks.

Writer’s ambitions face obstacles in a competitive landscape. The enterprise AI software market — projected to grow from $58 billion to $114 billion by 2027 — is attracting intense competition from established tech giants and well-funded startups alike. Paul Dyrwal, VP of Generative AI at Marriott, who appeared at Writer’s press conference, shared advice for enterprises navigating this rapidly evolving field: “Focus on fewer, higher-value opportunities rather than chasing every possibility.”

The announcement also comes amid growing concerns about AI’s impact on jobs. While Habib acknowledged that AI will change work dramatically, she painted an optimistic picture of the transition. “Your people are instrumental to redesigning your processes to be AI-native and shaping what the future of work looks like,” she said.
“We think that very soon, on a horizon of five to 10 years, we won’t be doing work as much as we will be building AI that does the work. This will create exciting new roles, new AI-related jobs that are interesting and rewarding.”

From software vendor to innovation partner: Writer’s vision for AI-native enterprise transformation

As Writer positions itself at the forefront of enterprise AI, Habib emphasized that the company sees itself as more than just a software vendor. “We’re not a software vendor here. We see ourselves as more than that. We’re your innovation partners,” she said. “If you want to rebuild your company to be AI-native, if you want to be part of the most important enterprise transformation maybe ever, go sign up to be in the Writer agent beta right now. Together, we can dream big and build fast.”

The Agent Builder and observability tools are currently in beta, with general availability expected later this spring, while Writer Home and the library of ready-to-use agents are available to all customers starting today.
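Writer has not published implementation details for its self-evolving models, but the maze behavior AlShikh described can be loosely illustrated in plain Python: an agent that writes failed moves to an external memory and consults that memory on later runs improves across episodes without any retraining. Everything below — the maze, the memory class, the episode loop — is a hypothetical sketch of the general idea, not Writer's architecture.

```python
import random

class Maze:
    """A 4x4 grid with a few blocked cells; start at (0, 0), goal at (3, 3)."""
    blocked = {(1, 1), (2, 1), (1, 3)}
    start, goal = (0, 0), (3, 3)

    def moves(self, pos):
        x, y = pos
        nxt = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        return [p for p in nxt
                if 0 <= p[0] < 4 and 0 <= p[1] < 4 and p not in self.blocked]

class FailureMemory:
    """Persistent store of moves that previously led to dead ends."""
    def __init__(self):
        self.bad = set()

def run_episode(maze, memory):
    pos, path, visited, steps = maze.start, [], {maze.start}, 0
    while pos != maze.goal:
        steps += 1
        # Skip cells already visited this episode and moves remembered as bad.
        options = [p for p in maze.moves(pos)
                   if p not in visited and (pos, p) not in memory.bad]
        if options:
            path.append(pos)
            pos = random.choice(options)
            visited.add(pos)
        else:
            prev = path.pop()            # dead end: remember the move that led here
            memory.bad.add((prev, pos))
            pos = prev
    return steps

maze, memory = Maze(), FailureMemory()
for episode in range(3):
    print(f"episode {episode}: reached the goal in {run_episode(maze, memory)} steps")
```

Run repeatedly, the loop tends to need fewer steps each episode, because dead-end moves recorded earlier are filtered out up front — the “gets smarter the more you engage with it” behavior in miniature, with the “learning” living in external memory rather than in model weights.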
WWW.THEVERGE.COM
Razer’s PC-to-mobile streaming platform is now available

Using Razer PC Remote Play with the Kishi Ultra mobile controller and an Android device unlocks an additional haptics feature. (Image: Razer)

Razer has finally launched its platform for streaming PC games to mobile devices at their native screen resolutions, aspect ratios, and refresh rates. Razer PC Remote Play is now available for download on the App Store and Google Play and is compatible with Windows, Apple, and Android mobile devices running at least Windows 11, iOS 18, and Android 14, respectively.

Built on the Moonlight streaming client, Razer PC Remote Play requires people to install the Razer Cortex game launcher – which now has a redesigned interface – on their PCs; the launcher is compatible with services like Steam, Epic Games, and Microsoft’s PC Game Pass. Mobile devices need the Razer Nexus game launcher as well as the new Razer PC Remote Play app.

Razer PC Remote Play was first announced at CES 2025 and has been in beta since. Razer says the launch version of the app adds the “AV1 video codec for improved quality and lower latency” plus support for the Razer Kishi Ultra and all controllers compatible with iOS and Android. People who use the Kishi Ultra with Android devices will benefit from Razer’s Sensa HD Haptics feature, which turns a game’s audio into haptic feedback using the same hardware mobile devices use for silent vibrating notifications.

When streaming PC games to the iPad, the Razer PC Remote Play app is fully compatible with connected keyboards, mice, and trackpads, potentially making Apple’s tablet a good option for streaming and playing first-person shooters.
WWW.MARKTECHPOST.COM
Interview with Hamza Tahir: Co-founder and CTO of ZenML

Bio: Hamza Tahir is a software developer turned ML engineer. An indie hacker at heart, he loves ideating, implementing, and launching data-driven products. His previous projects include PicHance, Scrilys, BudgetML, and you-tldr. Based on his learnings from deploying ML in production for predictive maintenance use cases at his previous startup, he co-created ZenML, an open-source MLOps framework for creating production-grade ML pipelines on any infrastructure stack.

Question: From Early Projects to ZenML: Given your rich background in software development and ML engineering — from pioneering projects like BudgetML to co-founding ZenML and building production pipelines at maiot.io — how has your personal journey influenced your approach to creating an open-source ecosystem for production-ready AI?

My journey from early software development to co-founding ZenML has deeply shaped how I approach building open-source tools for AI production. Working on BudgetML taught me that accessibility in ML infrastructure is critical — not everyone has enterprise-level resources, yet everyone deserves access to robust tooling. At my first startup, maiot.io, I witnessed firsthand how fragmented the MLOps landscape was, with teams cobbling together solutions that often broke in production. This fragmentation creates real business pain points — for example, many enterprises struggle with lengthy time-to-market cycles for their ML models due to these exact challenges.

These experiences drove me to create ZenML with a focus on being production-first, not production-eventual. We built an ecosystem that brings structure to the chaos of managing models, ensuring that what works in your experimental environment transitions smoothly to production. Our approach has consistently helped organizations reduce deployment times and increase efficiency in their ML workflows. The open-source approach wasn’t just a distribution strategy — it was foundational to our belief that MLOps should be democratized, allowing teams of all sizes to benefit from best practices developed across the industry. We’ve seen organizations of all sizes — from startups to enterprises — accelerate their ML development cycles by 50-80% by adopting these standardized, production-first practices.

Question: From Lab to Launch: Could you share a pivotal moment or technical challenge that underscored the need for a robust MLOps framework in your transition from experimental models to production systems?

ZenML grew out of our experience working in predictive maintenance. We were essentially functioning as consultants, implementing solutions for various clients. A little over four years ago when we started, there were far fewer tools available, and those that existed lacked maturity compared to today’s options. We quickly discovered that different customers had vastly different needs — some wanted AWS, others preferred GCP. While Kubeflow was emerging as a solution that operated on top of Kubernetes, it wasn’t yet the robust MLOps framework that ZenML offers now.

The pivotal challenge was finding ourselves repeatedly writing custom glue code for each client implementation. This pattern of constantly developing similar but platform-specific solutions highlighted the clear need for a more unified approach. We initially built ZenML on top of TensorFlow’s TFX, but eventually removed that dependency to develop our own implementation that could better serve diverse production environments.
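For readers unfamiliar with the framework, ZenML's current Python API centers on two decorators, @step and @pipeline. The sketch below is a minimal, illustrative pipeline in that style — the step names and bodies are placeholders, not code from ZenML's docs — showing the kind of portable pipeline that replaces per-client glue code:

```python
from zenml import pipeline, step

@step
def ingest_sensor_data() -> list[float]:
    # Placeholder for real telemetry ingestion (e.g. from a warehouse).
    return [0.9, 1.1, 0.7]

@step
def train_model(readings: list[float]) -> float:
    # Placeholder "training" that just returns a summary statistic.
    return sum(readings) / len(readings)

@pipeline
def predictive_maintenance_pipeline():
    readings = ingest_sensor_data()
    train_model(readings)

if __name__ == "__main__":
    # With a different ZenML stack registered (local, AWS, GCP, ...),
    # this same call runs on that infrastructure unchanged.
    predictive_maintenance_pipeline()
```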
Question: Open-Source vs. Closed-Source in MLOps: While open-source solutions are celebrated for innovation, how do they compare with proprietary options in production AI workflows? Can you share how community contributions have enhanced ZenML’s capabilities in solving real MLOps challenges?

Proprietary MLOps solutions offer polished experiences but often lack adaptability. Their biggest drawback is the “black box” problem — when something breaks in production, teams are left waiting for vendor support. With open-source tools like ZenML, teams can inspect, debug, and extend the tooling themselves. This transparency enables agility. Open-source frameworks incorporate innovations faster than quarterly releases from proprietary vendors. For LLMs, where best practices evolve weekly, this speed is invaluable.

The power of community-driven innovation is exemplified by one of our most transformative contributions — a developer who built the “Vertex” orchestrator integration for Google Cloud Platform. This wasn’t just another integration — it represented a completely new approach to orchestrating pipelines on GCP that opened up an entirely new market for us. Prior to this contribution, our GCP users had limited options. The community member developed a comprehensive Vertex AI integration that enabled seamless orchestration on GCP.

Question: Integrating LLMs into Production: With the surge in generative AI and large language models, what are the key obstacles you’ve encountered in LLMOps, and how does ZenML help mitigate these challenges?

LLMOps presents unique challenges including prompt engineering management, complex evaluation metrics, escalating costs, and pipeline complexity. ZenML helps by providing:

- Structured pipelines for LLM workflows, tracking all components from prompts to post-processing logic
- Integration with LLM-specific evaluation frameworks
- Caching mechanisms to control costs
- Lineage tracking for debugging complex LLM chains

Our approach bridges traditional MLOps and LLMOps, allowing teams to leverage established practices while addressing LLM-specific challenges. ZenML’s extensible architecture lets teams incorporate emerging LLMOps tools while maintaining reliability and governance.

Question: Streamlining MLOps Workflows: What best practices would you recommend for teams aiming to build secure, scalable ML pipelines using open-source tools, and how does ZenML facilitate this process?

For teams building ML pipelines with open-source tools, I recommend:

- Start with reproducibility through strict versioning
- Design for observability from day one
- Embrace modularity with interchangeable components
- Automate testing for data, models, and security
- Standardize environments through containerization

ZenML facilitates these practices with a Pythonic framework that enforces reproducibility, integrates with popular MLOps tools, supports modular pipeline steps, provides testing hooks, and enables seamless containerization. We’ve seen these principles transform organizations like Adeo Leroy Merlin. After implementing these best practices through ZenML, they reduced their ML development cycle by 80%, with their small team of data scientists now deploying new ML use cases from research to production in days rather than months, delivering tangible business value across multiple production models.

The key insight: MLOps isn’t a product you adopt, but a practice you implement. Our framework makes following best practices the path of least resistance while maintaining flexibility.
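Two of those practices — caching for cheap, reproducible reruns and containerized environments — map directly onto ZenML decorator options. A rough sketch, assuming a recent ZenML release; the step bodies and the pinned requirement are placeholders:

```python
from zenml import pipeline, step
from zenml.config import DockerSettings

# Pin the pipeline's execution environment (placeholder requirement).
docker_settings = DockerSettings(requirements=["pandas"])

@step(enable_cache=True)   # deterministic prep: cached reruns are free
def prepare_data() -> list[int]:
    return [1, 2, 3]

@step(enable_cache=False)  # e.g. a step whose output changes between runs
def score(data: list[int]) -> float:
    return sum(data) / len(data)

@pipeline(settings={"docker": docker_settings})
def standardized_pipeline():
    score(prepare_data())
```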
Question: Engineering Meets Data Science: Your career spans both software engineering and ML engineering — how has this dual expertise influenced your design of MLOps tools that cater to real-world production challenges?

My dual background has revealed a fundamental disconnect between data science and software engineering cultures. Data scientists prioritize experimentation and model performance, while software engineers focus on reliability and maintainability. This divide creates significant friction when deploying ML systems to production. ZenML was designed specifically to bridge this gap by creating a unified framework where both disciplines can thrive. Our Python-first APIs provide the flexibility data scientists need while enforcing software engineering best practices like version control, modularity, and reproducibility. We’ve embedded these principles into the framework itself, making the right way the easy way.

This approach has proven particularly valuable for LLM projects, where the technical debt accumulated during prototyping can become crippling in production. By providing a common language and workflow for both researchers and engineers, we’ve helped organizations reduce their time-to-production while simultaneously improving system reliability and governance.

Question: MLOps vs. LLMOps: In your view, what distinct challenges do traditional MLOps face compared to LLMOps, and how should open-source frameworks evolve to address these differences?

Traditional MLOps focuses on feature engineering, model drift, and custom model training, while LLMOps deals with prompt engineering, context management, retrieval-augmented generation, subjective evaluation, and significantly higher inference costs. Open-source frameworks need to evolve by providing:

- Consistent interfaces across both paradigms
- LLM-specific cost optimizations like caching and dynamic routing
- Support for both traditional and LLM-specific evaluation
- First-class prompt versioning and governance

ZenML addresses these needs by extending our pipeline framework for LLM workflows while maintaining compatibility with traditional infrastructure. The most successful teams don’t see MLOps and LLMOps as separate disciplines, but as points on a spectrum, using common infrastructure for both.

Question: Security and Compliance in Production: With data privacy and security being critical, what measures does ZenML implement to ensure that production AI models are secure, especially when dealing with dynamic, data-intensive LLM operations?

ZenML implements robust security measures at every level:

- Granular pipeline-level access controls with role-based permissions
- Comprehensive artifact provenance tracking for complete auditability
- Secure handling of API keys and credentials through encrypted storage
- Data governance integrations for validation, compliance, and PII detection
- Containerization for deployment isolation and attack surface reduction

These measures enable teams to implement security by design, not as an afterthought. Our experience shows that embedding security into the workflow from the beginning dramatically reduces vulnerabilities compared to retrofitting security later. This proactive approach is particularly crucial for LLM applications, where complex data flows and potential prompt injection attacks create unique security challenges that traditional ML systems don’t face.
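On the credentials point specifically, ZenML exposes a central secret store through its client, so keys are fetched at run time rather than committed to code. A brief sketch; it assumes a secret named llm_provider was registered beforehand (for example with the zenml secret create CLI command), and the actual provider call is omitted:

```python
from zenml import step
from zenml.client import Client

@step
def call_llm(prompt: str) -> str:
    # Pull the API key from ZenML's encrypted secret store at run time,
    # instead of hard-coding it or reading it from a checked-in .env file.
    api_key = Client().get_secret("llm_provider").secret_values["api_key"]
    # ... invoke your LLM provider's SDK with api_key here ...
    return f"(response to: {prompt})"
```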
Question: Future Trends in AI: What emerging trends for MLOps and LLMOps do you believe will redefine production workflows over the next few years, and how is ZenML positioning itself to lead these changes?

Agents and workflows represent a critical emerging trend in AI. Anthropic notably differentiated between these approaches in their blog about Claude agents, and ZenML is strategically focusing on workflows, primarily for reliability considerations. While we may eventually reach a point where we can trust LLMs to autonomously generate plans and iteratively work toward goals, current production systems demand the deterministic reliability that well-defined workflows provide. We envision a future where workflows remain the backbone of production AI systems, with agents serving as carefully constrained components within a larger, more controlled process — combining the creativity of agents with the predictability of structured workflows.

The industry is witnessing unprecedented investment in LLMOps and LLM-driven projects, with organizations actively experimenting to establish best practices as models rapidly evolve. The definitive trend is the urgent need for systems that deliver both innovation and enterprise-grade reliability — precisely the intersection where ZenML is leveraging its years of battle-tested MLOps experience to create transformative solutions for our customers.

Question: Fostering Community Engagement: Open source thrives on collaboration — what initiatives or strategies have you found most effective in engaging the community around ZenML and encouraging contributions in MLOps and LLMOps?

We’ve implemented several high-impact community engagement initiatives that have yielded measurable results. Beyond actively soliciting and integrating open-source contributions for components and features, we hosted one of the first large-scale MLOps competitions in 2023, which attracted over 200 participants and generated dozens of innovative solutions to real-world MLOps challenges. We’ve established multiple channels for technical collaboration, including an active Slack community, regular contributor meetings, and comprehensive documentation with clear contribution guidelines. Our community members regularly discuss implementation challenges, share production-tested solutions, and contribute to expanding the ecosystem through integrations and extensions. These strategic community initiatives have been instrumental in not only growing our user base substantially but also advancing the collective knowledge around MLOps and LLMOps best practices across the industry.

Question: Advice for Aspiring AI Engineers: Finally, what advice would you give to students and early-career professionals who are eager to dive into the world of open-source AI, MLOps and LLMOps, and what key skills should they focus on developing?

For those entering MLOps and LLMOps:

- Build complete systems, not just models — the challenges of production offer the most valuable learning
- Develop strong software engineering fundamentals
- Contribute to open-source projects to gain exposure to real-world problems
- Focus on data engineering — data quality issues cause more production failures than model problems
- Learn cloud infrastructure basics

Key skills to develop include Python proficiency, containerization, distributed systems concepts, and monitoring tools. For bridging roles, focus on communication skills and product thinking.
Cultivate “systems thinking” — understanding component interactions is often more valuable than deep expertise in any single area. Remember that the field is evolving rapidly. Being adaptable and committed to continuous learning is more important than mastering any particular tool or framework.

Question: How does ZenML’s approach to workflow orchestration differ from traditional ML pipelines when handling LLMs, and what specific challenges does it solve for teams implementing RAG or agent-based systems?

At ZenML, we believe workflow orchestration must be paired with robust evaluation systems — otherwise, teams are essentially flying blind. This is especially crucial for LLM workflows, where behaviour can be much less predictable than traditional ML models. Our approach emphasizes “eval-first development” as the cornerstone of effective LLM orchestration. This means evaluation runs as quality gates or as part of the outer development loop, incorporating user feedback and annotations to continually improve the system.

For RAG or agent-based systems specifically, this eval-first approach helps teams identify whether issues are coming from retrieval components, prompt engineering, or the foundation models themselves. ZenML’s orchestration framework makes it straightforward to implement these evaluation checkpoints throughout your workflow, giving teams confidence that their systems are performing as expected before reaching production (a sketch of this pattern appears at the end of this interview).

Question: What patterns are you seeing emerge for successful hybrid systems that combine traditional ML models with LLMs, and how does ZenML support these architectures?

ZenML takes a deliberately unopinionated approach to architecture, allowing teams to implement patterns that work best for their specific use cases. Common hybrid patterns include RAG systems with custom-tuned embedding models and specialized language models for structured data extraction. This hybrid approach — combining custom-trained models with foundation models — delivers superior results for domain-specific applications. ZenML supports these architectures by providing a consistent framework for orchestrating both traditional ML components and LLM components within a unified workflow. Our platform enables teams to experiment with different hybrid architectures while maintaining governance and reproducibility across both paradigms, making the implementation and evaluation of these systems more manageable.

Question: As organizations rush to implement LLM solutions, how does ZenML help teams maintain the right balance between experimentation speed and production governance?

ZenML handles best practices out of the box — tracking metadata, evaluations, and the code used to produce them without teams having to build this infrastructure themselves. This means governance doesn’t come at the expense of experimentation speed. As your needs grow, ZenML grows with you. You might start with local orchestration during early experimentation phases, then seamlessly transition to cloud-based orchestrators and scheduled workflows as you move toward production — all without changing your core code.

Lineage tracking is a key feature that’s especially relevant given emerging regulations like the EU AI Act. ZenML captures the relationships between data, models, and outputs, creating an audit trail that satisfies governance requirements while still allowing teams to move quickly.
This balance between flexibility and governance helps prevent organizations from ending up with “shadow AI” systems built outside official channels.

Question: What are the key integration challenges enterprises face when incorporating foundation models into existing systems, and how does ZenML’s workflow approach address these?

A key integration challenge for enterprises is tracking which foundation model (and which version) was used for specific evaluations or production outputs. This lineage and governance tracking is critical both for regulatory compliance and for debugging issues that arise in production. ZenML addresses this by maintaining a clear lineage between model versions, prompts, inputs, and outputs across your entire workflow. This provides both technical and non-technical stakeholders with visibility into how foundation models are being used within enterprise systems.

Our workflow approach also helps teams manage environment consistency and version control as they move LLM applications from development to production. By containerizing workflows and tracking dependencies, ZenML reduces the “it works on my machine” problems that often plague complex integrations, ensuring that LLM applications behave consistently across environments.
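To close, here is a hedged sketch of the “eval-first development” pattern Tahir describes above: a RAG-style ZenML pipeline whose evaluation step acts as a quality gate before anything is promoted. The retrieval, generation, and scoring bodies are placeholders, not a real implementation:

```python
from zenml import pipeline, step

@step
def retrieve(query: str) -> list[str]:
    return ["doc snippet 1", "doc snippet 2"]            # placeholder retrieval

@step
def generate(query: str, docs: list[str]) -> str:
    return f"answer to {query!r} from {len(docs)} docs"  # placeholder LLM call

@step
def evaluate(answer: str) -> float:
    return 0.92                                          # placeholder eval metric

@step
def quality_gate(score: float, threshold: float = 0.8) -> bool:
    # Failing the gate fails the pipeline run, blocking promotion.
    if score < threshold:
        raise RuntimeError(f"eval score {score} below threshold {threshold}")
    return True

@pipeline
def rag_with_eval(query: str = "What changed in Q3?"):
    docs = retrieve(query)
    quality_gate(evaluate(generate(query, docs)))
```

Because the gate is itself a tracked step, every pass or fail decision lands in the same lineage record as the prompt, retrieved documents, and model output that produced it.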
WWW.IGN.COM
Cabin Crew Simulator Codes (April 2025)

Last updated April 10, 2025: Checked for new Cabin Crew Simulator codes!

Looking for additional SkyBux to customize and upgrade your airline? This article has you covered! Here you can find a list of all currently active Cabin Crew Simulator codes. Redeem them to boost your SkyBux and spend them on in-flight meals, travel to different destinations, and new aircraft in Roblox.

Working Cabin Crew Simulator Codes (April 2025)

Here are the currently active Cabin Crew Simulator codes for April 2025 and the rewards you'll get for redeeming them:

- dubai - 2,500 SkyBux (NEW)
- amenity - 2,500 SkyBux (NEW)
- candycane - 1,800 SkyBux
- trees - 1,500 SkyBux
- spooky - 2,000 SkyBux
- london - 1,500 SkyBux
- 200m - 2,000 SkyBux
- ally - 1,200 SkyBux
- gear - 2,000 SkyBux
- myles - 2,000 SkyBux

All Expired Cabin Crew Simulator Codes

The following codes can no longer be redeemed as of April 2025: airport, star, customize, decoration, 100m, airstairs, service, galley, boba, jetway, badge, snow, pilot, landing, cruising, captain, evacuate, airliner, mission, wheelsup.

How to Redeem Cabin Crew Simulator Codes

To redeem Cabin Crew Simulator codes, you'll need to follow these steps:

1. Load up Cabin Crew Simulator on Roblox.
2. Press Play.
3. Look for the giftbox icon on the left-hand side of the screen.
4. Paste the code into the box, then press enter or the Claim button.

Why Isn't My Cabin Crew Simulator Code Working?

When a Cabin Crew Simulator code isn't working, it's usually for one of two reasons: either the code has expired, or there's a typo. When it's a typo, the game will say "Invalid Code" when you press enter. To avoid typos, we'd recommend copying the codes directly from this article and pasting them into the codes box in Cabin Crew Simulator. If a code is no longer redeemable, it will say "Expired" when you hit enter.

How to Get More Cabin Crew Simulator Codes

We'll keep this article updated each day, but if you want to get Cabin Crew Simulator codes as soon as they drop, you'll want to follow @CabinCrewRBLX on X. There is also a Discord server for Cruising Studios, where codes are posted in the Announcements channel.

What is Cabin Crew Simulator in Roblox?

The aim of Cabin Crew Simulator is to create your own successful airline and aircraft. You'll be thrown into the role of cabin crew, responsible for making sure passengers enjoy their flights and arrive safely at their destinations. You'll need to perform various tasks during flights, from boarding passengers to serving them drinks and snacks, all of which will reward you with SkyBux. The in-game currency will allow you to purchase bigger airplanes, unlock new destinations, upgrade your uniform, and more.

Lauren Harper is a freelance writer and editor who has covered news, reviews, and features for over a decade in various industries. She has contributed to guides at IGN for games including Elden Ring, The Legend of Zelda: Tears of the Kingdom, Starfield, Pikmin 4, and more. With an MA in Victorian Gothic History and Culture, she loves anything that falls under that category. She's also a huge fan of point-and-click adventures, horror games, and films. You can talk to her about your favourites over at @prettyheartache.bsky.social.
THENEXTWEB.COM
An answer to AI’s energy addiction? More AI, says the IEA

The International Energy Agency (IEA) has published its first major report on the AI gold rush’s impact on global energy consumption — and its findings paint a worrying, and perhaps contradictory, picture.

Energy use from data centres, including for artificial intelligence applications, is predicted to double over the next five years to 3% of global energy use. AI-specific power consumption could drive over half of this growth globally, the report found. Some data centres today consume as much electricity as 100,000 households; the hyperscalers of the future could gobble up 20 times that, according to the IEA. By 2030, data centres are predicted to run on 50% renewable energy, with the rest coming from a mix of coal, nuclear power, and new natural gas-fired plants.

The findings paint a bleak picture for the climate, but there’s a silver lining, the IEA said. While AI is set to consume more energy, its ability to unlock efficiencies in power systems and discover new materials could provide a counterweight.

“With the rise of AI, the energy sector is at the forefront of one of the most important technological revolutions of our time,” said Fatih Birol, the IEA’s executive director. “AI is a tool, potentially an incredibly powerful one, but it is up to us – our societies, governments, and companies – how we use it.”

AI can help to optimise power grids, increase the energy output of solar and wind farms through better weather forecasting, and detect leaks in vital infrastructure. The technology could also be used to plan transport routes more effectively or to design cities. AI also has the potential to discover new green materials for tech like batteries. However, the IEA warned that the combined impact of these AI-powered solutions would be “marginal” unless governments create the necessary “enabling conditions.”

“The net impact of AI on emissions – and therefore climate change – will depend on how AI applications are rolled out, what incentives and business cases arise, and how regulatory frameworks respond to the evolving AI landscape,” the report said.

Divisions in the AI energy debate

While AI could, in theory, curb energy use, major questions remain. Meanwhile, the technology’s negative climate impact is already baked in. The IEA predicts data centres will contribute 1.4% of global “combustion emissions” by 2030, almost triple today’s figure and nearly as much as air travel. And that figure doesn’t account for the embodied emissions from constructing all those new data centres and producing the materials that go into them.

Alex de Vries, a researcher at VU Amsterdam and the founder of Digiconomist, told Nature that he thinks the IEA has underestimated the growth in AI’s energy consumption. “Regardless of the exact number, we’re talking several percent of our global electricity consumption,” said de Vries. This uptick in data centre electricity use “could be a serious risk for our ability to achieve our climate goals,” he added.

Claude Turmes, Luxembourg’s energy minister, accused the IEA of presenting an overly optimistic view and of not addressing the tough realities that policymakers need to hear.
“Instead of making practical recommendations to governments on how to regulate and thus minimise the huge negative impact of AI and new mega data centres on the energy system, the IEA and its [executive director] Fatih Birol are making a welcome gift to the new Trump administration and the tech companies which sponsored this new US government,” he told the Guardian.

Aside from AI, there are more proven ways to curb energy use from data centres. These include immersion cooling, pioneered by startups like Netherlands-based Asperitas, Spain’s Submer, and UK-based Iceotope. Another is repurposing data centre heat for other applications, which is the value proposition of UK venture DeepGreen. All of these weird and wonderful solutions will need to scale up fast if they are to make a dent in data centres’ thirst for electricity. Ultimately, we also need to start using computing power more wisely.

The debate on sustainable AI will continue at TNW Conference, which takes place on June 19-20 in Amsterdam. Tickets for the event are now on sale; use the code TNWXMEDIA2025 at checkout to get 30% off.

Story by Siôn Geschwindt. Siôn is a climate and energy reporter at TNW. From nuclear fusion to e-scooters, he covers the length and breadth of Europe’s clean tech ecosystem. He’s happiest sourcing a scoop, investigating the impact of emerging technologies, and even putting them to the test. Siôn has five years of journalism experience and holds a dual degree in media and environmental science from the University of Cape Town, South Africa.