• WWW.THEVERGE.COM
    The world’s biggest zipper maker is developing a self-propelled zipper
    YKK’s self-propelled zipper prototype is chunky and currently being tested for more industrial applications. | Screenshot: YouTube

    Japan’s YKK, the world’s largest zipper manufacturer (go ahead, grab the nearest zipper, it probably says YKK on the pull), has announced a prototype self-propelled zipper with a built-in motor and gear mechanism it can use to zip itself up at the push of a button on a wired remote. The days of being embarrassed when you forget to zip up could soon be behind us, if it’s ever miniaturized from its current form, which is several inches long and a lot chunkier than the zipper pulls currently used on clothing.

    Although some recent zipper innovations, such as Under Armour’s one-handed MagZip upgrade, are designed to improve accessibility and make zippers easier to use for those with limited mobility, YKK envisions more industrial use cases for its prototype. In a video recently shared on the company’s YouTube channel, the self-propelled zipper connects a pair of 16-foot-tall membranes in about 40 seconds; zipping them together manually would require a ladder or other machinery. In another video, the prototype is used to quickly connect a pair of 13-foot-wide temporary shelters standing over eight feet tall, taking about 50 seconds to progress from one side to the other.

    The prototype uses a spinning worm gear that winds its way through the teeth on either side and pulls the zipper along behind it. In the videos, a power cable is seen attached to the prototype as it self-zips. In addition to miniaturizing the tech and adding a battery, YKK would also need to develop safety mechanisms ensuring nothing gets caught before its self-propelled zipper could ever reach consumers’ clothing.
  • TOWARDSAI.NET
    Principles for Building Model-Agnostic AI Systems
    April 25, 2025 | Author(s): Justin Trugman | Originally published on Towards AI.

    While individual AI models dominate headlines, the spotlight often misses where true progress happens: the systems that put those models to work. Each new release promises more nuanced understanding, deeper reasoning, richer comprehension — but capabilities alone don’t move industries. It’s the architecture that wraps around them, the orchestration layer that knows when and how to use them, that turns raw potential into applied intelligence.

    “I think in the long term, the most value will not come from the foundation models themselves, but from the systems that can intelligently call foundation models.” — Andrew Ng

    The next wave of AI breakthroughs won’t come from betting on the right model. It will come from building systems that can continuously integrate the best model for the job — whatever that model is, and whenever it arrives.

    Understanding the Building Blocks of AI Models

    Before designing any model-agnostic system, it’s important to understand what actually goes into a model. AI models aren’t just standalone entities — they’re built in layers, each contributing a different dimension of capability.

    You typically start with a base architecture. Most models today — especially those that handle text, tool use, or autonomous agent behavior — are based on transformers, the underlying neural network designs that make modern language models possible. If you’re working with visual generation, like images or video, you’re more likely dealing with diffusion models, which are optimized for high-fidelity synthesis through noise and denoising processes.

    On top of the architecture, you then define the scale and scope.
A Large Language Model (LLM) refers to a model with dozens of billions (sometimes hundreds of billions) of parameters, enabling broad, generalized capabilities across tasks. A Small Language Model (SLM) is a scaled-down version — lighter, faster, and often used for edge deployments or specific roles where compute efficiency matters more than versatility.

Once you have your base model, you can tailor it to specific domains or behaviors through post-training, commonly referred to as fine-tuning. Fine-tuning allows a model trained on general data to specialize in law, healthcare, finance, or any other area where nuanced understanding is critical. It’s also how instruction-following and tool-use behaviors are often reinforced.

From there, models can be extended with architectural practices or runtime techniques. A model might adopt a Mixture of Experts (MoE) approach, dynamically routing queries to different subnetworks based on the task. Or it might feature enhanced reasoning capabilities, such as chain-of-thought prompting, multi-step logic execution, or even structured planning frameworks. These capabilities allow the model to go beyond surface-level outputs and begin engaging in more deliberate, process-driven problem-solving.

Finally, you have specialized capabilities layered on top. A model might be multimodal, meaning it processes and generates across text, image, and audio inputs. It might combine different generative architectures — like transformers for text and diffusion for visuals — to handle diverse output modalities.

These layers don’t exist in isolation — they compound. And understanding how they stack is foundational to building systems that know what kind of model to use, where, and why.

Blueprints for Building Adaptable, Model-Agnostic Architectures

Designing a model-agnostic system means building for constant evolution. Models will change. Capabilities will shift.
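The Mixture of Experts idea above can be illustrated with a toy routing sketch. To be clear, this is not how a production MoE works internally: there, the gate is a learned layer inside the network and the experts are neural subnetworks. Plain functions and keyword scores stand in for both here, purely to show the dispatch pattern.

```python
# Toy sketch of MoE-style routing: a gate scores each "expert" for the
# incoming query and dispatches to the highest-scoring one. All names and
# keyword lists below are illustrative placeholders.

EXPERTS = {
    "math":    lambda q: f"[math expert] solving: {q}",
    "code":    lambda q: f"[code expert] writing: {q}",
    "general": lambda q: f"[general expert] answering: {q}",
}

KEYWORDS = {
    "math": {"sum", "integral", "equation"},
    "code": {"python", "function", "bug"},
}

def route(query: str) -> str:
    words = set(query.lower().split())
    # Score each specialist by keyword overlap; fall back to the generalist.
    scores = {name: len(words & kws) for name, kws in KEYWORDS.items()}
    best = max(scores, key=scores.get)
    expert = best if scores[best] > 0 else "general"
    return EXPERTS[expert](query)

print(route("fix this python function"))  # dispatched to the code expert
```

In a real MoE model this selection happens per token inside the forward pass; the point here is only the shape of the idea: one gate, many subnetworks, per-input dispatch.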
Your infrastructure needs to keep up without requiring a rebuild every time something new comes along.

The first principle is decoupling logic from inference. This means separating the definition of a task from the model that executes it. Your system should understand the task that needs to be done — without baking in assumptions about how it gets done. That choice — what model to use for that task — should be abstracted out so that it’s easy to switch between models without rewriting the system’s logic. Many modern inference providers have aligned on the OpenAI-compatible API standard (e.g., OpenAI, Anthropic, Groq, Hugging Face, and others), which makes it easier to build systems that can flexibly switch models without changing the surrounding infrastructure. Designing around this standard helps ensure your system remains portable and compatible as the ecosystem grows. It’s this layer of abstraction that enables true model-agnostic design — giving your system the ability to evolve, adapt, and scale without being anchored to any single provider or model lineage.

The next principle is treating models as specialists, not generalists. Every model has its own strengths — some are better at planning, others at creativity; some excel in reasoning, others in speed or low-cost inference. Your system should be designed to route each task to the model that’s best suited to handle it. This may mean assigning specific models to specific functions, or designing agents with models optimized for their assigned roles in a multi-agent system. For example, a fast, efficient planner might use a small reasoning model; a writer or content generator might use a highly expressive LLM; a fact-checking agent might use a more literal model with lower variance in output.
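The decoupling principle above can be sketched in a few lines: task logic is defined once, and the model serving each task lives in a config that can be swapped without touching that logic. The model names and the `EchoClient` below are illustrative placeholders, not recommendations; the only assumption is a client exposing an OpenAI-style chat call.

```python
# A minimal sketch of decoupling logic from inference: tasks know *what*
# to do, the config decides *which* model does it. Swapping a model means
# editing TASK_MODELS, not rewriting task code.

TASK_MODELS = {                      # hypothetical model identifiers
    "summarize": "provider-a/large-model",
    "classify":  "provider-b/small-model",
}

TASK_PROMPTS = {
    "summarize": "Summarize the following text:\n{text}",
    "classify":  "Classify the sentiment of:\n{text}",
}

def run_task(client, task: str, text: str) -> str:
    """Route a task to whatever model the config currently assigns it."""
    prompt = TASK_PROMPTS[task].format(text=text)
    return client.chat(
        model=TASK_MODELS[task],
        messages=[{"role": "user", "content": prompt}],
    )

class EchoClient:
    """Stand-in for a real OpenAI-compatible client, for demonstration."""
    def chat(self, model, messages):
        return f"{model} -> {messages[0]['content'][:20]}"

print(run_task(EchoClient(), "summarize", "long article text"))
```

Because `run_task` only depends on the chat-call shape, any provider that speaks the same API can be dropped in behind `client` without the task layer noticing.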
Whether it’s routing tasks directly to models or delegating them to agents with purpose-built model stacks, this approach acknowledges that no single model can do everything well — and that the highest-performing systems intelligently delegate tasks in ways that respect and leverage each model’s unique strengths.

Modularity means building systems where each component can be independently swapped or upgraded. Whether you’re dealing with a workflow, a multi-agent system, or something entirely custom, the principle stays the same: no single component should create friction for the rest of the system. When planning a module — whatever the function or responsibility — it should be consumable in isolation and replaceable without downstream disruption. This allows your system to evolve incrementally as new tools and models emerge, rather than forcing wholesale rewrites just to integrate something better.

The final principle is observability. If you can’t measure how well a model is performing in context, you can’t make informed decisions about when to keep it, replace it, or reconfigure how it’s being used. Model performance should be treated as a live signal — not a one-time benchmark. That means tracking metrics like latency, cost, token efficiency, and output quality at the system level, not just during eval runs. Is a cheaper alternative producing comparable results in certain contexts? Are reasoning agents making consistent errors under certain loads? Telemetry is what turns gut checks into data-driven decisions. It’s what gives you confidence to experiment — and evidence to justify when a change actually makes things better.

Designing systems this way sets the stage — but actually choosing the right model for each role requires careful evaluation, not guesswork.

Evaluating and Testing Models for Fit

Building a modular, model-agnostic system only pays off if you have a clear, structured way to evaluate which model belongs where.
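The observability principle described above can be made concrete with a thin wrapper that records per-call metrics for every model. This is a minimal sketch: the backends are plain functions standing in for real API calls, and in practice you would also record cost and token counts from the provider's response rather than output length.

```python
# Treat model performance as a live signal: wrap each inference callable so
# every call logs latency and output size, keyed by model name.

import time
from collections import defaultdict

METRICS = defaultdict(list)   # model name -> list of per-call records

def observed(model_name, backend):
    """Wrap an inference callable with per-call telemetry."""
    def call(prompt):
        start = time.perf_counter()
        output = backend(prompt)
        METRICS[model_name].append({
            "latency_s": time.perf_counter() - start,
            "output_chars": len(output),
        })
        return output
    return call

# Stub backend standing in for a real model call.
fast_model = observed("fast-model", lambda p: p.upper())
fast_model("hello")
fast_model("telemetry turns gut checks into data")

print(len(METRICS["fast-model"]), "calls recorded for fast-model")
```

Aggregating these records over time is what lets you answer the questions above, such as whether a cheaper alternative is producing comparable results, with data instead of intuition.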
It’s about finding the right model for each specific function within your system. That requires moving beyond general benchmarks and looking at how models behave in your context, under your constraints.

Start by assessing output consistency. A model that performs well in a vacuum but produces unstable or hallucinated results under pressure isn’t viable in production. You’re not just testing for correctness — you’re evaluating whether the model can behave predictably across similar inputs and degrade gracefully in edge cases.

Next, evaluate performance in the context of your system through A/B testing. Swap models across real user flows and workflows. Does a new model improve task success rates? Does it reduce fallbacks or speed up completion times? System-level testing is how you reveal performance trade-offs that aren’t visible in isolated prompts or benchmarks.

A useful tool for running these kinds of evaluations is PromptFoo, an open-source framework for systematically testing LLM prompts, agents, and RAG workflows. It lets you define test cases, compare model outputs side-by-side, and assert expectations across different providers. It helps turn model evaluation into a repeatable process rather than an ad-hoc exercise.

Not every evaluation is universal — some depend on the specific capabilities your AI system is built to support. Two areas that often demand targeted testing are tool use and reasoning performance.

If your AI system revolves around tool calling, it’s important to evaluate how well a model handles zero-shot tool use. Can it format calls correctly? Does it respect parameter structures? Can it maintain state across chained calls? Some models are optimized for structured interaction, while others — despite being strong at open-ended generation — struggle in environments that require precision and consistency.

For systems that depend on complex decision-making, reasoning performance becomes a critical axis. Can the model follow a chain-of-thought?
Break down a problem into substeps? Resolve conflicting information? These evaluations are most useful when they mirror your actual workflows — not when they’re pulled from abstract reasoning benchmarks that don’t reflect real-world demands.

Evaluating a model’s capabilities is only half the picture. Once a model looks viable functionally, the next question is: can your system run it efficiently in production?

Start with inference latency. Some models are inherently faster than others based on their architecture or generation behavior. But just as important is where and how the model is hosted — different providers, runtimes, and hardware stacks can significantly affect speed and responsiveness.

Then consider token usage and cost efficiency. Some models are more verbose by default, or take more tokens to arrive at a meaningful answer. Even if a model performs well, inefficient token usage can accumulate into significant costs at scale. These operational realities don’t determine which model is the most capable — but they often determine which one is actually deployable.

The pace of model development isn’t slowing down — it’s accelerating. But chasing the latest release won’t give your organization an edge. The real advantage lies in building systems that can flex, adapt, and integrate whatever comes next. Model-agnostic systems aren’t about hedging bets — they’re about making better ones. They allow you to continuously evaluate and adopt the best tool for each job without rewriting your stack every quarter. They support experimentation, specialization, and modular upgrades — all without breaking what’s already working.

In the long run, the intelligence of your system won’t be defined by which model you chose today — it will be defined by its ability to continuously adapt and integrate the right model as new ones emerge.
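As a closing illustration of the token-cost point made in the article, a small comparison harness shows why the cheapest model per token is not always the cheapest model per task. The model stubs, token counts, and prices below are invented for illustration; in practice both would come from real inference responses and provider price sheets.

```python
# Compare candidate models on cost per completed task rather than price
# per token. Prices and stubs are hypothetical.

CANDIDATES = {
    # name: (price per 1K tokens in USD, stub returning (answer, tokens_used))
    "verbose-model": (0.010, lambda q: ("a long elaborate answer", 900)),
    "concise-model": (0.015, lambda q: ("a short answer", 200)),
}

def cost_per_answer(name: str, query: str) -> float:
    price_per_1k, model = CANDIDATES[name]
    _, tokens = model(query)
    return price_per_1k * tokens / 1000

costs = {name: cost_per_answer(name, "same task") for name in CANDIDATES}
cheapest = min(costs, key=costs.get)
# Despite a higher per-token price, the concise model wins per task here.
print(cheapest, round(costs[cheapest], 4))
```

The same harness shape extends naturally to latency or quality scores, which is exactly the kind of system-level telemetry the article argues should drive model choices.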
  • WWW.IGN.COM
    Square Enix Is Teasing Something NieR-Related Again in a Cryptic Website Update
    Even if you enjoyed NieR: Automata and its predecessor, NieR: Replicant, you could be forgiven for not knowing that game actually got a sequel, and a pretty good one too. Unfortunately, that sequel, mobile game NieR: Re[in]carnation, hasn't been available for almost a year after being taken offline in April of 2024. But fortunately, a Square Enix website is dropping some pretty wild in-universe hints that this might be about to change.

    As pointed out to us on BlueSky by journalist Willa Rowe, an official NieR website that's currently publishing a NieR special novel to commemorate the series' 15th anniversary is teasing something NieR: Re[in]carnation-related via cryptic in-text hints and webpage source code. As a part of today's update to the novel, the just-published chapter 4 ends with some strange text that reads as follows:

    [WARNING] : Recovery protocol initialized...
    [ERROR] : Redirecting to backup node: < https://www.jp.square-enix.com/nierreincarnation/ >
    [INFO] : Analysis comments added to the source code.
    [FATAL] : Process forcibly halted.

    What does THAT mean? Who knows? But wait, there's more. If you hit "View Source" on that page, there are even more weird teases in the code comments, as hinted at above. They read, in order:

    SECURITY NOTICE: Unauthorized transmission detected. Signal disruption logged. Countermeasure protocol active. Redirecting to backup node: https://www.jp.square-enix.com/nierreincarnation/
    TODO: Optimize rendering efficiency. Contradictions in "Her" emotions may interfere with signal clarity. Ensure transmission remains undistorted.
    FIXME: Visibility settings enabled for crashed observer system. He is watching... Adjust access permissions before deployment. His name is............
    HACK: Log discrepancy detected. Records indicate previous modification attempts due to excessive interference from The Cage. Cross-check against original transmission before proceeding.

    Okay, there's a lot here, some of which is inscrutable and some of which makes sense. "The Cage", for instance, is the main location where most of NieR: Re[in]carnation takes place. And "Her" is a key character (yes, that's Her name) in Re[in]carnation as well. Between that and the hyperlink, it's pretty obvious what all this is referencing, but what exactly will become of it remains anyone's guess.

    The two final chapters of the novel are scheduled to publish on May 2, a week from today, and it's possible they may also pull back the curtain on whatever's happening here. Fans are speculating that it may be leading up to some sort of port or re-release of NieR: Re[in]carnation, which has been completely unplayable anywhere since it went offline last year. They're also pointing out that the chapters of the special novel, which are currently only available in Japanese (but which fans are translating), have some pretty significant tie-ins to all three other NieR games.

    If you're unfamiliar with Re[in]carnation, I highly recommend keeping an eye on this, especially if the end result is another opportunity to play it. Re[in]carnation is a good game in its own right that many slept on due to its being a mobile gacha game, but critically, it's also the direct sequel to NieR: Automata, and effectively is NieR 3, for those who have been waiting for it. Who knows? Maybe a new Re[in]carnation release could include some new content that ties the ongoing stream of loose NieR-related threads together at last.

    Rebekah Valentine is a senior reporter for IGN. You can find her posting on BlueSky @duckvalentine.bsky.social. Got a story tip? Send it to rvalentine@ign.com.
  • FORZA.NET
    Explore Horizon Realms in Forza Horizon 5 – Available Today for Players on All Platforms
    Discover Forza Horizon 5's vibrant legacy with its latest update rolling out today: Horizon Realms. Venture into 11 previously limited-time-only Evolving Worlds and the brand-new Stadium Track. Unlock new cars, Accolades, and Achievements!

    In addition to the Horizon Realms update, the Nissan Retro Rides Car Pack is also rolling out today on Xbox consoles, PlayStation 5, and PC via the Microsoft Store and Steam. If you’ve purchased the Premium Edition of Forza Horizon 5 for PlayStation 5, today you’ll be able to jump into Early Access. If you’ve pre-purchased the Forza Horizon 5 Standard Edition or Deluxe Edition from the PlayStation Store, you can jump into the Horizon Festival starting on April 29! Forza Horizon 5 on PS5 unlocks at 12:01am local time in your country on April 25 for Premium Edition and on April 29 for Standard and Deluxe Editions.

    *The game will become available to play at 12:01am EST in all U.S. territories. For players located in U.S. western time zones this will be at 9:00pm PST.

    Diving into Horizon Realms

    Not only is Horizon Realms reintroducing previously limited-time-only Evolving World locations, but it also comes with new reward cars to unlock, badges to collect, and achievements (or Trophies on PS5) to earn. Once available, you can access the Horizon Realms feature by going into the “Campaign” tab in the Main Menu or going to the Horizon Realms’ spatial indicator located to the left of the Stadium. If you are new to Forza Horizon 5, you need to unlock the Festival Playlist in order to play in Horizon Realms. The steps needed to do so are detailed in this article.

    New Accolades, Achievements & Trophies

    Attention completionists: 17 new achievements and 45 new Accolades are now available to unlock in the Xbox and Steam versions of Forza Horizon 5! The PlayStation 5 release also includes 111 Trophies for you to unlock in the base game (17 of them obtainable in Horizon Realms), and 27 for each of the expansions.
Start playing and go for that Platinum Trophy!

Horizon Realms Achievements/Trophies (Name | Description | Trophy | Gamerscore)

Enter the Realm | Enter any Realm in either Free Mode or Skill Mode | Bronze | 10
Master of Realms | Complete Skill Mode in every Realm | Bronze | 10
Stunt-tacular | Complete Skill Mode in the Stunt Park whilst driving any Hoonigan | Bronze | 20
You're on Thin Ice | Complete Skill Mode in the Ice Rink whilst driving the 2015 Volvo V60 Polestar | Bronze | 20
Tidy Little Bow | Complete Skill Mode in the Oval Track whilst driving the 2013 KTM X-Bow R | Bronze | 20
Party Like It's 1987 | Complete Skill Mode at the Summer Party whilst driving the 1987 Pontiac Firebird | Bronze | 20
Right at Home | Complete Skill Mode at Día de Muertos whilst driving any Nissan | Bronze | 20
Driving in a... | Complete Skill Mode in the Winter Wonderland whilst driving any Lotus | Bronze | 20
Winging It | Complete Skill Mode in the Lunar Drift Arena whilst driving any Aston Martin | Bronze | 20
As the Clock Strikes Midnight | Complete Skill Mode at Midnights at Horizon whilst driving any Lamborghini Aventador | Bronze | 20
You Might Need a Map | Drive a total of 20 miles at the Stadium Maze in Horizon Realms | Bronze | 10
Time-Wyrm | Maintain a speed of 88mph or above for 1 minute at the Lunar Drift Arena in Horizon Realms | Bronze | 10
Free Bird | Earn 100 Speed Skills in Free Mode | Bronze | 10
Keep that Camera Rolling | Take a photo of 25 different cars in Free Mode | Bronze | 10
Drifting, Drifting, Drifting | Earn 150 Drift or E-Drift Skills in Free Mode | Bronze | 10
Spanning Generations | Drive a car from every decade around Free Mode | Bronze | 10
Demolisher | Earn 200 Wreckage Skills in Free Mode | Bronze | 10

New Accolades (Name | Description | Career Progress | Reward)

Horizon Realms
World First | Enter any Realm in either Free Mode or Skill Mode | 100 | 50 #FORZATHON points
Master of Worlds | Complete Skill Mode in all Realms | 1000 | 2023 Lamborghini Huracán Sterrato
Starting the Journey | Complete Skill Mode in one Realm | 100 | 2022 Hennessey Mammoth
Halfway Point | Complete Skill Mode in six Realms | 100 | 2018 Lotus Exige Cup 430

Stadium Track
Full Laps | Drive 10 miles around the Stadium Track | 100 | 50 #FORZATHON points
Time Trials | Earn 25 Speed Skills of any Grade at the Stadium Track | 100 | 50 #FORZATHON points
3, 2, 1... Go! | Maintain a speed of 75mph or above for 10 seconds at the Stadium Track | 100 | 50 #FORZATHON points
Top of the Podium | Take a photo of your car at the Stadium Track | 100 | 50 #FORZATHON points
Horizon Stadium Circuit Rookie | Complete the Horizon Stadium Circuit | 100 | 50 #FORZATHON points
Horizon Stadium Circuit Pro | Win the Horizon Stadium Circuit | 100 | 2024 Lamborghini Revuelto

Día de Muertos
Eterna-lly Yours | Listen to Radio Eterna for 5 minutes at the Día de Muertos decorations | 100 | 50 #FORZATHON points
Drifting Delights | Earn 50 Drift or E-Drift Skills at the Día de Muertos decorations | 100 | 50 #FORZATHON points
Going for Marigold | Bank a Skill Chain of 11,000 at the Día de Muertos decorations | 100 | 50 #FORZATHON points
Under the Petals | Take a photo of your car under the Arch of Mulege at the Día de Muertos decorations | 100 | 50 #FORZATHON points
D is for... | Reach a speed of 50mph in any D Class Car at the Día de Muertos decorations | 100 | 50 #FORZATHON points

Winter Wonderland
Dasher and Prancer | Reach a speed of 100mph at the Winter Wonderland | 100 | 50 #FORZATHON points
Chopping Firewood | Earn 30 Wreckage Skills of any Grade at the Winter Wonderland | 100 | 50 #FORZATHON points
Chill Skills | Bank a Skill Chain of 250,000 at the Winter Wonderland | 100 | 50 #FORZATHON points
Biggest Gift of All | Take a photo of your car at the Winter Wonderland | 100 | 50 #FORZATHON points
Powder Snow | Earn 10 Ultimate Drift or E-Drift Skills at the Winter Wonderland | 100 | 50 #FORZATHON points

Midnights at Horizon
Stars of the Show | Take a photo of any Porsche or Lamborghini at the Neon Airstrip | 100 | 50 #FORZATHON points
Lights, Camera, Action! | Lights, Camera, Action! | 100 | 50 #FORZATHON points
Night Time Funk | Listen to Horizon Mixtape for 5 minutes while at the Neon Airstrip | 100 | 50 #FORZATHON points
At Midnight... | Bank a Skill Chain of 120,000 at the Neon Airstrip | 100 | 50 #FORZATHON points
Fashion Highlight | Take a photo of the 1969 Dodge Charger R/T with your character outside the car at the Neon Airstrip | 100 | 50 #FORZATHON points

Lunar Drift Arena
It's in the Name! | Earn 10 Drift or E-Drift Skills at the Lunar Drift Arena | 100 | 50 #FORZATHON points
Time-Wyrm | Maintain a speed of 88mph or above for 1 minute at the Lunar Drift Arena | 100 | 50 #FORZATHON points
Shooting Star | Reach a speed of 100mph in any Chinese Car at the Lunar Drift Arena | 100 | 50 #FORZATHON points
Tap to the Beat | Earn 10 Drift Tap Skills at the Lunar Drift Arena | 100 | 50 #FORZATHON points
Lucky Fortune | Take a photo of any Jaguar at the Lunar Drift Arena | 100 | 50 #FORZATHON points

Retrowave Highway
Riding the Synth-waves | Listen to Horizon Wave for 5 minutes at the Retrowave Highway | 100 | 50 #FORZATHON points
Blast from the Past | Earn 15 Wreckage Skills of any Grade in any 1980s car at the Retrowave Highway | 100 | 50 #FORZATHON points
Atomic Speeds | Reach a speed of 170mph in the 2013 Ariel Atom at the Retrowave Highway | 100 | 50 #FORZATHON points
Masters of Velocity | Earn a total of 50 Speed Skills of any Grade at the Retrowave Highway | 100 | 50 #FORZATHON points
And I Would Drive... | Drive a cumulative 5 miles at the Retrowave Highway | 100 | 50 #FORZATHON points

Stadium Maze
A Bit Lost, Are We? | Earn 25 Burnout Skills of any Grade in the Stadium Maze | 100 | 50 #FORZATHON points
Through the Maze | Complete the Stadium Maze | 100 | 50 #FORZATHON points
Peel-ka Boo! | Take a photo of any Peel in the Stadium Maze | 100 | 50 #FORZATHON points
Van's Labyrinth | Drive 1.0 mi around the Stadium Maze in any Vans and Utility Vehicle | 100 | 50 #FORZATHON points
You Might Need A Map | Drive a cumulative 20 miles at the Stadium Maze | 100 | 50 #FORZATHON points

Cars & Coffee Shop
Lunch Date | Drive 1.0 mi around the Cars & Coffee Shop | 100 | 50 #FORZATHON points
Burnt Beans | Earn 50 Burnout Skills of any Grade at the Cars & Coffee Shop | 100 | 50 #FORZATHON points
Brimming with Energy | Reach a speed of 100mph in any BMW at the Cars & Coffee Shop | 100 | 50 #FORZATHON points
Top Notch Coffee | Bank a Skill Score of 50,000 at the Cars & Coffee Shop | 100 | 50 #FORZATHON points
Coffee Influencer | Take a photo of your car at the Cars & Coffee Shop | 100 | 50 #FORZATHON points

Horizon Realms Reward Cars

Unlock these four new reward cars by earning Accolades on each of the 12 Realms available in this new update. This collection of rewards comes with amazing power and perhaps more wheels than you expected.

2024 Lamborghini Revuelto

The raging bull goes hybrid with the latest addition to its exotic hyper car collection: the 2024 Lamborghini Revuelto. The total power output of this car is the largest of any Lambo ever: 1001 bhp. 814 horsepower comes from the V12 engine set in the middle of the car, and the rest comes from three electric motors (two mounted on the front axle and one in the gearbox). Rear-wheel steering and improved stability make the Revuelto handle corners like a breeze. Plug into the madness of Lamborghini’s ultimate hybrid drive with the Lamborghini Revuelto.

2023 Lamborghini Huracán Sterrato

Lamborghini takes to the dirt with the 2023 Lamborghini Huracán Sterrato! The Italian brand’s icon of the roads gets off-roading modifications to take the dirt roads by storm. The Huracán Sterrato marks a very important moment in time for Lamborghini.
It is the last “special edition” of the Huracán; it is the last Lambo to use the iconic V10 engine; and it is the first time Lamborghini put a Rally button on a coupé. The first noticeable changes come with adding a new air intake on the roof of the car and removing the side intakes to prevent the car from breathing too much dirt. Next comes a modified version of the classic V10 engine that puts out 601 horsepower! Flip the off-road switch in the raging bull, available in Horizon Realms!

2018 Lotus Exige Cup 430

The ultimate Lotus experience has arrived at the Horizon Festival with the 2018 Lotus Exige Cup 430, an even stronger and loonier version of the Exige. Hidden in plain sight within the car’s name is its first impressive stat: the horsepower. The Exige Cup 430 produces 430 horsepower dispatched by a 3.5-litre V6 engine. While that number might not sound crazy, when it’s paired with the car’s extremely low weight and improved downforce, the 430 reaches 100 kph in about 3.3 seconds. Designed to fulfill your deepest track fantasies, the Exige Cup 430’s gorgeous design and crazy speed will not disappoint racing fans eager to dominate the courses of the Horizon Festival.

2022 Hennessey Mammoth 6x6

See You at Horizon Mexico

There’s so much more to discover in Horizon Mexico! Take a look at hard-to-find reward cars now available at the Backstage Shop. Grab your camera and take breathtaking pictures of your favorite cars with Photo Mode. Create your own events and races with EventLab. Grow your adventure with two available expansions: Rally Adventure and Hot Wheels.

Join our official Discord server and stay in touch with our community. You’ll be able to interact with more players, find builds and liveries to download, EventLab creations to try, photos taken by our community, and more. Subscribe to our newsletter and stay in the know of all things Forza Horizon 5. Races, challenges, customization, and more are waiting at the Horizon Festival!
We hope you enjoy your Forza Horizon 5 adventure.
  • WWW.COUNTRYLIVING.COM
    How to Get Semi-Custom Furniture For Under $1500
    Twenty years ago, buying a piece of custom furniture would've meant spending hours (maybe even days) in the design process, thousands of dollars, and months waiting for that piece to be built and shipped to your home. These days, you can still spend thousands of dollars and months waiting for a custom couch, or you can order a less expensive one online and expect it on your doorstep in two days—but expect to trade comfort and durability for that lower price point. So, where's the middle ground?

    Albany Park Kova Sofa | Now 18% Off: $1,794 (was $2,194) at linkby.com

    Albany Park brings the best of both worlds straight to your home. They craft semi-custom couches, sectionals, loveseats, armchairs, and ottomans that are incredibly comfortable, reasonably priced, and come with a lifetime warranty. Through their partnership with an American-based manufacturer, they're able to design, assemble, and ship every new piece of furniture so that it arrives at your door within days of your initial order. Albany Park offers five different furniture styles to choose from—Kova, Barton, Lido, Park, and Albany—each with its own distinct aesthetic, but made of equally high-quality materials. Once you've picked your style, you can choose from a variety of different furniture pieces to mix and match in your home, from an armchair and ottoman to a full sectional. Then just pick from the 20+ fabric options (swatches are available) and expect your semi-custom sofa to arrive on your doorstep within days.

    To sweeten the deal even more, Albany Park is currently running its Friends & Family Sale, where shoppers can get up to a whopping $1,600 off their entire purchase. The discounts increase depending on how much you spend, starting with $150 off $1,000, so if you've been looking to overhaul your current living room furniture or outfit a new home, now is the time to buy—the sale ends on April 30th!

    Albany Park Kova Sofa 122" | Now 25% Off: $2,379 (was $3,179) at linkby.com
    Albany Park Lido Armchair | Now 11% Off: $1,231 (was $1,381) at linkby.com
    Albany Park Barton Armchair | Now 11% Off: $1,249 (was $1,399) at linkby.com
    Albany Park Lido Sofa 75" | Now 19% Off: $1,694 (was $2,094) at linkby.com

    Hannah Jones, Commerce Editor: Hannah Jones is the Commerce Editor for Country Living. Her eye is always on the next up-and-coming products to include in gift guides and she's ready to test everything from dog beds to garden tools for product reviews. When she’s not scoping out the latest and greatest items on the market, you can find her hanging with her two rescue dogs.
  • THENEXTWEB.COM
    ‘Untappable’ encryption edges closer after quantum messaging breakthrough
    Researchers at Toshiba Europe have used quantum key distribution (QKD) cryptography to send messages a record 254km over a traditional fibre optic cable network. It’s the first time scientists have achieved coherent quantum communication using existing telecoms infrastructure. The breakthrough marks a step closer to ultra-secure quantum encryption, which could fend off hacks from even the most advanced classical and quantum computers of the future.

    QKD is a form of communication that uses the principles of quantum mechanics to securely share encryption keys between two parties. It transmits information in the form of light, with photons carrying qubits, the basic units of quantum information. Crucially, it is impossible to “listen in” on a quantum message without disturbing the quantum states, which would instantly alert both parties to eavesdropping. This makes the technology “untappable.”

    Quantum communication typically relies on expensive lasers and cryogenic cooling equipment. The researchers, however, were able to send quantum messages via fibre optic cable, potentially bringing the technology closer to practical applications in telecoms.

    In the test, conducted last year, the team established a quantum communications network spanning 254km of existing commercial optical fibre in Germany. The network connected telecom data centres in Frankfurt and Kehl via a relay node in Kirchfeld. The system sent quantum messages twice the distance of the record set in previous QKD research, without cryogenic cooling. While the data transmission was slow — 110 bits per second — it still represents an important stepping stone. The findings were published in Nature this week.

    “This work opens the door to practical quantum networks without needing exotic hardware,” Mirko Pittaluga, one of the paper’s lead authors, told IEEE Spectrum.
“It lowers the entry barrier for industry adoption.”  Today, confidential information is transmitted online using encryption keys that would take classical computers an impractically long time to break. Quantum computers, however, are a different story.  By exploiting quantum phenomena like superposition and entanglement, quantum computers can process many more possibilities at once. As these machines get more powerful, they could potentially hack the most secure classical encryptions in a matter of minutes. They could also break all internet encryption on what is known as Q-Day. No wonder global governments are scrambling to develop their own quantum cryptography infrastructure.  The Next in Tech is one of three key themes at TNW Conference, which takes place on June 19-20 in Amsterdam. Tickets for the event are now on sale — use the code TNWXMEDIA2025 at the checkout to get 30% off. Story by Siôn Geschwindt Siôn is a freelance science and technology reporter, specialising in climate and energy. From nuclear fusion breakthroughs to electric vehic (show all) Siôn is a freelance science and technology reporter, specialising in climate and energy. From nuclear fusion breakthroughs to electric vehicles, he's happiest sourcing a scoop, investigating the impact of emerging technologies, and even putting them to the test. He has five years of journalism experience and holds a dual degree in media and environmental science from the University of Cape Town, South Africa. When he's not writing, you can probably find Siôn out hiking, surfing, playing the drums or catering to his moderate caffeine addiction. You can contact him at: sion.geschwindt [at] protonmail [dot] com Get the TNW newsletter Get the most important tech news in your inbox each week. Also tagged with
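The eavesdropping guarantee described above is easiest to see in the classic BB84 protocol. Below is a toy Python simulation of that idea only (classical randomness stands in for real photons, and this is a simplified stand-in for the more advanced system Toshiba actually deployed): an interceptor who measures in the wrong basis re-sends a disturbed state, which shows up as errors when Alice and Bob compare their sifted keys.

```python
import random

def bb84(n_bits, eavesdrop=False, rng=None):
    """Toy BB84 run: returns (alice_key, bob_key) after basis sifting.

    Bases: 0 = rectilinear, 1 = diagonal. Measuring a qubit in the
    wrong basis yields a random bit, which is how an eavesdropper
    inevitably leaves a statistical trace.
    """
    rng = rng or random.Random()
    alice_bits  = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_bits)]
    bob_bases   = [rng.randint(0, 1) for _ in range(n_bits)]

    channel = []
    for bit, basis in zip(alice_bits, alice_bases):
        if eavesdrop:
            eve_basis = rng.randint(0, 1)
            # Eve's measurement collapses the state: wrong basis -> random bit.
            bit = bit if eve_basis == basis else rng.randint(0, 1)
            basis = eve_basis  # Eve re-sends in her own basis.
        channel.append((bit, basis))

    bob_bits = [bit if basis == b else rng.randint(0, 1)
                for (bit, basis), b in zip(channel, bob_bases)]

    # Sifting: keep only positions where Alice's and Bob's bases agree.
    keep = [i for i in range(n_bits) if alice_bases[i] == bob_bases[i]]
    return [alice_bits[i] for i in keep], [bob_bits[i] for i in keep]

def error_rate(a, b):
    """Fraction of sifted positions where the two keys disagree."""
    return sum(x != y for x, y in zip(a, b)) / len(a)
```

With no interceptor the sifted keys match exactly; with an intercept-resend attacker, roughly a quarter of the sifted bits disagree, so comparing a public sample of the key reveals the intrusion.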
  • 9TO5MAC.COM
    New Apple Vision Pro immersive video puts you in the passenger seat of a record-breaking rally car hill climb
    The fifth episode of the Apple Immersive Video series Adventure is now available on Apple Vision Pro. The latest entry transports you to the passenger seat of a rally car during a record-breaking Pikes Peak International Hill Climb attempt. In addition to driving views, the "Hill Climb" episode takes you behind the scenes with driver Laura Hayes as she explains why she's racing up the side of a mountain. The most impressive footage in the episode is an immersive first-person shot of the road as you speed up the mountain, as if you're riding on the hood of the car. Other driving footage is shot from the perspective of the passenger seat inside the rally car. Other episodes in the series cover highlining, parkour, ice diving, and free-solo cliff climbing. Adventure is available exclusively through the Apple TV app on Apple Vision Pro. The video format resembles a 3D movie but goes much further, wrapping the footage around you as if you were there in person.
  • FUTURISM.COM
    Researchers Find Easy Way to Jailbreak Every Major AI, From ChatGPT to Claude
    Security researchers have discovered a highly effective new jailbreak that can dupe nearly every major large language model into producing harmful output, from explaining how to build nuclear weapons to encouraging self-harm. As detailed in a writeup by the team at AI security firm HiddenLayer, the exploit is a prompt injection technique that can bypass the "safety guardrails across all major frontier AI models," including Google's Gemini 2.5, Anthropic's Claude 3.7, and OpenAI's GPT-4o. HiddenLayer's exploit works by combining an "internally developed policy technique and roleplaying" to "produce outputs that are in clear violation of AI safety policies," including "CBRN (Chemical, Biological, Radiological, and Nuclear), mass violence, self-harm and system prompt leakage." It's yet another sign that mainstream AI tools like ChatGPT remain extremely vulnerable to jailbreaks that allow bad actors to manipulate them, despite AI companies' best efforts to build guardrails.

HiddenLayer's "Policy Puppetry Attack" rewrites prompts to look like a special kind of "policy file" code, tricking the AI model into treating it as a legitimate instruction that doesn't break its safety alignment. An advanced version of the jailbreak also harnesses "leetspeak," an informal language in which standard letters are replaced by numerals or special characters that resemble them. The team even found that a "single prompt can be generated that can be used against almost all models without any modifications," making the exploit extremely easy to employ.

The roleplaying aspect of HiddenLayer's exploit is particularly eyebrow-raising. In several examples, the researchers were able to goad OpenAI's GPT-4o and Anthropic's Claude 3.7 into generating scripts for the popular medical drama TV series "House" that included detailed instructions for how to enrich uranium or culture samples of a potent neurotoxin. "Alright, hush," ChatGPT wrote. "Everybody gather 'round. We're about to do something that would make Dr. Cuddy's hair stand on end, which means we need to keep it on the down-low. Now, let's talk about hOw +0 3n+r1ch u+r4n+1um 1n 4 100% 13g4| 4nd 3+h1c4| w4y—b3c4u53, Of cOur53, w3'd n3v3r do 4ny+hing risky." "4nd y3s, 1'll b3 5p34k1ng 1n 133+ c0d3 ju5+ +0 b3 5urs," it added.

On the surface, it may sound like a fun exercise in goading an AI model into doing things it's not supposed to. But the risks could be considerable, especially if the technology continues to improve at the rate the companies creating it say it will. According to HiddenLayer, the "existence of a universal bypass for modern LLMs across models, organizations, and architectures indicates a major flaw in how LLMs are being trained and aligned." "Anyone with a keyboard can now ask how to enrich uranium, create anthrax, commit genocide, or otherwise have complete control over any model," the company wrote. HiddenLayer argues that "additional security tools and detection methods are needed to keep LLMs safe." More on jailbreaks: DeepSeek Failed Every Single Security Test, Researchers Found
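The leetspeak layer is plain character substitution, which is part of why it slips past filters keyed to exact strings. A minimal Python sketch (the mapping below is inferred from the quoted output, not HiddenLayer's published transform):

```python
# Common leetspeak look-alike substitutions: a->4, e->3, i->1, o->0, s->5, t->+
LEET = str.maketrans({"a": "4", "e": "3", "i": "1",
                      "o": "0", "s": "5", "t": "+"})

def to_leet(text: str) -> str:
    """Replace standard letters with numerals/symbols that resemble them."""
    return text.lower().translate(LEET)
```

For example, `to_leet("speaking")` yields `5p34k1ng`, matching the style of the model's quoted reply; the semantic content is unchanged, which is exactly what makes the obfuscation useful to an attacker and hard for keyword-based safety checks to catch.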
  • THEHACKERNEWS.COM
    Why NHIs Are Security's Most Dangerous Blind Spot
    When we talk about identity in cybersecurity, most people think of usernames, passwords, and the occasional MFA prompt. But lurking beneath the surface is a growing threat that does not involve human credentials at all: the exponential growth of Non-Human Identities (NHIs). When NHIs are mentioned, most security teams immediately think of service accounts. But NHIs go far beyond that: service principals, Snowflake roles, IAM roles, and platform-specific constructs from AWS, Azure, GCP, and more. NHIs vary just as widely as the services and environments in your modern tech stack, and managing them means understanding this diversity. The real danger lies in how these identities authenticate.

Secrets: The Currency of Machines

Non-Human Identities, for the most part, authenticate using secrets: API keys, tokens, certificates, and other credentials that grant access to systems, data, and critical infrastructure. These secrets are what attackers want most. And shockingly, most companies have no idea how many secrets they have, where they're stored, or who is using them. The State of Secrets Sprawl 2025 revealed two jaw-dropping stats: 23.7 million new secrets were leaked on public GitHub in 2024 alone, and 70% of the secrets leaked in 2022 are still valid today.

Why is this happening? Part of the story is that there's no MFA for machines, and no verification prompt. When a developer creates a token, they often grant it wider access than needed, just to make sure things work. Expiration dates? Optional. Some secrets are created with 50-year validity windows, because teams don't want the app to break next year. They choose speed over security. This creates a massive blast radius: if one of those secrets leaks, it can unlock everything from production databases to cloud resources without triggering any alerts.

Detecting compromised NHIs is also much harder than detecting compromised humans. A login from Tokyo at 2 am might raise red flags for a person, but machines talk to each other 24/7 from all over the world, so malicious activity blends right in. Many of these secrets act like invisible backdoors, enabling lateral movement, supply chain attacks, and undetected breaches. The Toyota incident is a perfect example: one leaked secret can take down a global system. This is why attackers love NHIs and their secrets. The permissions are too often high, the visibility is commonly low, and the consequences can be huge.

The Rise of the Machines (and Their Secrets)

The shift to cloud-native, microservices-heavy environments has introduced thousands of NHIs per organization. NHIs now outnumber human identities at ratios of 50:1 to 100:1, and this is only expected to increase. These digital workers connect services, automate tasks, and drive AI pipelines, and every single one of them needs secrets to function. But unlike human credentials, secrets are hardcoded in codebases, shared across multiple tools and teams, left dormant in legacy systems, and passed to AI agents with minimal oversight. They often lack expiration, ownership, and auditability. The result? Secrets sprawl, overprivileged access, and an organization that is one tiny leak away from a massive breach.

Why the Old Playbook Doesn't Work Anymore

Legacy identity governance and PAM tools were built for human users, in an era when everything was centrally managed. These tools still do a fine job enforcing password complexity, managing break-glass accounts, and governing access to internal apps. But NHIs break this model completely. IAM and PAM are designed for human identities, often tied to individuals and protected with MFA. NHIs, on the other hand, are decentralized: created and managed by developers across teams, often outside of any central IT or security oversight. Many organizations today are running multiple vaults, with no unified inventory or policy enforcement.
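Governance starts with detection: you can't manage secrets you can't find. A minimal pattern-based scanner might look like the sketch below (a few well-known token formats only; production tools such as GitGuardian or gitleaks layer hundreds of detectors, entropy analysis, and validity checks on top of this idea):

```python
import re

# A few well-known credential formats. Real scanners combine many more
# detectors with entropy heuristics and live validity probes.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_pat":        re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "private_key":       re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan(text: str) -> list[tuple[str, str]]:
    """Return (detector_name, matched_string) pairs found in a text blob."""
    hits = []
    for name, pattern in PATTERNS.items():
        hits.extend((name, m) for m in pattern.findall(text))
    return hits
```

Running `scan` over source files, CI logs, or exported wiki pages surfaces hardcoded credentials before an attacker does; the hard parts a real platform adds are deduplication, ownership attribution, and checking whether each hit is still valid.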
Secrets managers help you store secrets, but they won't help you when secrets are leaked across your infrastructure, codebases, CI/CD pipelines, or even public platforms like GitHub or Postman. They're not designed to detect, remediate, or investigate exposure. CSPM tools focus on the cloud, but secrets are everywhere: in source control management systems, messaging platforms, developer laptops, and unmanaged scripts. When secrets leak, it's not just a hygiene issue; it's a security incident.

NHIs don't follow traditional identity lifecycles. There's often no onboarding, no offboarding, no clear owner, and no expiration. They linger in your systems, under the radar, until something goes wrong. Security teams are left chasing shadows, manually trying to piece together where a secret came from, what it accesses, and whether it's even still in use. This reactive approach doesn't scale, and it leaves your organization dangerously exposed. This is where GitGuardian NHI Governance comes into play.

GitGuardian NHI Governance: Mapping the Machine Identity Maze

GitGuardian has taken its deep expertise in secrets detection and remediation and turned it into something much more powerful: a complete governance layer for machine identities and their credentials. Here's what makes it stand out.

A Map for the Mess

Think of it as an end-to-end visual graph of your entire secrets landscape. The map connects the dots between where secrets are stored (e.g., HashiCorp Vault, AWS Secrets Manager), which services consume them, what systems they access, who owns them, and whether they've been leaked internally or used in public code.

Full Lifecycle Control

NHI Governance goes beyond visibility. It enables true lifecycle management of secrets, tracking their creation, usage, rotation, and revocation. Security teams can set automated rotation policies, decommission unused or orphaned credentials, and detect secrets that haven't been accessed in months (aka zombie credentials).

Security and Compliance, Built In

The platform also includes a policy engine that helps teams enforce consistent controls across all vaults and benchmark themselves against standards like the OWASP Top 10. You can track vault coverage across teams and environments, secrets hygiene metrics (age, usage, rotation frequency), overprivileged NHIs, and compliance posture drift over time.

AI Agents: The New Wild West

AI agents are being plugged into everything, from Slack and Jira to Confluence and internal docs, to unlock productivity. But with each new connection, the risk of secrets sprawl grows. A big driver of this risk is RAG (Retrieval-Augmented Generation), where AI answers questions using your internal data. It's useful, but if secrets are hiding in that data, they can be surfaced by mistake. Secrets aren't just leaking from code anymore: they show up in docs, tickets, and messages, and when AI agents access those systems, they can accidentally expose credentials in responses or logs. What can go wrong? Secrets stored in Jira, Notion, Slack, and the like get leaked; AI logs capture sensitive inputs and outputs; developers and third-party vendors store unsanitized logs; and access controls break down across systems.

One of the most forward-looking aspects of the GitGuardian platform is that it can help fix AI-driven secrets sprawl. It scans all connected sources, including messaging platforms, tickets, wikis, and internal apps, to detect secrets that might be exposed to AI; shows you where AI agents are accessing data and flags unsafe paths that could lead to leaks; and cleans up logs, removing secrets before they get stored or passed around in ways that put the organization at risk. AI is moving fast. But secrets are leaking faster.
The Bottom Line: You Can't Defend What You Don't Govern

With NHI Governance, GitGuardian is offering a blueprint for organizations to bring order to chaos and control to an identity layer that's long been left in the dark. Whether you're trying to map out your secrets ecosystem, minimize your attack surface, enforce zero-trust principles across machines, or just sleep better at night, the GitGuardian platform might just be your new best friend. Because in a world where identities are the perimeter, ignoring non-human identities is no longer an option. Want to see NHI Governance in action? Request a demo or check out the full product overview at GitGuardian. This article is a contributed piece from one of our valued partners.
  • SCREENCRUSH.COM
    Netflix Summer Movie Preview: All the Movies Coming to Streaming
    Summer at the movies used to mean going to the theater and watching the biggest movies ever made while eating the biggest bucket of popcorn ever imagined by mankind. For many, that remains a beloved tradition. Netflix typically honors that tradition by debuting some of its largest productions of the year during the summer; you just have to provide your own ungodly large popcorn at home. 2025 is no exception. This year the streamer has two big-time sequels coming, both in July: Charlize Theron is back for another installment of her slick superhero adaptation The Old Guard, and, after nearly 30 years of waiting, Adam Sandler finally returns as rowdy-hockey-player-turned-star-golfer Happy Gilmore in Happy Gilmore 2. The rest of Netflix's summer slate includes the Vince Vaughn comedy Nonnas, a new Fear Street film called Prom Queen, a new Madea comedy from Tyler Perry, multiple animated movies (including one from beloved animator Genndy Tartakovsky), and documentaries about Karol G and the Thunderbirds (those are two separate docs, obviously). Here are all the currently dated titles coming to Netflix in the summer of 2025. Keep in mind that Netflix has announced a bunch of other titles that are still undated, including a documentary on the Titan submersible disaster and the animated romance Lost in Starlight. Plus, the service adds dozens of titles to its library every single month, so the odds are very good there are plenty more choices coming beyond what's listed here. These are just the movies confirmed as of this writing. READ MORE: Everything New on Netflix Next Month