• All Jizo Statue locations in Assassin's Creed Shadows
    www.polygon.com
    Jizo Statues are small, unique collectibles in Assassin's Creed Shadows which you need to find to complete Mayu's quest as one of the romance options. When you find a Jizo Statue, you can honor it by leaving a fruit as an offering. They can be found on the main roads all over the nine regions and, although you will find most of them naturally as you progress in Assassin's Creed Shadows, you'll need to visit less inviting corners of the map to spot some of them. In this Assassin's Creed Shadows guide, we'll show you where to find all Jizo Statues and explain what you get for doing so.

How to find Jizo Statues in Assassin's Creed Shadows

Jizo Statues are unique collectibles found in every region of Assassin's Creed Shadows. There are a total of 69 Jizo Statues in the game, and finding them is the most difficult part of completing this challenge. You will find all of the Jizo Statues on the roads in the game's regions. If you passed by one and haven't seen it, don't worry. An icon of a small statue appears when you get near them, and it stays on the world map until you complete your offering. Once you find one, you just need to interact with it to trigger a short animation of you honoring the Jizo Statue.

Each Jizo Statue you find gives you 50 XP. Gaining a little bit of experience isn't the only reason to look for these statues, though. Finding all of them is a mandatory step to complete the side mission Mayu's Offerings, which unlocks one of Naoe's romance routes.

Below, see where to find all Jizo Statues in Assassin's Creed Shadows, organized by region in the general order in which you go through them for the main story.

Izumi Settsu Jizo Statue locations

There are eight Jizo Statues in Izumi Settsu, and they're mostly close to places you need to visit as you progress through the main story. To reach the ones on the east side of the region, you can follow the main road leaving from Katano.
To find the one in the north, across the river, use the road which connects Eguchi Crossing and Tatsuki.

Yamashiro Jizo Statue locations

There are six Jizo Statues in Yamashiro. Although most of them are around Kyoto, there is one statue on the southeast side that might take you some time to reach. If you've already unlocked the Todaiji Kakurega in Yamato, you can take the road and head north from there. For the remaining ones, just look for the roads leaving Kyoto in the direction you need to go and you will find the statues.

Omi Jizo Statue locations

Omi is a huge area, and you'll have to cover long distances to find its eleven Jizo Statues. Your first contact with the region is through the southern side. Using the Seta Bridge as a reference, you can travel via the main road toward the northwest or northeast side of the region, going around Lake Biwa. Nagahama is the best starting point for the remaining four at the top of the map.

Iga Jizo Statue locations

There are only three Jizo Statues in Iga, making it the quickest region to cover. The first ones are between Mibuno Vale and Ichinomiya. It's easier to start from the former, using the roads to go southwest. Next, follow the river south to find the Hinotani Shrine, where the last statue is.

Yamato Jizo Statue locations

The seven Jizo Statues located in Yamato are spread out all over this region. You can reach the one up to the north by starting from Koriyama and taking the main road. Another tricky one to reach is the Jizo Statue to the east, next to the river. Depending on where you are in the game, you might have unlocked the Mitsue Kakurega, which saves you some time. Otherwise, you'll need to start from Gose and try to enjoy the long trip.

Wakasa Jizo Statue locations

Wakasa is home to 10 Jizo Statues. Many of them are easily reached by taking the roads out of Obama. This is the best place to start, even to reach the statue to the east in Kumagawa Juku.
Now, for the northern portion of the map, there isn't much to be done. If you have already visited Sotomo Gate, you can save some time by taking the road south and west to find the other statues.

Harima Jizo Statue locations

Harima is one of the last areas you explore during the main story, and you can find seven Jizo Statues in it. If you're just entering Harima from Izumi Settsu, we advise you to start at Senri Hills and head to The Warfield. From there, you have access to the roads that lead to four statues. To reach the remaining three statues on the left side of the map, you should use Himeji as a reference and start from there.

Kii Jizo Statue locations

Kii will force you to traverse long distances to honor its 10 Jizo Statues. Entering the area from Izumi Settsu, you have two options: you can either follow the coast or go south from Kawachi Heights. We suggest the latter, since it puts you close to one of the more distant statues.

Your next step should be following the coast. There is a road that covers all of it and can take you to the remaining statues. To reach the statue in Iseji Trails on the east side, you can use the Riverside Kakurega if you have access to it, or just ride your horse to get there.

Tamba Jizo Statue locations

With seven Jizo Statues to find in Tamba, you can't avoid exploring large portions of the region. Reaching the four statues on the bottom part of the map is easier if you start from Takeda; all the roads you need are connected to this area. Fukuchiyama is the main reference on the northern side. Teleport there and use its roads to reach the other three statues.

For more Assassin's Creed Shadows guides, see our running lists of Lost Pages, Kuji-kiri, and armor locations. Or see our full Assassin's Creed Shadows walkthrough, and our guides on how to get all companions and romance options.
  • Get dozens of Delta Green's Ennie Award-winning stories for just $25
    www.polygon.com
    Drawing from inspiration like The X-Files, Control, Call of Cthulhu, and other conspiracy-laden works of fiction, Delta Green is a TTRPG full of sleeper agents, secretive institutions, and otherworldly phenomena, with a healthy dose of trauma sprinkled in for good measure. If this all sounds like a fun weekend, you can currently get everything you need to start playing Delta Green, plus over a dozen pieces of supplementary content, for just $25 at Humble.

This bundle includes PDF versions of the Delta Green: Agent's Handbook and Handler's Guide, as well as the Need to Know quickstart guide and the Handler's Screen, which contains handy reference tables for whoever is running your game. Several of the rulebooks and other materials featured in this bundle are also included as modules for the Roll20 and Foundry Virtual Tabletop systems. You'll also get seven digital asset packs to create your own in-game handouts and other material to bring your original campaigns to life. Besides the rulebooks and other source material, this bundle also includes seven collections of short stories set in the world of Delta Green, which could be used to inspire the setting for your next campaign.

As with other Humble bundles, a portion of each sale goes to benefit a nonprofit. In this case, part of your purchase will benefit Direct Relief, a foundation that provides disaster aid in resource-poor communities in the United States and across the world. You can use the Adjust Donation drop-down menu on the right side of the bundle page to adjust how the funds from your sale are distributed.
  • Building and calibrating trust in AI
    uxdesign.cc
    How to manage the inherent uncertainty of AI.

Figure 1: The trust continuum: No trust is harmful, but overtrust is dangerous. You need to pull your users into the golden middle of calibrated trust.

Trust makes relationships go round, whether between people, businesses, or the products we rely on. It's built on a mix of qualities like consistency, reliability, and integrity. When any one of these breaks, the relationship cracks with it. A friend who disappears for a year, a car that won't start half the time, a teammate who never pulls their weight: trust thins out, and we start looking for alternatives.

AI is a tough nut to crack when it comes to trust. By nature, it's probabilistic, uncertain, and makes mistakes. Earning user trust takes real effort, and once you've earned it, you often need to dial it back. In AI, overtrust is dangerous. If users accept AI outputs by default, errors will snowball into bad decisions with real-world consequences [1][2]. Your users should become responsible collaborators who calibrate their trust by questioning, adjusting, and taking ownership of how they use the system (figure 1).

So, how do you build appropriate trust into your AI system? When working with AI teams, I often see trust reduced to model accuracy: users don't trust the system because it makes mistakes. The assumption is that trust is a technical problem, and engineers or data scientists need to fix it.

But that's only part of the picture. In reality, trust is primarily built through how users understand and experience your product: how well it understands their needs, communicates its value, and supports them in the moment. This article focuses on the user-facing dimensions of trust, namely the addressed use case, value creation, and user experience.

Figure 2: Building trust through the user-facing components of your AI system (cf. this article for the full model of AI systems).

We'll explore practical techniques and design patterns for building and calibrating trust.
I will illustrate these with insights from a real-world project where we used AI to support R&D teams at a major automotive manufacturer.

Use case: Optimize for relevance and impact

Choosing the right use case is the first strategic step towards trust. Today, AI has become fairly accessible, and we see generic AI features like chat pop up in every other product. Often, they feel generic, awkward, and disconnected from real user needs. If you don't want to fall into this me-too bucket, you need to show users that you understand them and that your AI is there to help, not distract.

At Anacode, we recently partnered with the R&D division of a major automotive manufacturer. Our goal was to track trends and emerging technologies to support the company's innovation efforts. We kicked things off with a motivated group of internal AI champions, but soon, we ran into skepticism from the wider team. These were seasoned engineers and researchers, people who pride themselves on knowing their domain inside and out. The last thing they wanted was a black box spitting out unsolicited advice. But a tool that subtly enhances their expertise, boosts their outcomes, and improves their standing in the company? For sure, that would be interesting.

Solving the right problem

The problem of our users wasn't a lack of intelligence or insight; it was signal overload. Every week, new patents, startups, funding rounds, and papers landed in their inboxes and newsfeeds. They needed help seeing what actually mattered, so we framed the system as a radar that could spot early signals, surface momentum, and point experts toward what's worth investigating. This directly addressed their pain points and brought initial buy-in.

More generally, our use case needs to check two boxes:

It must be a problem where AI can shine and create significant value (cf. chapter 2 of my book The Art of AI Product Development on discovering the best AI opportunities).

It must be perceived as a problem by the users.
When AI steps into a space users are frustrated by (time-consuming, noisy, repetitive tasks), it's appreciated. It supports users and clears space to focus on the parts they care about, such as judgment, creativity, and strategy.

Seamlessly integrating into existing workflows

Your AI should support your users, rather than disrupting their workflows and adding cognitive burden. In our case, the R&D teams already had well-worn (though not always efficient) processes and artefacts: briefs, reports, technical review decks, etc. A tool that introduced friction wouldn't stick.

Here's a straightforward way to plan for seamless integration:

Start by mapping the existing, human process.

Pinpoint where AI can add real value, whether by automating grunt work, surfacing hidden insights, or speeding up analysis.

Build a tailored AI journey around these moments.

Plan to deliver outputs in the formats users already trust.

In our case, the obvious move was to start with the first step in the process, namely technology landscape monitoring, where large quantities of data need to be combined, structured, and distilled into insights (figure 3).

Figure 3: Mapping AI functionality against the existing human process

Align impact with trust

The higher the stakes of your AI, the more carefully you need to build and support trust. A bad movie recommendation on Netflix is harmless. But a flawed R&D insight can lead to a negative impact down the road, especially if it is formulated in the upbeat, self-confident language of a modern LLM. Initially, our system monitored and quantified trends. That was a relatively safe role, which supported early adoption. But as we ventured into evaluating and recommending specific innovations, skepticism increased. Experts questioned how the AI derived its suggestions and whether it understood real industry challenges.

To mitigate this, we started with low-key features like highlighting competitor innovations rather than advising on the company's own R&D activities.
Over time, the AI got more competent, and we expanded its scope. Framing is also important: by talking about innovation ideas rather than recommendations, we made clear that users were still in the driving seat. The expert responsibility of assessing and refining the ideas was still on their side.

Your trust capital is built and reinforced in a virtuous cycle. As users build trust in small features, they will be more likely to accept more AI over time [8]. On the other hand, as your AI system keeps improving, it will also be more likely to live up to growing expectations as you increase its scope.

Value

When your product gets out, it should hook users with immediate, tangible value. That initial win opens the door for engagement. But trust doesn't build overnight: you need to keep delivering and raising the bar. As users rely more on your system, value must compound. This ongoing progress is what helps offset AI's inherent imperfections: its uncertainty, occasional errors, and evolving boundaries.

Provide a fast track to measurable value

AI can create value along different dimensions, such as productivity, process improvement, and emotional benefits (see this post for more details). In my experience, starting with simple productivity and efficiency benefits is the best way to get your foot in the door. Find ways to save time and cost for your users. This is measurable, tangible, and often immediately appreciated. Once you have built an initial layer of trust, you can expand to more subtle benefits.

In our example, users struggled with monitoring large quantities of data over time. Thus, we started by providing verified and relevant bits of insight, leaving users wanting more. These were concise and factual summaries of relevant trends, for example: "Solid-state battery patents are up 40% year-over-year. Toyota and Hyundai lead the activity. Capital is shifting toward next-gen anode materials."
Backed by relevant data, this insight quickly made it into internal reports and attracted more users. Not because it was new information (everyone knew solid-state was heating up), but because it was framed cleanly and reliably and saved hours of hunting and verification. That's where AI gains its right to play: cost or time savings that are apparent in existing workflows and decisions. By contrast, if experts have to engage in a long discovery process to see the value of your AI tool, they'll likely drop it ("I could just as well search for this data myself").

Demonstrate integrity with realistic communication

This principle is simple, but hard to stick to. In a world of marketing superlatives and inflated promises, being honest can feel scary. But in the long run, it pays off. Avoid the trap of overpromising, whether it is about the accuracy, scope, or value of your AI. If the product doesn't hold up, users will get frustrated and disengage ("We've seen this before: big claims, no follow-through."). In the UX section, you will also learn how to communicate limitations and uncertainty inside the product using design patterns like confidence scores and oversight prompts.

Inject domain expertise into your system

Especially in B2B, trust crumbles fast when AI doesn't get its domain. If your system talks in generic terms or misses the nuance experts expect, forcing them to constantly edit the outputs, they'll walk away. At the latest, this will happen when they think something along the lines of "I'm spending as much time fixing this as I would doing it from scratch."

In our case, early versions of the system flagged autonomous vehicle software as a key trend. This was correct, but it felt shallow and obvious. After fine-tuning for industry-specific relevance, the system started surfacing more granular signals, like anode-free lithium battery R&D, or the use of self-supervised learning in in-cabin driver monitoring systems.
It got proficient at using the jargon of its users, and the insights could be directly reused.

My article "Injecting domain expertise into your AI system" provides a comprehensive overview of the methods to customize your AI system for specific domains. There is a learning curve to this: your AI doesn't need to act like a domain expert from the beginning. Often, giving users an opportunity to enrich your system with their domain knowledge will not only improve performance but also create a sense of ownership and deepen engagement. This leads us to the next trust-building technique, namely the visible compounding of value through continuous improvement.

Commit to continuous improvement

One of the best practices of successful AI development is to launch early and collect relevant data and feedback for improvement. Releasing an imperfect system can be scary, but in the end, AI is never perfect, so get used to the idea. In the first three months, our system surfaced several useful signals, but many insights were still not to the point. While this was enough to demonstrate initial value, we were clearly in a race against time. Once users use your AI more frequently, they develop a relationship of their own. Their expectations grow as they become more proficient with the tool and want to rely on it in their daily work.

You need to catch this momentum and continuously optimise your system. Fortunately, in most cases, you can create feedback mechanisms that not only point you to the shortcomings of your system, but also allow you to collect meaningful training data to improve it over time. The section on feedback and learning will dive into different ways to collect feedback from your users.

The use case you address and the value you provide are at the core of your AI product strategy and positioning. They motivate your users to buy and use your application. But the trust muscle is built over time.
On a day-to-day basis, your users will be interacting with your AI via the user interface, and this is where the real fun begins.

User experience

There are plenty of design patterns you can use to help users build and calibrate their trust. These relate to transparency, control, and feedback collection. In the end, they boil down to mitigating the uncertainty and the risks of errors made by the AI.

Transparency: Aligning user expectations

Users build mental models of the products they interact with [7]. Trust will break if it turns out that these models are not aligned with reality. Now, in AI, a lot of the action happens under the hood, away from the eyes of your users. An AI app that relies on ChatGPT will behave differently than one that uses a deeply customised LLM, though the interaction will look the same on the surface. To avoid mismatched expectations, you need to crack open the black box of your system and explain its relevant workings to your users. Here are some design patterns to achieve this:

In-context explanations: Help users understand how an output was generated, right where they see it. For example, a system tracking emerging technologies might explain a trend by breaking down its inputs, like recent patent spikes, venture capital activity, and competitor filings. In graphical interfaces, this can happen with interactive elements like tooltips. In conversational interfaces, you provide the explanation directly in the conversational flow.

Footprints / chains of thought: Let users trace how the AI reached its conclusion. In our case, clicking on a suggested innovation revealed its source trail: the documents, events, and filters that led to its prioritization.

Figure 4: Disclose the AI's thinking to show how it reached its conclusion

Caveats and disclaimers: These can be used to highlight known limitations of an output or dataset.
If data coverage is sparse, or signals conflict, clearly state that early so users don't make decisions based on flawed assumptions.

Figure 5: Proactively communicate the limitations of your system

Citations: Link outputs to source data, whether those are documents, reports, or APIs, so users can verify the AI output themselves.

Figure 6: Display the raw sources, allowing users to check the data themselves

Everboarding and guidance: Keep users informed as the system evolves. When features or logic change, provide lightweight tooltips or embedded guidance to explain what's different and why it matters.

Verification-focused explanations: Instead of just explaining how the output was formed, help users evaluate its reliability. Include self-critiques ("This signal may be inflated by cross-market overlap") or alternative interpretations to activate user judgment.

When thinking about transparency, pay special attention to communicating the uncertainty of your AI. This will help users calibrate their trust and spot errors. Here are some techniques:

Confidence scores: Use simple, visible indicators, like percentages or a low/moderate/high scale, to signal how much confidence the system has in a result.

Figure 7: Confidence scores allow you to highlight results that require further verification

For text content, you can use both visual and linguistic indicators of uncertainty. For example, you can visually highlight phrases, numbers, or facts that need validation.

Figure 8: Inline highlights can signal bits of uncertainty, keeping the rest of the insight intact

You can also prompt the generating LLM to use uncertain language to hedge questionable content ("This may suggest", "Data is inconclusive", "Further validation recommended", etc.).

AI explanations are a living topic: they will evolve quickly as you start getting feedback, questions, and complaints from your users.
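As a rough sketch of the confidence-score pattern, a raw model probability can be bucketed into the low/moderate/high scale mentioned above before it reaches the user. The thresholds here are illustrative assumptions, not values from the project:

```python
def confidence_label(score: float) -> str:
    """Map a model confidence in [0, 1] to a user-facing label."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("confidence must be between 0 and 1")
    if score >= 0.8:
        return "high"
    if score >= 0.5:
        return "moderate"
    return "low"  # anything below 0.5 should prompt verification

# Example: annotate an insight with its confidence label
insight = {"text": "Solid-state battery patents are up 40% YoY", "score": 0.62}
insight["confidence"] = confidence_label(insight["score"])  # "moderate"
```

Showing the coarse label (and reserving the raw percentage for a tooltip) keeps the signal legible without implying false precision.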
I prepared a cheatsheet that you can have at hand when updating your explanations, which you can download here.

Control: Putting users into the driving seat

Transparent outputs have little value if users cannot act on the additional information. To establish collaboration between users and AI, you need to give them control and a sense of responsibility. Here are some design patterns that can be applied before, during, and after the AI's job:

Pre-task setup: Let users configure the scope of AI tasks (e.g., data sources, timeframes, entity filters) before launching analysis. This increases relevance and avoids generic results.

Adjustable signal weighting: Let users decide how much weight to give different sources or criteria. In trend monitoring, one user might prioritize venture activity; another, academic citations. Explain the impact of the signals.

Figure 9: Allow users to decide how much weight they want to put on specific data sources

Inline editing and actions: Make parts of the output editable or replaceable in place. Users should be able to refine vague terms or tweak filters without rerunning the entire task.

Figure 10: Proficient users can be given the right to override AI suggestions

Emergency stop (abort generation): Allow users to halt generation processes if they notice early errors or misalignment. As most of us have experienced when using ChatGPT & Co., this can save a lot of time when the user sees that the AI got off track.

Intentional friction: Integrate friction to activate critical thinking. Challenge users with questions like: "What assumptions does this result rely on?" or "Could this be explained by noise?" These prompts for critical thinking are also called cognitive forcing functions (CFFs; cf. [5]). They help calibrate reliance, especially early in the trust journey.

Error management: Mitigate the risks of AI mistakes

Even if your engineers optimize AI performance to death, mistakes will still slip through to your users.
You need to turn your users into collaborators who help you catch these failures and turn them into learning opportunities. Here are some best practices to keep them aware and alert:

Highlight error potentials during onboarding: Set realistic expectations about model performance. "About 1 in 10 results may need review" is better than pretending the system is flawless. Communicate the strengths and weaknesses of the system, and introduce users to both strong and flawed outputs. This shapes realistic expectations and shows how collaboration with AI works in practice.

Inline feedback mechanisms: Enable users to flag issues directly where they occur: misclassifications, false positives, outdated data.

Immediate acknowledgment and recovery: After feedback is submitted, respond visibly and offer a fix ("Thanks for flagging. Would you like to regenerate the output without that item?"). Communicate the impact of the user's feedback as precisely as possible ("Your feedback will be integrated in the next release at the end of the month.") to encourage users to build a feedback muscle.

Feedback and learning: Let the system grow with the user

Trust deepens when users feel they have a voice. If they know they can shape the system, and see that their input drives real improvements, they stop seeing the AI as a black box and start treating it as a partner. That's how you turn passive users into co-creators.

Start by capturing implicit feedback: Where do users zoom in? What do they edit or delete? What do they consistently ignore? These behavioral signals are gold for tuning relevance. Then, layer in explicit feedback mechanisms:

Binary feedback: A simple thumbs up/down gives you an instant signal on output quality. It's low-effort and useful at scale.

Figure 11: The ubiquitous thumbs up/down widget allows for quick (though shallow) feedback collection

Free-text inputs: Especially useful early on, when you're still learning how your system is missing the mark.
It requires more effort to parse, but chances are the insights are worth it.

Figure 12: Free-text feedback can be useful to discover new issues and failure modes

Structured feedback: As your system matures and common failure patterns emerge, offer users predefined categories to speed up feedback and reduce ambiguity.

Figure 13: Use structured feedback when you are already aware of the major failure modes of your system

Integrating these patterns is technically easy, but without clear incentives, many busy users might ignore them. Here are some tips to pull your users into the feedback loop:

Communicate the impact of the feedback clearly. This will show users that it helps improve the system, which is a reward in itself.

Consider constructive extrinsic rewards. One advanced mechanism we used was unlocking deeper customization and control for users who consistently engage with the system. This not only incentivizes them to provide feedback, but also supports power users without overwhelming novices.

Summary

Trust builds gradually, across every interaction, every insight delivered (or missed), every decision your users make with your AI by their side. Time is important: your AI will (hopefully) improve, producing more relevant insights and fewer errors. Your users will also change: they'll become more skilled, more reliant, and more demanding. If your system doesn't grow with them, confidence and trust can erode.

That's why trust in AI requires careful planning and a layered strategy:

Use case and value communication create the strategic foundation: Is this solving a real problem? Can users clearly see the benefit?

UX handles the day-to-day work of trust: transparency, control, and feedback loops shape how users build their mental model of your product and calibrate trust in real time.

AI is tricky by design: uncertainty and errors are part of the package. But if you're intentional and build your product for clarity and collaboration, trust can become your strongest asset and differentiator.
It will fuel adoption, invite feedback, and turn cautious users into advocates.

References and further readings

[1] Howard, A. (2024). In AI We Trust - Too Much? MIT Sloan Management Review. https://sloanreview.mit.edu/article/in-ai-we-trust-too-much/

[2] Sponheim, C. (2024). When Should We Trust AI? Magic-8-Ball Thinking. Nielsen Norman Group. https://www.nngroup.com/articles/ai-magic-8-ball/

[3] McKinsey & Company (2023). Building AI trust: The key role of explainability. QuantumBlack, AI by McKinsey. https://www.mckinsey.com/capabilities/quantumblack/our-insights/building-ai-trust-the-key-role-of-explainability

[4] Microsoft Aether Working Group (2022). Overreliance on AI: Literature Review. Microsoft Research. https://www.microsoft.com/en-us/research/wp-content/uploads/2022/06/Aether-Overreliance-on-AI-Review-Final-6.21.22.pdf

[5] Microsoft Research (2025). Appropriate Reliance: Lessons Learned from Deploying AI Systems. https://www.microsoft.com/en-us/research/wp-content/uploads/2025/03/Appropriate-Reliance-Lessons-Learned-Published-2025-3-3.pdf

[6] Microsoft Learn (n.d.). Overreliance on AI: Guidance for Product Teams. Microsoft AI Playbook. https://learn.microsoft.com/en-us/ai/playbook/technology-guidance/overreliance-on-ai/overreliance-on-ai

[7] Google PAIR (2019). People + AI Guidebook: Designing human-centered AI products. https://pair.withgoogle.com/guidebook/

[8] Kniuksta, D., & Vedel, S. (2023). Deep dive: Engineering artificial intelligence for trust. Mind the Product. https://www.mindtheproduct.com/deep-dive-engineering-artificial-intelligence-for-trust/

[9] DNV (2023). Building trust in AI: Creating responsible AI systems through digital assurance. DNV Future of Digital Assurance. https://www.dnv.com/research/future-of-digital-assurance/building-trust-in-ai/

[10] Lipenkova, J. (2025). The Art of AI Product Development, chapter 10. Manning Publications.

[11] Anacode GmbH (2025). Cheatsheet: Explaining your AI systems.

Note: Unless otherwise noted, all images are by the author.

Building and calibrating trust in AI was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • Cursor, vibe coding, and Manus: the UX revolution that AI needs
    uxdesign.cc
    It's time to move past the command-line era of AI.

Photo by Brett Jordan on Unsplash

We keep hearing that AI is for everyone. No coding required. Your new coworker. An assistant for the masses. And yet, to get consistently useful results from today's best models, you still need to know the secret handshakes: the right phrases, the magic tags, the unspoken etiquette of LLMs.

The power is there. The intelligence is real. But the interface? Still stuck in the past.

And that disconnect became painfully clear one day, when a friend of mine tried to do something absurdly simple.

When ChatGPT needed a magic spell

My friend, let's call him John, decided to use ChatGPT to count the names in a big chunk of text. He pasted the text into ChatGPT and asked for a simple count. GPT-4o responded with a confident-sounding number that was laughably off. Confusion gave way to frustration as John tried new phrasing. Another round of nonsense numbers. By the fifth attempt, he looked ready to fling his keyboard out the window.

I ambled over and suggested a trick: wrap the block of text in <text> tags, just as if we were feeding the model some neat snippet of XML. John pressed Enter and, with that tiny tweak, ChatGPT nailed the correct count. Same intelligence, but now it was cooperating.

John was both relieved and annoyed. Why did a pair of angle brackets suddenly turn ChatGPT into a model of precision? The short answer: prompt engineering. That term might sound like a fancy discipline, but in practice, it can look like rummaging around for cryptic phrases, tags, or formatting hacks that will coax an LLM into giving the right answer. It is reminiscent of a fantasy novel where you have infinite magical power, yet must utter each syllable of the summoning chant or risk conjuring the wrong monster.

In an era when we supposedly have next-level AI, it's hilarious that we still rely on cryptic prompts to ensure good results.
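The trick itself is nothing more than string formatting: delimiting the data so the model cannot confuse it with the instructions. A minimal sketch of the kind of prompt John ended up with (the tag name and wording are illustrative, not a documented API):

```python
def build_counting_prompt(text: str) -> str:
    # Wrap the user's data in explicit tags so the model treats it
    # as a literal block rather than part of the conversation.
    return (
        "Count how many distinct person names appear in the text "
        "between the <text> tags. Reply with a number only.\n"
        f"<text>\n{text}\n</text>"
    )

prompt = build_counting_prompt("Alice met Bob. Later, Alice called Carol.")
```

That a wrapper function like this is even worth writing is exactly the author's point: the burden of disambiguation sits with the user, not the interface.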
If these models were truly intuitive, they would parse our intentions without requiring special incantations. Instead, we're forced to memorize half-hidden hints, like travelers collecting obscure tourist phrases before visiting a foreign land. That is a design problem, not an intelligence problem.

Sound familiar?

If this rings a bell, it's because we've seen a similar dynamic before, in the early days of personal computing. As usability expert Jakob Nielsen explains, the very first user-interface paradigm was batch processing, where you had to submit an entire batch of instructions upfront (often as punched cards) and wait hours or even days for the results. This eventually gave way to command-based interfaces (like DOS or Unix) in the 1960s, where you'd type in a single command, see an immediate result, and then type another command. Back then, command lines ruled the UI landscape, demanding meticulous keystrokes that only power users could navigate.

Then the graphical user interface arrived, with clickable icons and drag-and-drop metaphors that pulled computing out of the realm of black-screen mysticism and into the daylight. You no longer had to type COPY A: C:\DOC.DAT; you just dragged a file into a folder. That wasn't a minor convenience; it was a revolution that let everyday people, not just specialists, feel in control of the machine for the first time.

Early IBM PC with a command-line-like interface, from Wikipedia

We saw the same leap again and again. WYSIWYG editors let people format documents without memorizing tags or markup. Mosaic put the internet behind glass and made it something your mom could use. The smartphone touchscreen transformed phones from keypad-laden gadgets into intuitive portals you could navigate with a swipe and a tap.
In every case, the breakthrough wasn't just better tech under the hood; it was better ways to interact with it.

The Mosaic browser gave a GUI to the Internet, from Web Directions

That's what made ChatGPT the spark that set off the LLM explosion. Sure, the underlying tech was impressive, but GPT-style models had been kicking around for years. What ChatGPT nailed was the interface. A simple chatbox. Something anyone could type into. No setup. No API key. No notebook to clone. It didn't just show people that LLMs could be powerful; it made that power feel accessible. That was the GUI moment for large language models.

ChatGPT was a GUI revolution, yet its inaccessibility is reminiscent of command-line UI

Yet, as many people (Nielsen included) have noted, we're still in the early days. Despite ChatGPT's accessibility, it still expects precise prompts and overly careful phrasing. Most people are left relying on Reddit threads, YouTube hacks, or half-remembered prompt formats to get consistent results. We're still stuck in a command-line moment, wearing wizard hats just to ask a machine for help.

The tech is here. The intelligence is here. What's missing is the interface leap, the one that takes LLMs from impressive to indispensable. Once we design a way to speak to these models that feels natural, forgiving, and fluid, we'll stop talking about prompt engineering and start talking about AI as a true collaborator. That's when the real revolution begins.

Examples of emerging UX in AI

Cursor = AI moves into the workspace

Most people still interact with LLMs the same way they did in 2022: open a chat window, write a prompt, copy the answer, paste it back into whatever tool they were actually using. It works, but it's awkward. The AI sits off to the side like a consultant you have to brief every five minutes. Every interaction starts from zero. Every result requires translation.

Cursor flips that model inside out. Instead of forcing you to visit the AI, it brings the AI into your workspace.
Cursor is a code editor, an IDE (integrated development environment): the digital workbench where developers write, test, and fix code. But unlike traditional editors, Cursor was built from the ground up to work hand-in-hand with an AI assistant. You can highlight a piece of code and say, "Make this run faster," or "Explain what this function does," and the model responds in place. No switching tabs. No copy-paste gymnastics.

Demo from Cursor's changelog, where the codebase and AI assistant can co-exist

The magic isn't in the model. Cursor uses off-the-shelf LLMs under the hood. The real innovation is how it lets humans talk to the machine: directly, intuitively, with no rituals or spellbooks. Cursor doesn't ask users to understand the quirks of the LLM. It absorbs the context of your project and adapts to your workflow, not the other way around.

It's a textbook example of AI becoming more powerful not by getting smarter, but by becoming easier to work with. This is what a UX breakthrough looks like: intelligence that feels embedded, responsive, and natural, not something you summon from a command line with just the right phrasing.

So what happens when we go one step further, and make the interaction even looser?

Vibe coding = casual command line

Vibe coding, a term popularized by Andrej Karpathy, takes that seamless interaction and loosens it even more. Forget structured prompts. Forget formal requests. Vibe coding is all about giving the LLM rough direction in natural language ("Fix this bug," "Add a field for nickname," "Reduce the sidebar padding") and trusting it to figure out the details.

Image by Jay Thakur on HackerNoon

At its best, it's fluid and exhilarating. The LLM acts like a talented engineer who has read your codebase and can respond to shorthand with useful action. That sense of flow, of staying in the creative zone without stopping to translate your thoughts into machine-friendly commands, is powerful.
It lets you focus on what you want to build, not how to phrase the ask.

But here's the catch: you still have to know how to talk to the machine.

Vibe coding isn't magic. It's an interaction style built on top of a chat interface. If you don't already understand how LLMs behave (what they're good at, where they stumble, how to steer them with subtle rewordings), the magic breaks down. You're still typing into a text box, hoping your intent comes through. It's just that now, the prompts are written in lowercase and good vibes.

So while vibe coding lowers the friction for seasoned users, it still relies on unspoken rules and learned behavior. The interface is friendlier, but it's not yet accessible. It shows that AI can feel conversational, but still speak a language only some people understand.

If Cursor showed us what it looks like to embed AI into tools we already use, and vibe coding loosened the interaction into something more human, then what's next? What happens when you don't even need to steer, when AI takes the wheel?

Manus = the interface is the intelligence

When Manus hit the scene, it made waves. Headlines hailed it as the first true AI agent. Twitter lit up. Demos showed the tool writing code, running it, fixing bugs, and trying again, all without needing constant human input. But here's the quiet truth: Manus runs on Sonnet 3.7, a Claude model you can already access elsewhere. The model wasn't new.

What was new, and genuinely exciting, was the interface and interactions. Instead of prompting line by line, Manus lets users speak in goals: "Analyze this dataset and generate a chart," or "Build a basic login page." Then it takes over. It writes the code. It runs the code. If something breaks, it investigates. It doesn't ask you to prompt again. It acts like it knows what you meant.

Demo of Manus creating icons that match the TechCrunch website

This is delegation as design. You're no longer babysitting the model.
You're handing off intent and expecting results. That's not about intelligence. That's about trust in the interface. You don't need to wrangle it like a chatbot or vibe with it like a coder who knows the right lingo. You just ask, and Manus handles the how.

And that's the point. The Manus moment wasn't about a smarter model. It was about making an existing model feel smarter through better interaction. Its leap forward wasn't technical; it was experiential. It didn't beat other tools by out-reasoning them. It beat them by understanding the human better.

This is the future of AI: tools that don't just process our input, but interface with our intent.

All three examples (Cursor, vibe coding, and Manus) prove the same point: it's not the size of the model that changes the game, it's the shape of the interaction. The moment AI starts understanding people, instead of people learning how to talk to AI, everything shifts.

The missing link between power and people

If we return to John's struggle with <text> tags, it's clear his predicament wasn't just a glitch. It revealed how these advanced models still force us to speak their language rather than meeting us halfway. Even though powerful LLMs can write code and pass exams, many AI interactions still feel like sending Morse code to a starship. The real shortfall isn't intelligence; it's an outdated, command-line-like user experience that demands ritual rather than collaboration.

Tools like Cursor, Manus, and vibe coding show glimpses of a new reality. By embedding AI directly into our workflows and allowing more natural, goal-oriented conversations, they move us closer to what Jakob Nielsen calls a complete reversal of control, where people state what they want and let the computer figure out how to get there. We've watched technology make leaps like this before, from command lines to GUIs, from styluses to touchscreens, and each time, the real revolution was about removing friction between people and possibility. AI will be no different.
The next major step isn't just bigger models or higher accuracy: it's designing interfaces that make AI feel like a partner rather than a tool. When that happens, we'll stop talking about prompts and simply use AI the way we use search bars or touchscreens. Intuitively and effortlessly.

This shift is not only the future of AI, but also the future of computing itself.

Cursor, vibe coding, and Manus: the UX revolution that AI needs was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • My Favorite Amazon Deal of the Day: This Bose Smart Soundbar
    lifehacker.com
    We may earn a commission from links on this page. Deal pricing and availability subject to change after time of publication.

The Bose Smart Soundbar wasn't very well received when it came out last fall, with reviewers citing small improvements for a larger price tag. But now, with a $100 discount, we can have a serious conversation about who it's good for. You can get the Bose Smart Soundbar for $399 (originally $499), the lowest price it has been, according to price-tracking tools.

Bose Smart Soundbar. Channels: 3.0.2. Physical connections: HDMI, optical, IR, subwoofer, USB. $399.00 at Amazon (originally $499.00; save $100.00).

The Bose Smart Soundbar is the successor to the Bose Smart Soundbar 600, which was already a very good soundbar, so needless to say, standards were high. Bose essentially made a new soundbar with more modern features but didn't upgrade the hardware, bringing the steep price increase into question, according to PCMag's review and many others. If you already have a 600, it's not worth upgrading, even with the discount. However, if you're looking for a modern premium soundbar that has enough bass to not need a subwoofer, then consider the new Bose Smart Soundbar.

Like most premium speakers, the soundbar doesn't need a subwoofer because it creates its own bass, though the output will never match that of a dedicated subwoofer. This works well for bedrooms or small apartments. But keep in mind you can always add a subwoofer and/or rear speakers. Speaking of which, if you own Bose Ultra Open Earbuds, you can connect them to the soundbar to work as rear satellite speakers, creating a pseudo-surround-sound system (only for you, though). As the name implies, this is a "smart" soundbar with wifi, meaning that as long as you're on the same wifi network, you can stream your media to it with AirPlay or Chromecast. It also works as a smart speaker, since it comes with built-in Alexa.
The instrument separation is what makes this soundbar shine, and it goes hand-in-hand with the AI Dialogue Mode feature for dialogue enhancement. But if you can afford to spend a bit more, the Sonos Beam Gen 2 is the best stand-alone soundbar you can buy.
  • You Should Be Freezing Chickpea Liquid
    lifehacker.com
    When it comes to chickpeas and aquafaba, the liquid and the legume are two very different residents of the same can. Just because you used garbanzo beans in your salad doesn't mean that you're craving these fluffy pancakes on the same day, but you might on Saturday. In the event that you have leftover aquafaba (or really, any time you're eating chickpeas and find yourself about to drain out the liquid), stop your hand. It freezes incredibly well.

How to use aquafaba

Aquafaba has gained a lot of traction since it was discovered in 2014 to be a surprisingly effective replacement for egg whites in a fluffy and silky meringue (read more about the history here). However, it has more applications than being a pavlova party trick. It's an excellent vegan ingredient that can thicken soups, or replace eggs as a binder or aerating ingredient for baking. Use it as-is in muffins, whip it for fluffy pancakes, or try these other ways.

I'm not vegan, and I use eggs in most of my baking, but I do often use aquafaba to thicken soups and sauces. It has some components similar to eggs, like albumins and globulins, and its starches give soup stocks added viscosity. It took some practice for me to stop draining my chickpeas, but I've built up a habit of freezing the leftover aquafaba. Whenever my soup could use a little somethin', now I can easily turn to my freezer.

Should you store aquafaba in the fridge or freezer?

It's tempting to pop leftovers into the fridge. But if you're like me, containers get pushed to the back, and then you discover one three months later and realize you probably should have frozen said item. You can store aquafaba in the fridge, but only for up to five days. After that, the liquid can get a little funky, and it's not worth trying to use it. Aquafaba is easy to thaw, so the best storage method is almost always going to be freezing it.
How to freeze aquafaba

America's Test Kitchen suggests freezing the stuff in ice cube trays, but I usually only freeze one can's worth at a time, and slightly less than half a cup is not enough to command a whole tray. Instead, I freeze the aquafaba in a small plastic container (approximately a 4-ounce square), and then I tap out the brick into a freezer bag for long-term storage. A quarter-cup brick is a great size for adding to soups and sauces.

Credit: Allie Chanthorn Reinmann

Sometimes I'll have two or three cubes in there at a time, ready for my next soup or sauce. If you decide to divide it up into an ice cube tray, be sure to whisk or lightly shake the chickpea liquid before pouring. Aquafaba is not naturally homogeneous, so this helps to evenly distribute those albumin and globulin particles before freezing.

Aquafaba keeps in the freezer for up to four months. If you're worried about the aquafaba not performing as well after it's had some time in the ice chest, put your fears to rest. Freezing does not destroy or impair the starches that are responsible for its excellent whipping abilities, and according to America's Test Kitchen, the frozen and thawed version works just as well as before.

How to thaw aquafaba

Thaw aquafaba by putting the cubes in a bowl, covered, in the fridge overnight. If you need the viscous liquid sooner, simply microwave it for a minute or two, depending on how large the cube is. If you're using it in a soup or sauce, just throw a frozen slab directly in the pot with everything else.
  • National Security Council adds Gmail to its list of bad decisions
    www.engadget.com
    The Washington Post reports that members of the White House's National Security Council have used personal Gmail accounts to conduct government business. National security advisor Michael Waltz and a senior aide of his both used their own accounts to discuss sensitive information with colleagues, according to the Post's review and interviews with government officials who spoke to the newspaper anonymously.

Email is not the best approach for sharing information meant to be kept private. That covers sensitive data for individuals, such as Social Security numbers or passwords, much less confidential or classified government documents. It simply has too many potential paths for a bad actor to access information they shouldn't. Government departments typically use business-grade email services rather than relying on consumer email services. The federal government also has its own internal communications systems with additional layers of security, making it all the more baffling that current officials are being so cavalier with how they handle important information.

"Unless you are using GPG, email is not end-to-end encrypted, and the contents of a message can be intercepted and read at many points, including on Google's email servers," Eva Galperin, director of cybersecurity at the Electronic Frontier Foundation, told the Post.

Additionally, there are regulations requiring that certain official government communications be preserved and archived. Using a personal account could allow some messages to slip through the cracks, accidentally or intentionally.

This latest instance of dubious software use from the executive branch follows the discovery that several high-ranking national security leaders used Signal to discuss planned military actions in Yemen, then added a journalist from The Atlantic to the group chat.
And while Signal is a more secure option than a public email client, even the encrypted messaging platform can be exploited, as the Pentagon warned its own team last week. As with last week's Signal debacle, there have been no repercussions thus far for any federal employees taking risky data privacy actions. NSC spokesman Brian Hughes told the Post he hasn't seen evidence of Waltz using a personal account for government correspondence.

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/national-security-council-adds-gmail-to-its-list-of-bad-decisions-222648613.html?src=rss
  • The Morning After: Get ready for Nintendo's big Switch 2 reveal
    www.engadget.com
    I'm sidestepping the desperate attempts at April 1 shenanigans and focusing on the imminent Nintendo Direct broadcast, which is likely to confirm some rumors and sink others. The last few Switch 1 games have been revealed, meaning tomorrow's Nintendo Direct: Switch 2 presentation, kicking off at 9AM ET / 6AM PT, will be all about the new console, no distractions. (Although I'd be cool with a Silksong release date, finally.)

We already know the Switch 2 will be a bigger console, with a bigger screen and Joy-Cons. There may also be some sort-of-mouse functionality baked into the controllers this time, but Nintendo's focus is on tech specs and the games. What does the company have cooking? Mat Smith

The biggest tech stories you missed

Google's new experimental AI model, Gemini 2.5 Pro, is now available to free users too
The best midrange smartphones
Apple is reportedly on track to launch the M5 iPad Pro and MacBook Pro later this year

Get this delivered direct to your inbox. Subscribe right here!

The Light Phone III doesn't have apps or internet, but still costs $799 ($599 if you pre-order).

Light Phone

The company behind several minimalist handsets has just released the Light Phone III. It may be the perfect device for folks who brag about giving up smartphones and have the money to experiment with stripped-down phones that are the ultimate step back from modern convenience. Whoops, sorry, I let my mask slip there. Hey, at least there's no AI nonsense.

There is a cool, crisp B&W OLED display, new for this third iteration, instead of e-ink paper. There's still no internet, no apps, no email. There is, however, a place for your podcasts and a simple camera with a physical button. There's also a Maps app, powered by Here, but it's private, so there's no info shared on where you're trying to get to.
Privacy like that, however, costs a heady $799, unless you can get the pre-order price of $599 at launch, with estimated delivery in July.

Continue reading.

xAI, Elon Musk's AI company, just purchased X, Elon Musk's social media company

Confused? You should be.

Yes, xAI has purchased X, according to a post shared by Musk. Besides their owner and similar names, the companies are already connected through xAI's chatbot, Grok, so it makes some sense. The biggest surprise may be that X is still valued at $33 billion, according to Musk and his companies, at least. X, once Twitter, was acquired by Musk in 2022 for $43 billion. xAI, like many leading AI companies, has been raising money as often and as quickly as possible. Combining the two companies may ease some of the debt Musk took on. The companies' futures are intertwined, according to Musk. Financially, now, that's very true.

Continue reading.

iOS 18.4 is available now

It adds new emoji, Apple News+ Food and priority notifications.

iOS 18.4, iPadOS 18.4 and macOS 15.4 include a new Apple News+ Food section in the News app that collects recipes and food-oriented articles, including exclusive recipes for Apple News+ subscribers. The updates also introduce new emoji, AI-sorted Priority Notifications in the Notification Center and a new Ambient Music tool in the Control Center.

After a bit of a delay, Apple Intelligence will be available in the European Union for the first time on iPhone and iPad. The suite of AI features will now also work in several new languages, including French, German, Italian, Portuguese (Brazil), Spanish, Japanese, Korean and Chinese (simplified), as well as localized English for Singapore and India.

Continue reading.

This article originally appeared on Engadget at https://www.engadget.com/general/the-morning-after-engadget-newsletter-111414827.html?src=rss
  • SpaceX and Apple's reported spat could spell bad news for Starlink and your iPhone's satellite communication features
    www.techradar.com
    Apple and SpaceX might be feuding over satellite communications, which could end badly for everyone.