• Pokémon Mystery Dungeon is a criminally underrated roguelike
    www.polygon.com
    The Mystery Dungeon games share a core formula: you wake up as a human turned into a pocket monster, you choose a partner, and then descend through procedurally generated dungeons to rescue your fellow Pokémon and uncover the reason behind your transformation. It's straightforward and kid-friendly, but the series doesn't shy away from complex themes or challenging boss battles. All 11 entries in this spinoff franchise continue the loop of dungeon spelunking and rescue missions, while layering on a dramatic plot that appeals to both fans of roguelikes and the classic Pokémon titles.
  • Beyond The Hype: What AI Can Really Do For Product Design
    smashingmagazine.com
    These days, it's easy to find curated lists of AI tools for designers, galleries of generated illustrations, and countless prompt libraries. What's much harder to find is a clear view of how AI is actually integrated into the everyday workflow of a product designer, not for experimentation, but for real, meaningful outcomes.

    I've gone through that journey myself: testing AI across every major stage of the design process, from ideation and prototyping to visual design and user research. Along the way, I've built a simple, repeatable workflow that significantly boosts my productivity. In this article, I'll share what's already working and break down some of the most common objections I've encountered, many of which I've faced personally.

    Stage 1: Idea Generation Without The Clichés

    Pushback: "Whenever I ask AI to suggest ideas, I just get a list of clichés. It can't produce the kind of creative thinking expected from a product designer."

    That's a fair point. AI doesn't know the specifics of your product, the full context of your task, or many other critical nuances. The most obvious fix is to feed it all the documentation you have. But that's a common mistake, as it often leads to even worse results: the context gets flooded with irrelevant information, and the AI's answers become vague and unfocused.

    Current-gen models can technically process thousands of words, but the longer the input, the higher the risk of missing something important, especially content buried in the middle. This is known as the "lost in the middle" problem. To get meaningful results, AI doesn't just need more information; it needs the right information, delivered in the right way. That's where the RAG approach comes in.

    How RAG Works

    Think of RAG as a smart assistant working with your personal library of documents. You upload your files, and the assistant reads each one, creating a short summary: a set of bookmarks (semantic tags) that capture the key topics, terms, scenarios, and concepts. These summaries are stored in a kind of card catalog, called a vector database. When you ask a question, the assistant doesn't reread every document from cover to cover. Instead, it compares your query to the bookmarks, retrieves only the most relevant excerpts (chunks), and sends those to the language model to generate a final answer.

    How Is This Different from Just Dumping a Doc into the Chat?

    Let's break it down:

    - Typical chat interaction: It's like asking your assistant to read a 100-page book from start to finish every time you have a question. Technically, all the information is in front of them, but it's easy to miss something, especially if it's in the middle. This is exactly what the "lost in the middle" issue refers to.
    - RAG approach: You ask your smart assistant a question, and it retrieves only the relevant pages (chunks) from different documents. It's faster and more accurate, but it introduces a few new risks:
    - Ambiguous question: You ask, "How can we make the project safer?" and the assistant brings you documents about cybersecurity, not finance.
    - Mixed chunks: A single chunk might contain a mix of marketing, design, and engineering notes. That blurs the meaning so the assistant can't tell what the core topic is.
    - Semantic gap: You ask, "How can we speed up the app?" but the document says, "Optimize API response time." For a human, that's obviously related. For a machine, not always.

    These aren't reasons to avoid RAG or AI altogether. Most of them can be avoided with better preparation of your knowledge base and more precise prompts.
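    To make the card-catalog picture concrete, here is a minimal, illustrative sketch of the retrieval step in Python. The toy embed() function (a normalized character-bigram count) is only a stand-in for whichever real embedding model you use, and the file names are invented for the example; the point is simply that chunks and queries are compared as vectors, and only the top matches ever reach the language model.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy stand-in for a real embedding model: a normalized character-bigram
    count vector. In practice, replace this with an actual text encoder."""
    vec = np.zeros(26 * 26)
    letters = [c for c in text.lower() if c.isalpha()]
    for a, b in zip(letters, letters[1:]):
        vec[(ord(a) - 97) * 26 + (ord(b) - 97)] += 1
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b))  # vectors are already normalized

# 1. Index: split each document into small, topic-focused chunks and embed them once.
documents = {
    "product_overview.txt": "What the product does and the core user scenarios...",
    "target_audience.txt": "Main user segments and their key needs or goals...",
    "research_insights.txt": "Key findings from interviews, surveys, and analytics...",
}
index = []
for name, text in documents.items():
    for chunk in text.split("\n\n"):  # naive chunking by paragraph
        index.append({"source": name, "chunk": chunk, "vector": embed(chunk)})

# 2. Retrieve: compare the query vector to every chunk vector, keep the top matches.
def retrieve(query: str, top_k: int = 3):
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item["vector"]), reverse=True)
    return ranked[:top_k]

# 3. Generate: only the retrieved chunks go into the prompt, not the whole library.
def build_prompt(query: str) -> str:
    context = "\n\n".join(f"[{c['source']}] {c['chunk']}" for c in retrieve(query))
    return f"Use only the context below to answer.\n\nContext:\n{context}\n\nQuestion: {query}"
```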
    So, where do you start?

    Start With Three Short, Focused Documents

    These three short documents will give your AI assistant just enough context to be genuinely helpful:

    - Product Overview & Scenarios: A brief summary of what your product does and the core user scenarios.
    - Target Audience: Your main user segments and their key needs or goals.
    - Research & Experiments: Key insights from interviews, surveys, user testing, or product analytics.

    Each document should focus on a single topic and ideally stay within 300 to 500 words. This makes it easier to search and helps ensure that each retrieved chunk is semantically clean and highly relevant.

    Language Matters

    In practice, RAG works best when both the query and the knowledge base are in English. I ran a small experiment to test this assumption, trying a few different combinations:

    - English prompt + English documents: Consistently accurate and relevant results.
    - Non-English prompt + English documents: Quality dropped sharply. The AI struggled to match the query with the right content.
    - Non-English prompt + non-English documents: The weakest performance. Even though large language models technically support multiple languages, their internal semantic maps are mostly trained in English. Vector search in other languages tends to be far less reliable.

    Takeaway: If you want your AI assistant to deliver precise, meaningful responses, do your RAG work entirely in English, both the data and the queries. This advice applies specifically to RAG setups; for regular chat interactions, you're free to use other languages. The same challenge is also highlighted in a 2024 study on multilingual retrieval.

    From Outsider to Teammate: Giving AI the Context It Needs

    Once your AI assistant has proper context, it stops acting like an outsider and starts behaving more like someone who truly understands your product. With well-structured input, it can help you spot blind spots in your thinking, challenge assumptions, and strengthen your ideas the way a mid-level or senior designer would. Here's an example of a prompt that works well for me:

    Your task is to perform a comparative analysis of two features: "Group gift contributions" (described in group_goals.txt) and "Personal savings goals" (described in personal_goals.txt). The goal is to identify potential conflicts in logic, architecture, and user scenarios and suggest visual and conceptual ways to clearly separate these two features in the UI so users can easily understand the difference during actual use. Please include:

    - Possible overlaps in user goals, actions, or scenarios;
    - Potential confusion if both features are launched at the same time;
    - Any architectural or business-level conflicts (e.g. roles, notifications, access rights, financial logic);
    - Suggestions for visual and conceptual separation: naming, color coding, separate sections, or other UI/UX techniques;
    - Onboarding screens or explanatory elements that might help users understand both features.

    If helpful, include a comparison table with key parameters like purpose, initiator, audience, contribution method, timing, access rights, and so on.

    AI Needs Context, Not Just Prompts

    If you want AI to go beyond surface-level suggestions and become a real design partner, it needs the right context. Not just more information, but better, more structured information. Building a usable knowledge base isn't difficult, and you don't need a full-blown RAG system to get started.
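    As a rough sketch of that point, this is what assembling such a context-rich prompt could look like without any RAG infrastructure at all: the two feature descriptions are read from their files and placed ahead of the task, so the model works from your product's real context instead of generic assumptions. The helper function and its exact formatting are hypothetical; only the file names and task wording come from the example above.

```python
from pathlib import Path

def build_comparison_prompt(task: str, files: list[str]) -> str:
    """Compose a single prompt: the relevant knowledge-base documents first,
    then the task. Keeping each file short (roughly 300 to 500 words) keeps the
    context focused instead of flooding the model with everything you have."""
    sections = []
    for name in files:
        text = Path(name).read_text(encoding="utf-8").strip()
        sections.append(f"--- {name} ---\n{text}")
    context = "\n\n".join(sections)
    return f"{context}\n\n=== Task ===\n{task}"

task = (
    "Perform a comparative analysis of 'Group gift contributions' (group_goals.txt) "
    "and 'Personal savings goals' (personal_goals.txt). Identify conflicts in logic, "
    "architecture, and user scenarios, and suggest ways to separate the two features in the UI."
)
# Assumes the two .txt files exist alongside the script.
prompt = build_comparison_prompt(task, ["group_goals.txt", "personal_goals.txt"])
# `prompt` can now be pasted into a chat or sent through whichever model API you use.
```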
    Many of these principles work even in a regular chat: well-organized content and a clear question can dramatically improve how helpful and relevant the AI's responses are. That's your first step in turning AI from a novelty into a practical tool in your product design workflow.

    Stage 2: Prototyping and Visual Experiments

    Pushback: "AI only generates obvious solutions and can't even build a proper user flow. It's faster to do it manually."

    That's a fair concern. AI still performs poorly when it comes to building complete, usable screen flows. But for individual elements, especially when exploring new interaction patterns or visual ideas, it can be surprisingly effective. For example, I needed to prototype a gamified element for a limited-time promotion. The idea is to give users a lottery ticket they can flip to reveal a prize. I couldn't recreate the 3D animation I had in mind in Figma, either manually or using any available plugins. So I described the idea to Claude 4 in Figma Make, and within a few minutes, without writing a single line of code, I had exactly what I needed.

    At the prototyping stage, AI can be a strong creative partner in two areas:

    - UI element ideation: It can generate dozens of interactive patterns, including ones you might not think of yourself.
    - Micro-animation generation: It can quickly produce polished animations that make a concept feel real, which is great for stakeholder presentations or as a handoff reference for engineers.

    AI can also be applied to multi-screen prototypes, but it's not as simple as dropping in a set of mockups and getting a fully usable flow. The bigger and more complex the project, the more fine-tuning and manual fixes are required. Where AI already works brilliantly is in focused tasks: individual screens, elements, or animations, where it can kick off the thinking process and save hours of trial and error.

    A quick UI prototype of a gamified promo banner created with Claude 4 in Figma Make. No code or plugins needed.

    Here's another valuable way to use AI in design: as a stress-testing tool. Back in 2023, Google Research introduced PromptInfuser, an internal Figma plugin that allowed designers to attach prompts directly to UI elements and simulate semi-functional interactions within real mockups. Their goal wasn't to generate new UI, but to check how well AI could operate inside existing layouts: placing content into specific containers, handling edge-case inputs, and exposing logic gaps early. The results were striking: designers using PromptInfuser were up to 40% more effective at catching UI issues and aligning the interface with real-world input, a clear gain in design accuracy, not just speed.

    That closely reflects my experience with Claude 4 and Figma Make: when AI operates within a real interface structure, rather than starting from a blank canvas, it becomes a much more reliable partner. It helps test your ideas, not just generate them.

    Stage 3: Finalizing The Interface And Visual Style

    Pushback: "AI can't match our visual style. It's easier to just do it by hand."

    This is one of the most common frustrations when using AI in design. Even if you upload your color palette, fonts, and components, the results often don't feel like they belong in your product. They tend to be either overly decorative or overly simplified. And this is a real limitation. In my experience, today's models still struggle to reliably apply a design system, even if you provide a component structure or JSON files with your styles.
    I tried several approaches:

    - Direct integration with a component library. I used Figma Make (powered by Claude) and connected our library. This was the least effective method: although the AI attempted to use components, the layouts were often broken, and the visuals were overly conservative. Other designers have run into similar issues, noting that library support in Figma Make is still limited and often unstable.
    - Uploading styles as JSON. Instead of a full component library, I tried uploading only the exported styles (colors, fonts) in a JSON format. The results improved: layouts looked more modern, but the AI still made mistakes in how styles were applied.
    - Two-step approach: structure first, style second. What worked best was separating the process. First, I asked the AI to generate a layout and composition without any styling. Once I had a solid structure, I followed up with a request to apply the correct styles from the same JSON file. This produced the most usable result, though still far from pixel-perfect.

    So yes, AI still can't help you finalize your UI. It doesn't replace hand-crafted design work. But it's very useful in other ways:

    - Quickly creating a visual concept for discussion.
    - Generating "what if" alternatives to existing mockups.
    - Exploring how your interface might look in a different style or direction.
    - Acting as a second pair of eyes: giving feedback and pointing out inconsistencies or overlooked issues you might miss when tired or too deep in the work.

    AI won't save you five hours of high-fidelity design time, since you'll probably spend that long fixing its output. But as a visual sparring partner, it's already strong. If you treat it like a source of alternatives and fresh perspectives, it becomes a valuable creative collaborator.

    Stage 4: Product Feedback And Analytics: AI As A Thinking Exosuit

    Product designers have come a long way. We used to create interfaces in Photoshop based on predefined specs. Then we delved deeper into UX: mapping user flows, conducting interviews, and understanding user behavior. Now, with AI, we gain access to yet another level: data analysis, which used to be the exclusive domain of product managers and analysts.

    As Vitaly Friedman rightly pointed out in one of his columns, trying to replace real UX interviews with AI can lead to false conclusions, as models tend to generate an average experience, not a real one. The strength of AI isn't in inventing data but in processing it at scale.

    Let me give a real example. We launched an exit survey for users who were leaving our service. Within a week, we collected over 30,000 responses across seven languages. Simply counting the percentages for each of the five predefined reasons wasn't enough. I wanted to know:

    - Are there specific times of day when users churn more?
    - Do the reasons differ by region?
    - Is there a correlation between user exits and system load?

    The real challenge was... figuring out what cuts and angles were even worth exploring. The entire technical process, from analysis to visualizations, was done for me by Gemini, working inside Google Sheets. This task took me about two hours in total. Without AI, not only would it have taken much longer, but I probably wouldn't have been able to reach that level of insight on my own at all. AI enables near real-time work with large data sets. But most importantly, it frees up your time and energy for what's truly valuable: asking the right questions.
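    For readers who like to see what those cuts look like, here is a minimal pandas sketch of the same three questions. The column names (submitted_at, reason, region, system_load) and the CSV file are assumptions for illustration; in my case the equivalent analysis was produced by Gemini directly inside Google Sheets.

```python
import pandas as pd

# Assumed columns: submitted_at (timestamp), reason (one of five predefined
# options), region, and system_load (a load metric joined in from monitoring).
df = pd.read_csv("exit_survey_responses.csv", parse_dates=["submitted_at"])

# Share of each churn reason overall.
reason_share = df["reason"].value_counts(normalize=True)

# Are there specific times of day when users churn more?
churn_by_hour = df.groupby(df["submitted_at"].dt.hour).size()

# Do the reasons differ by region?
reason_by_region = pd.crosstab(df["region"], df["reason"], normalize="index")

# Is there a correlation between user exits and system load?
exits_per_hour = df.set_index("submitted_at").resample("h").size()
load_per_hour = df.set_index("submitted_at")["system_load"].resample("h").mean()
correlation = exits_per_hour.corr(load_per_hour)

print(reason_share, churn_by_hour, reason_by_region, correlation, sep="\n\n")
```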
    A few practical notes: working with large data sets is still challenging for models without strong reasoning capabilities. In my experiments, I used Gemini embedded in Google Sheets and cross-checked the results using ChatGPT o3. Other models, including the standalone Gemini 2.5 Pro, often produced incorrect outputs or simply refused to complete the task.

    AI Is Not An Autopilot But A Co-Pilot

    AI in design is only as good as the questions you ask it. It doesn't do the work for you. It doesn't replace your thinking. But it helps you move faster, explore more options, validate ideas, and focus on the hard parts instead of burning time on repetitive ones. Sometimes it's still faster to design things by hand. Sometimes it makes more sense to delegate to a junior designer. But increasingly, AI is becoming the one who suggests, sharpens, and accelerates. Don't wait to build the perfect AI workflow. Start small. And that might be the first real step in turning AI from a curiosity into a trusted tool in your product design process.

    Let's Summarize

    - If you just paste a full doc into chat, the model often misses important points, especially things buried in the middle. That's the "lost in the middle" problem.
    - The RAG approach helps by pulling only the most relevant pieces from your documents, so responses are faster, more accurate, and grounded in real context.
    - Clear, focused prompts work better. Narrow the scope, define the output, and use familiar terms to help the model stay on track.
    - A well-structured knowledge base makes a big difference. Organizing your content into short, topic-specific docs helps reduce noise and keep answers sharp.
    - Use English for both your prompts and your documents. Even multilingual models are most reliable when working in English, especially for retrieval.
    - Most importantly: treat AI as a creative partner. It won't replace your skills, but it can spark ideas, catch issues, and speed up the tedious parts.

    Further Reading

    - "AI-assisted Design Workflows: How UX Teams Move Faster Without Sacrificing Quality" by Cindy Brummer. This piece is a perfect prequel to my article. It explains how to start integrating AI into your design process, how to structure your workflow, and which tasks AI can reasonably take on before you dive into RAG or idea generation.
    - "8 essential tips for using Figma Make" by Alexia Danton. While this article focuses on Figma Make, the recommendations are broadly applicable. It offers practical advice that will make your work with AI smoother, especially if you're experimenting with visual tools and structured prompting.
    - "What Is Retrieval-Augmented Generation, aka RAG" by Rick Merritt. If you want to go deeper into how RAG actually works, this is a great starting point. It breaks down key concepts like vector search and retrieval in plain terms and explains why these methods often outperform long prompts alone.
  • F5: Nicki Gitlin Talks Iced Coffee, Her Daily Planner, a Tailored Pant + More
    design-milk.com
    When Nicki Gitlin was an intern at Snarkitecture, she explored objects and spaces at all scales, setting the foundation for her own work. With an emphasis on materiality and the interplay of light, Gitlin was fascinated by the way in which elements could be layered and how they influenced an individual's experience in a particular setting.

    As an architectural designer for sportswear brand Theory, Gitlin was responsible for store layout and fixture development. The role was a perfect fit for this creative, who appreciates fashion as a means of expression and an art form all its own. "I love the way clothing can shape how you feel moving through the world; it's design on a more personal, immediate scale," she says.

    Gitlin earned a graduate degree from Columbia University, and continued to hone her skills via residential projects and thoughtful research. In 2022, she was ready to make her own mark when she founded her New York-based firm dang. This unforgettable moniker is what Gitlin wants a client to exclaim when they step into one of her signature spaces.

    Her philosophy is rooted in the belief that beauty is found in the everyday. And whether Gitlin envisions a residence or an eatery, she ensures that each interior is modern yet still deeply livable. Her environments offer an inviting combination of comfort and style that people look forward to returning to.

    Even with a full schedule of client meetings and site visits, Gitlin manages to carve out quality time away from her computer and mobile phone. She'll often turn her attention to something completely different, like playing with her son or cooking dinner. "It's a chance to be fully present, and a reminder that not everything has to happen at once," she notes.

    Today, Nicki Gitlin joins us for Friday Five!

    1. Finding Otis in a Sun Spot (Photo: Nicki Gitlin)
    No matter how hectic my day gets, catching Otis stretched out in a warm patch of sunlight instantly slows me down. He has a way of reminding me to pause, breathe, and enjoy the simple comforts, something I try to bring into my work, too.

    2. Tailored Pant (Photo: Nicki Gitlin)
    A perfectly cut pant is my version of armor. It's polished yet effortless, and it carries me through site visits, client meetings, and late nights at my desk. The structure grounds me, while the ease lets me move through my day feeling like the most put-together version of myself.

    3. Daily Planner (Photo: Nicki Gitlin)
    My daily planner is where big ideas and tiny to-dos live side by side. There's something grounding about putting pen to paper: seeing the day laid out makes even the busiest schedule feel manageable. It's my roadmap, my motivator, and sometimes, my excuse to use a really good pen.

    4. Satin Scrunchie (Photo: Nicki Gitlin)
    The oversized satin scrunchie is my go-to for pulling my hair back without pulling myself out of the moment. It's practical, but it also feels a little indulgent: soft, easy, and chic.

    5. Iced Coffee in a To-Go Cup (Photo: Nicki Gitlin)
    An iced coffee in a to-go cup is my constant companion, no matter the season. There's something about the ritual, the clink of ice, the first sip, that signals it's time to get things moving. It's equal parts fuel and comfort, keeping me energized through early mornings and late afternoons.

    Works by Nicki Gitlin and dang:

    Afficionado Coffee Roasters (Photo: Eric Petschek)
    For this Hell's Kitchen café, the design draws from the brand's roots in sourcing coffee directly from farmers around the world.
    Raw, tactile materials like plaster, terracotta floors, and patinated metal echo the landscapes where the beans are grown, creating a space that feels as grounded and authentic as the coffee itself.

    Soho Pied-a-Terre (Photo: Sean Q. Munro)
    This 400-square-foot Soho apartment proves that small can still feel spacious. Every inch works hard: the wardrobe doubles as a side table, and a radiator cover transforms into a banquette and a media console, all concealing storage. These little moments of ingenuity make the space feel effortless to live in.

    Upper West Side (Photo: Stephen Kent Johnson)
    This private home, a collaboration with Studio ST, was grounded in Alyssa Kapito's timeless interiors and brought to life through architecture that honors the building's character while supporting a serene daily rhythm. The thoughtful detailing, from plasterwork and generous natural light to sculptural millwork, creates a layered backdrop where classic elegance meets lived-in comfort.

    Gather Market and Eatery (Photo: Ryan Neeven)
    In the heart of the Lower East Side, this project was about more than designing a coffee shop; it was about creating a series of pockets where people could gather. From the window bench to the intimate tables, every detail was meant to encourage connection and foster a sense of community.

    Midcentury Modern Revival (Photo: Nicki Gitlin)
    My own home has been a labor of love: bringing it back to life while keeping the midcentury character that drew me to it in the first place. The mix of warm wood, slate, and clean lines makes it feel both true to its roots and perfectly suited to how we live now.
  • Three hours of vibe design
    uxdesign.cc
    Thoughts on process, problems, and possibilities. Continue reading on UX Collective.
  • Google Is Quietly Building AI Into the Pixel Camera App, and It Worries Me
    lifehacker.com
    Google's Pixel 10 phones made their official debut this week, and with them, a bunch of generative AI features baked directly into the camera app. It's normal for phones to use computational photography these days, a fancy term for all those lighting and post-processing effects they add to your pics as you snap them. But AI makes computational photography into another beast entirely, and it's one I'm not sure we're ready for.

    Tech nerds love to ask ourselves "what is a photo?", kind of joking that the more post-processing gets added to a picture, the less it resembles anything that actually happened in real life. Night skies being too bright, faces having fewer blemishes than a mirror would show, that sort of thing. Generative AI in the camera app is like the final boss of that moral conundrum. That's not to say these features aren't all useful, but at the end of the day, this is kind of a philosophical debate as much as a technical one.

    Are photos supposed to look like what the photographer was actually seeing with their eyes, or are they supposed to look as attractive as possible, realism be damned? It's been easy enough to keep these questions to the most nitpicky circles for now (who really cares if the sky is a little too neon if it helps your pic pop more?), but if AI is going to start adding whole new objects or backgrounds to your photos, before you even open the Gemini app, it's time for everyone to start asking themselves what they want out of their phones' cameras. And the way Google is using AI in its newest phones, it's possible you could end up with an AI photo and not really know it.

    Pro Res Zoom

    Maybe the most egregious of Google's new AI camera additions is what it's calling Pro Res Zoom. Google is advertising this as 100x zoom, and it works kind of like the wholly fictional "zoom in and enhance" tech you might see in old-school police procedurals. Essentially, on a Pixel 10 Pro or Pro XL, you'll now be able to push the zoom lens in by 100 times, and on the surface, the experience will be no different than a regular software zoom (which relies on cropping, not AI). But inside your phone's processor, it'll still run into the same problems that make "zoom in and enhance" seem so ludicrous in shows like CSI.

    In short, the problem is that you can't invent resolution the camera didn't capture. If you've zoomed in so far that your camera lens only saw vague pixels, then it will never be able to know for sure what was actually there in real life.

    That's why this feature, despite seeming like a normal, non-AI zoom on the surface, is more of an AI edit than an actual 100x zoom. When you use Pro Res Zoom, your phone will zoom in as much as it can, then use whatever blurry pixels it sees as a prompt for an on-device diffusion model. The model will then guess what the pixels are supposed to look like, and edit the result into your shot. It won't be capturing reality, but if you're lucky, it might be close enough. For certain details, like rock formations or other mundane inanimate objects, that might be fine. For faces or landmarks, though, you could leave with the impression that you just got a great close-up of, say, the lead singer at a concert, without knowing that your zoom was basically just a fancy Gemini request.
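    Google hasn't published how the Pixel's on-device model works, but open-source diffusion upscalers give a rough sense of the recipe: a model conditioned on the low-resolution crop (and optionally a text prompt) invents plausible pixels rather than recovering real ones. Here is a sketch using Hugging Face's diffusers library, purely as an analogy and not Google's actual pipeline; it assumes a GPU and downloads a public upscaler model.

```python
# Analogy only: an open-source diffusion upscaler, not Google's Pro Res Zoom.
import torch
from diffusers import StableDiffusionUpscalePipeline
from PIL import Image

pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

low_res = Image.open("blurry_crop.png").convert("RGB")  # the "vague pixels" the lens captured
result = pipe(prompt="a distant stage at a concert", image=low_res).images[0]
result.save("upscaled_guess.png")  # plausible detail, not recovered detail
```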
    Google says it's trying to tamp down on hallucinations, but if a photo spat out by Gemini is something you're uncomfortable posting or including in a creative project, this will have the same issues, except that, because of the branding, you might not realize AI was involved.

    Luckily, Pro Res Zoom doesn't replace non-AI zoom entirely: zooming in past the usual 5x hardware zoom limit will now give you two results to pick from, one with Pro Res Zoom applied and one without. I wrote about this in more detail if you're interested, but even with non-AI options available, the AI one isn't clearly indicated while you're making your selection. That's a much more casual approach to AI than Google's taken in the past. People might be used to AI altering their photos when they ask for it, but having it automatically applied through your camera lens is a new step.

    Ask to Edit

    The casual AI integration doesn't stop once you've taken your photo, though. With Pixel 10, you can now use natural language to ask AI to alter your photos for you, right from the Google Photos app. Simply open up the photo you want to change, tap the edit icon, and you'll see a chat box that will let you use natural language to suggest tweaks to your photo. You can even speak your instructions rather than type them, if you want.

    On the surface, I don't mind this. Google Photos has dozens of different edit icons, and it can be difficult for the average person to know how to use them. If you want a simple crop or filter applied, this gives you an option to get that done without going through what could be an otherwise intimidating interface.

    The problem is, in addition to using old-school Google Photos tools, Ask to Edit will also allow you to suggest more outlandish changes, and it won't clearly delineate when it's using AI to accomplish those changes. You could ask the AI to swap out your photo's background for an entirely new one, or if you want a less drastic change, you could ask it to remove reflections from a shot taken through a window. The issue? Plenty of these edits will require generative AI, even the seemingly less destructive ones like glare elimination, but you'll have to use your intuition to know when it's been applied.

    For example, while you'll usually see an AI Enhance button among Google Photos' suggested edits, it's not the only way to get AI in your shot. Ask to Edit will do its best to honor whatever request you make, with whatever tools it has access to, and given some hands-on experience I had with it at a demo with Google, this includes AI generation. It might be obvious that it'll use AI to, say, add a Mercedes behind me in this selfie, but I could see a less tech-savvy user assuming that they could ask the AI to "zoom out" without knowing that changing an aspect ratio without cropping also requires using generative AI. Specifically, it requires asking an AI to imagine what might have surrounded whatever was in your shot in real life. Since it has no way of knowing this, it comes with an inherently high risk of hallucination, no matter how humble "zoom out" sounds. Since we're talking about a tool designed to help less tech-literate users, I worry there's a good chance they could accidentally wind up generating fiction, and think it's a totally innocent, realistic shot.

    Camera Coach

    Then there's Camera Coach. This feature also bakes AI into your Camera app, but doesn't actually put AI in your photos.
    Instead, it uses AI to suggest alternate framing and angles for whatever your camera is seeing, and coaches you on how to achieve those shots. In other words, it's very what-you-see-is-what-you-get. Camera Coach's suggestions are just ideas, and even though following through on them takes more work on your end, you can be sure that whatever photo you snap is going to look exactly like what you saw in your viewfinder, with no AI added.

    That pretty much immediately erases most of my concerns about unreal photos being presented as absolute truth. There is the possibility that Camera Coach might suggest a photo that's not actually possible to take, say if it wants you to walk into a restricted area, but the worst you're going to get there is frustration, not a photo that passes off AI generation as if it's the same as, say, zooming in.

    People should know when they're using AI

    I'm not going to solve the "what is a photo?" question in one afternoon. The truth is that some photos are meant to represent the real world, and some are just supposed to look aesthetically pleasing. I get it. If AI can help a photo look more visually appealing, even if it's not fully true-to-life, I can see the appeal. That doesn't erase any potential ethical concerns about where training data comes from, so I'd still ask you to be diligent with these tools. But I know that pointing at a photo and saying "that never actually happened" isn't a rhetorical magic bullet.

    What worries me is how casually Google's new AI features are being implemented, as if they're identical to traditional computational photography, which still always uses your actual image as a base, rather than making stuff up. As someone who's still wary of AI, seeing AI image generation disguised as 100x zoom immediately raises my alarm bells. Not everyone pays attention to these tools the way I do, and it's reasonable for them to expect that these features do what they say on the tin, rather than introducing the risk of hallucination.

    In other words, people should know when AI is being used in their photos, so that they can be confident when their shots are realistic, and when they're not. Referring to zoom using a telephoto lens as 5x zoom and zoom that layers AI over a bunch of pixels as 100x zoom doesn't do that, and neither does building a natural-language editor into your Photos app that doesn't clearly tell you when it's using generative AI and when it isn't.

    Google's aware of this problem. All photos taken on the Pixel 10 now come with C2PA content credentials built in, which will say in the photo's metadata whether AI was used. But when's the last time you actually checked a photo's metadata? Tools like Ask to Edit are clearly being made to be foolproof, and expecting users to manually scrub through each of their photos to see which ones were edited with AI and which weren't isn't realistic, especially if we're making tools that are specifically supposed to let users take fewer steps before getting their final photo.

    It's normal for someone to expect AI will be used when they open the Gemini app, but including it in previously non-AI tools like the Camera app needs more fanfare than quiet C2PA credentials and one vague sentence in a press release. Notifying a user when they're about to use AI should happen before they take their photo, or before they make their edit.
    It shouldn't be quietly marked down for them to find later, if they choose to go looking for it. Other AI photo tools, like those from Adobe, already do this, through a simple watermark applied to any project using AI generation. While I won't tell you what to think about AI-generated images overall, I will say that you shouldn't be put in a position where you're making one by accident. Of Google's AI camera innovations, I'd say Camera Coach is the only one that gets this right. For a big new launch from the creator of Android, an ecosystem Google proudly touted as "open" during this year's Made by Google, a one-out-of-three hit rate on transparency isn't what I'd expect.
  • Ayn reveals a Nintendo DS-style handheld that comes in the classic Game Boy Color purple
    www.engadget.com
    Ayn added more than just a touch of nostalgia with its upcoming dual-screen handheld that gives us modern-day Nintendo DS vibes. After teasing the device in a YouTube video earlier this week, Ayn dropped the full spec sheet, price range and release date for its Thor handhelds. The Thor Lite base model will start at $249 for preorder pricing, but you can opt for the top-of-the-line Thor Max model that goes for $429. Besides the clear purple colorway, the Ayn Thor will come in black, white and a rainbow version that colors its buttons like the SNES.

    Ayn built all of its Thor models with a primary six-inch AMOLED display with a 120Hz refresh rate and a 1,920 x 1,080 resolution, while the secondary 3.92-inch AMOLED screen will have a 60Hz refresh rate and a smaller 1,240 x 1,080 resolution. The Thor Lite maxes out at 8GB of memory and 128GB of storage, but you can upgrade to 16GB of memory and 1TB of storage with the Thor Max. The Pro and Max models will pack a Snapdragon 8 Gen 2 processor, while the Lite will use the less powerful Snapdragon 865.

    Outside of the spec differences, all Thor models will run on a 6,000 mAh battery and Android 13. The dual-screen handheld will have video output capabilities, a USB-C port, a 3.5mm audio jack and a TF card slot, and it can connect via Wi-Fi and Bluetooth. As with all foldable devices, the hinge is often a point of failure, so Ayn built the Thor with a reinforced hinge, along with an active cooling system and Hall effect joysticks.

    Ayn isn't the only handheld maker getting into dual-screen devices. The market was previously dominated by the Ayaneo Flip DS, which currently starts at $1,139, but Ayaneo has announced a more affordable dual-screen handheld called the Pocket DS. Retroid, which released the Flip 2 earlier this year, is also selling an add-on accessory that turns some of its other products into dual-screen handhelds. As for the Ayn Thor, preorders start August 25 at 10:30PM ET, with the first shipments expected in mid-October.

    This article originally appeared on Engadget at https://www.engadget.com/gaming/ayn-reveals-a-nintendo-ds-style-handheld-that-comes-in-the-classic-game-boy-color-purple-194416424.html?src=rss
  • Prusa CEO declares "open hardware desktop 3D printing is dead" - China blamed for causing the beginning of the end
    www.techradar.com
    Prusa declares open hardware 3D printing dead, blaming China's subsidies, patent imbalances, and disputes for transforming collaborative innovation into a competitive and increasingly restrictive global industry.
  • Meta to unveil Hypernova smart glasses with a display, wristband at Connect next month
    www.cnbc.com
    Meta's glasses are codenamed Hypernova and will include a small display in the right lens of the device, people familiar with the matter told CNBC.
  • One Week to Go: Unleash Creativity with Character Creator 5 & ActorMIXER
    beforesandafters.com
    See the official release preview video here. Launching August 27, 2025, Character Creator 5 arrives with more polish, performance, and creative flexibility than ever. At its heart is ActorMIXER, a groundbreaking tool for custom and random character generation: mix heads, facial components, or full bodies, and seamlessly blend stylized with realistic designs. Dive in, ignite your imagination, and explore limitless character possibilities.

    Curious about what's coming next in CC5? Discover all the upcoming features here. Join the prelaunch and claim your bonuses: https://www.reallusion.com/character-creator/cc5-prelaunch.html And watch the special preview video, below.

    Brought to you by Reallusion: This article is part of the befores & afters VFX Insight series. If you'd like to promote your VFX/animation/CG tech or service, you can find out more about the VFX Insight series here.

    The post One Week to Go: Unleash Creativity with Character Creator 5 & ActorMIXER appeared first on befores & afters.
  • [Event] March of Robots Challenge
    blog.cara.app
    About the Event
    Beep Boop! We're already in the third month of the new year, and with that we're starting our new challenge: March of Robots! Show us the machinery you can come up with, no matter if big or small, by using #marchofrobots in your Cara posts. You can either do one prompt per day, or combine multiple prompts for a multi-day artwork! This challenge is as modular as the different parts a robot could contain are. We are excited to see what you come up with, and we hope you have fun creating and building your own constructs~ Happy creating!

    How to join March of Robots on Cara
    1. Participate by posting new work you've made by following the March of Robots list.
    2. Mention #marchofrobots in the description when posting!

    Cara Theme List
    We encourage everyone to take this chance to explore new ideas, experiment with the nature of your art, and take a chance at growing your art skills within our budding theme. Most of all, we are excited to see everyone having fun while creating and nurturing each other's works in the community with discussions and conversations. Have fun!

    - The Cara Team
    cara.app | twitter | instagram | buy cara a coffee