• Mega Man Legends 2 Still Suffers From A 25-Year-Old Cliffhanger
    www.gamespot.com
    Mega Man Legends 2 celebrates its 25th anniversary today, October 25, 2025. Below, we look back at how its unresolved cliffhanger ending still overshadows its memory. Mega Man Legends 2 is an unfinished promise. The anime-inspired adventure series was beloved by fans of the blue bomber for its inventive, Zelda-like spin on the classic formula. It was filled with lovable characters who have endured long past the series itself, like Tron Bonne, the Servbots, and the hero, Mega Man Volnutt. But like a serialized cartoon series, the second game in the series ended with a whopper of a cliffhanger--and then never returned to it. The legacy of Legends for the last 25 years has been fans waiting for a resolution that never came. Spoilers for the Mega Man Legends series follow. Continue reading at GameSpot.
  • Best Beginner Tips & Tricks for Vampire: The Masquerade – Bloodlines 2
    gamerant.com
    Vampire: The Masquerade – Bloodlines 2 is an action RPG that puts players in the shoes of Phyre, a vampire whose history long predates the game's story. Players will hunt and prowl around the city of Seattle, which presents challenges of its own, even for a legendary vampire with cool abilities. This guide shares some of the best tips and tricks to help players with various in-game mechanics and make the overall experience a lot more fun.
  • Open Source AI Week: How Developers and Contributors Are Advancing AI Innovation
    blogs.nvidia.com
    As Open Source AI Week comes to a close, we're celebrating the innovation, collaboration and community driving open-source AI forward. Catch up on the highlights and stay tuned for more announcements coming next week at NVIDIA GTC Washington, D.C.
Wrapping Up a Week of Open-Source Momentum
From the stages of the PyTorch Conference to workshops across Open Source AI Week, this week spotlighted the creativity and progress defining the future of open AI. Here are some highlights from the event:
- Honoring open-source contributions: Jonathan Dekhtiar, senior deep learning framework engineer at NVIDIA, received the PyTorch Contributor Award for his key role in designing the release mechanisms and packaging solutions for Python software and libraries that enable GPU-accelerated computing.
- CEO of Modular visits the NVIDIA booth: Chris Lattner, CEO of Modular and founder and chief architect of the open-source LLVM Compiler Infrastructure project, picks up the NVIDIA DGX Spark.
- Seven questions with a founding researcher at fast.ai: Jeremy Howard, founding researcher at fast.ai and advocate for accessible deep learning, shares his insights on the future of open-source AI.
In his keynote at the PyTorch Conference, Howard also highlighted the growing strength of open-source communities, recognizing NVIDIA for its leadership in advancing openly available, high-performing AI models. "The one company, actually, that has stood out, head and shoulders above the others, and that is two," he said. "One is Meta, the creators of PyTorch. The other is NVIDIA, who, just in recent months, has created some of the world's best models, and they are open source, and they are openly licensed."
vLLM Adds Upstream Support for NVIDIA Nemotron Models
Open-source innovation is accelerating. NVIDIA and the vLLM team are partnering to add vLLM upstream support for NVIDIA Nemotron models, transforming open large language model serving with lightning-fast performance, efficient scaling and simplified deployment across NVIDIA GPUs. vLLM's optimized inference engine empowers developers to run Nemotron models like the new Nemotron Nano 2, a highly efficient small language reasoning model with a hybrid Transformer-Mamba architecture and a configurable thinking budget. (A minimal serving sketch appears at the end of this item.) Learn more about how vLLM is accelerating open model innovation.
NVIDIA Expands Open Access to Nemotron RAG Models
NVIDIA is making eight NVIDIA Nemotron RAG models openly available on Hugging Face, expanding access beyond research to include the full suite of commercial models. This release gives developers a wider range of tools to build retrieval-augmented generation (RAG) systems, improve search and ranking accuracy, and extract structured data from complex documents. The newly released models include Llama-Embed-Nemotron-8B, which provides multilingual text embeddings built on Llama 3.1, and Omni-Embed-Nemotron-3B, which supports cross-modal retrieval for text, images, audio and video. Developers can also access six production-grade models for text embedding, reranking and PDF data extraction, key components for real-world retrieval and document intelligence applications. With these open-source models, developers, researchers and organizations can more easily integrate and experiment with RAG-based systems. Developers can get started with Nemotron RAG on Hugging Face.
Building and Training AI Models With the Latest Open Datasets
NVIDIA is expanding access to high-quality open datasets that help developers overcome the challenges of large-scale data collection and focus on building advanced AI systems. The latest release includes a collection of Nemotron-Personas datasets for Sovereign AI. Each dataset is fully synthetic and grounded in real-world demographic, geographic and cultural data, with no personally identifiable information. The growing collection, which features personas from the U.S., Japan and India, enables model builders to design AI agents and systems that reflect the linguistic, social and contextual nuance of the nations they serve. NVIDIA earlier this year released the NVIDIA Physical AI Open Datasets on Hugging Face, featuring more than 7 million robotics trajectories and 1,000 OpenUSD SimReady assets. Downloaded more than 6 million times, the datasets combine real-world and synthetic data from the NVIDIA Cosmos, Isaac, DRIVE and Metropolis platforms to kickstart physical AI development.
NVIDIA Inception Startups Highlight AI Innovation
At the PyTorch Conference's Startup Showcase, 11 startups, including members of the NVIDIA Inception program, are sharing their work developing practical AI applications and connecting with investors, potential customers and peers. Runhouse, an AI infrastructure startup optimizing model deployment and orchestration, was crowned the 2025 PyTorch Startup Showcase Award Winner. The Community Choice Award was presented to CuraVoice, with CEO Sakhi Patel, CTO Shrey Modi and advisor Rahul Vishwakarma accepting the award on behalf of the team. CuraVoice provides an AI-powered voice simulation platform for healthcare students and professionals, powered by NVIDIA Riva for speech recognition and text-to-speech and NVIDIA NeMo for conversational AI models, offering interactive exercises and adaptive feedback to improve patient communication skills.
Shrey Modi, CTO of CuraVoice, accepts the PyTorch Startup Showcase Community Choice Award.
In addition to CuraVoice, other Inception members, including Backfield AI, Graphsignal, Okahu AI, Snapshot AI and XOR, were featured participants in the Startup Showcase. Snapshot AI delivers actionable, real-time insights to engineering teams using recursive retrieval-augmented generation (RAG), transformers and multimodal AI. The company's platform taps into the NVIDIA CUDA Toolkit to deliver high-performance analysis and rapid insights at scale. XOR is a cybersecurity startup offering AI agents that automatically fix vulnerabilities in the supply chain of other AIs. The company helps enterprises eliminate vulnerabilities while complying with regulatory requirements. XOR's agentic technology uses NVIDIA cuVS vector search for indexing, real-time retrieval and code analysis.
The company also uses GPU-based machine learning to train models to detect hidden backdoor patterns and prioritize high-value security outcomes.
From left to right: Dmitri Melikyan (Graphsignal, Inc.), Tobias Heldt (XOR), Youssef Harkati (BrightOnLABS), Vidhi Kothari (Seer Systems), Jonah Sargent (Node One) and Scott Suchyta (NVIDIA) at the Startup Showcase.
Highlights From Open Source AI Week
Attendees of Open Source AI Week are getting a peek at the latest advancements and creative projects that are shaping the future of open technology. Here's a look at what's happening onsite:
- The world's smallest AI supercomputer: NVIDIA DGX Spark represents the cutting edge of AI computing hardware for enterprise and research applications.
- Humanoids and robot dogs, up close: Unitree robots are on display, captivating attendees with advanced mobility powered by the latest robotics technology.
- Why open source is important: Learn how it can empower developers to build stronger communities, iterate on features, and seamlessly integrate the best of open-source AI.
Accelerating AI Research Through Open Models
A study from the Center for Security and Emerging Technology (CSET) published today shows how access to open model weights unlocks more opportunities for experimentation, customization and collaboration across the global research community. The report outlines seven high-impact research use cases where open models are making a difference, including fine-tuning, continued pretraining, model compression and interpretability. With access to weights, developers can adapt models for new domains, explore new architectures and extend functionality to meet their specific needs. This also supports trust and reproducibility: when teams can run experiments on their own hardware, share updates and revisit earlier versions, they gain control and confidence in their results. Additionally, the study found that nearly all open model users share their data, weights and code, building a fast-growing culture of collaboration. This open exchange of tools and knowledge strengthens partnerships between academia, startups and enterprises, facilitating innovation. NVIDIA is committed to empowering the research community through the NVIDIA Nemotron family of open models, featuring not just open weights, but also pretraining and post-training datasets, detailed training recipes, and research papers that share the latest breakthroughs. Read the full CSET study to learn how open models are helping the AI community move forward.
Advancing Embodied Intelligence Through Open-Source Innovation
At the PyTorch Conference, Jim Fan, director of robotics and distinguished research scientist at NVIDIA, discussed the Physical Turing Test, a way of measuring the performance of intelligent machines in the physical world. With conversational AI now capable of fluent, lifelike communication, Fan noted that the next challenge is enabling machines to act with similar naturalism. The Physical Turing Test asks: can an intelligent machine perform a real-world task so fluidly that a human cannot tell whether a person or a robot completed it? Fan highlighted that progress in embodied AI and physical AI depends on generating large amounts of diverse data and on access to open robot foundation models and simulation frameworks, and he walked through a unified workflow for developing embodied AI. With synthetic data workflows like NVIDIA Isaac GR00T-Dreams, built on NVIDIA Cosmos world foundation models, developers can generate virtual worlds from images and prompts, speeding the creation of large sets of diverse and physically accurate data. That data can then be used to post-train NVIDIA Isaac GR00T N open foundation models for generalized humanoid robot reasoning and skills. But before the models are deployed in the real world, these new robot skills need to be tested in simulation. Open simulation and learning frameworks such as NVIDIA Isaac Sim and Isaac Lab allow robots to practice countless times across millions of virtual environments before operating in the real world, dramatically accelerating learning and deployment cycles. Plus, with Newton, an open-source, differentiable physics engine built on NVIDIA Warp and OpenUSD, developers can bring high-fidelity simulation to complex robotic dynamics such as motion, balance and contact, reducing the simulation-to-real gap. This accelerates the creation of physically capable AI systems that learn faster, perform more safely and operate effectively in real-world environments. However, scaling embodied intelligence isn't just about compute; it's about access. Fan reaffirmed NVIDIA's commitment to open source, emphasizing how the company's frameworks and foundation models are shared to empower developers and researchers globally. Developers can get started with NVIDIA's open embodied and physical AI models on Hugging Face.
Llama-Embed-Nemotron-8B Ranks Among Top Open Models for Multilingual Retrieval
NVIDIA's Llama-Embed-Nemotron-8B model has been recognized as the top open and portable model on the Multilingual Text Embedding Benchmark leaderboard. Built on the meta-llama/Llama-3.1-8B architecture, Llama-Embed-Nemotron-8B is a research text embedding model that converts text into 4,096-dimensional vector representations. Designed for flexibility, it supports a wide range of use cases, including retrieval, reranking, semantic similarity and classification across more than 1,000 languages. Trained on a diverse collection of 16 million query-document pairs, half from public sources and half synthetically generated, the model benefits from refined data generation techniques, hard-negative mining and model-merging approaches that contribute to its broad generalization capabilities. This result builds on NVIDIA's ongoing research in open, high-performing AI models. Following earlier leaderboard recognition for the Llama-NeMoRetriever-ColEmbed model, the success of Llama-Embed-Nemotron-8B highlights the value of openness, transparency and collaboration in advancing AI for the developer community. Check out Llama-Embed-Nemotron-8B on Hugging Face, and learn more about the model, including architectural highlights, training methodology and performance evaluation. (A minimal embedding sketch appears at the end of this item.)
What Open Source Teaches Us About Making AI Better
Open models are shaping the future of AI, enabling developers, enterprises and governments to innovate with transparency, customization and trust.
In the latest episode of the NVIDIA AI Podcast, NVIDIA's Bryan Catanzaro and Jonathan Cohen discuss how open models, datasets and research are laying the foundation for shared progress across the AI ecosystem. The NVIDIA Nemotron family of open models represents a full-stack approach to AI development, connecting model design to the underlying hardware and software that power it. By releasing Nemotron models, data and training methodologies openly, NVIDIA aims to help others refine, adapt and build upon its work, resulting in a faster exchange of ideas and more efficient systems. "When we as a community come together, contributing ideas, data and models, we all move faster," said Catanzaro in the episode. "Open technologies make that possible." There's more happening this week at Open Source AI Week, including the start of the PyTorch Conference, bringing together developers, researchers and innovators pushing the boundaries of open AI. Attendees can tune in to the special keynote address by Jim Fan, director of robotics and distinguished research scientist at NVIDIA, to hear the latest advancements in robotics, from simulation and synthetic data to accelerated computing. The keynote, titled "The Physical Turing Test: Solving General Purpose Robotics," will take place on Wednesday, Oct. 22, from 9:50-10:05 a.m. PT.
Andrej Karpathy's Nanochat Teaches Developers How to Train LLMs in Four Hours
Computer scientist Andrej Karpathy recently introduced Nanochat, calling it "the best ChatGPT that $100 can buy." Nanochat is an open-source, full-stack large language model (LLM) implementation built for transparency and experimentation. In about 8,000 lines of minimal, dependency-light code, Nanochat runs the entire LLM pipeline, from tokenization and pretraining to fine-tuning, inference and chat, all through a simple web user interface. NVIDIA is supporting Karpathy's open-source Nanochat project by releasing two NVIDIA Launchables, making it easy to deploy and experiment with Nanochat across various NVIDIA GPUs. With NVIDIA Launchables, developers can train and interact with their own conversational model in hours with a single click. The Launchables dynamically support different-sized GPUs, including NVIDIA H100 and L40S GPUs on various clouds, without the need for modification. They also automatically work on any eight-GPU instance on NVIDIA Brev, so developers can get compute access immediately. The first 10 users to deploy these Launchables will also receive free compute access to NVIDIA H100 or L40S GPUs. Start training with Nanochat by deploying a Launchable: Nanochat Speedrun on NVIDIA H100 or Nanochat Speedrun on NVIDIA L40S.
Andrej Karpathy's Next Experiments Begin With NVIDIA DGX Spark
Today, Karpathy received an NVIDIA DGX Spark, the world's smallest AI supercomputer, designed to bring the power of Blackwell right to a developer's desktop. With up to a petaflop of AI processing power and 128GB of unified memory in a compact form factor, DGX Spark empowers innovators like Karpathy to experiment, fine-tune and run massive models locally.
Building the Future of AI With PyTorch and NVIDIA
PyTorch, the fastest-growing AI framework, derives its performance from the NVIDIA CUDA platform and uses the Python programming language to unlock developer productivity. This year, NVIDIA added Python as a first-class language to the CUDA platform, giving the PyTorch developer community greater access to CUDA. CUDA Python includes key components that make GPU acceleration in Python easier than ever, with built-in support for kernel fusion, extension module integration and simplified packaging for fast deployment. Following PyTorch's open collaboration model, CUDA Python is available on GitHub and PyPI. According to PyPI Stats, PyTorch averaged over two million daily downloads, peaking at 2,303,217 on October 14, and had 65 million total downloads last month. Every month, developers worldwide download hundreds of millions of NVIDIA libraries, including CUDA, cuDNN, cuBLAS and CUTLASS, mostly within Python and PyTorch environments. CUDA Python provides nvmath-python, a new library that acts as the bridge between Python code and these highly optimized GPU libraries. Plus, kernel enhancements and support for next-generation frameworks make NVIDIA accelerated computing more efficient, adaptable and widely accessible. NVIDIA maintains a long-standing collaboration with the PyTorch community through open-source contributions and technical leadership, as well as by sponsoring and participating in community events and activations. At PyTorch Conference 2025 in San Francisco, NVIDIA will host a keynote address, five technical sessions and nine poster presentations. NVIDIA's on the ground at Open Source AI Week. Stay tuned for a celebration highlighting the spirit of innovation, collaboration and community that drives open-source AI forward. Follow NVIDIA AI Developer on social channels for additional news and insights.
NVIDIA Spotlights Open Source Innovation
Open Source AI Week kicks off on Monday with a series of hackathons, workshops and meetups spotlighting the latest advances in AI, machine learning and open-source innovation. The event brings together leading organizations, researchers and open-source communities to share knowledge, collaborate on tools and explore how openness accelerates AI development. NVIDIA continues to expand access to advanced AI innovation by providing open-source tools, models and datasets designed to empower developers. With more than 1,000 open-source tools on NVIDIA GitHub repositories and over 500 models and 100 datasets in the NVIDIA Hugging Face collections, NVIDIA is accelerating the pace of open, collaborative AI development. Over the past year, NVIDIA has become the top contributor in Hugging Face repositories, reflecting a deep commitment to sharing models, frameworks and research that empower the community. (Video: https://blogs.nvidia.com/wp-content/uploads/2025/10/1016.mp4) Openly available models, tools and datasets are essential to driving innovation and progress. By empowering anyone to use, modify and share technology, open source fosters transparency and accelerates discovery, fueling breakthroughs that benefit industry and communities alike. That's why NVIDIA is committed to supporting the open-source ecosystem. We're on the ground all week; stay tuned for a celebration highlighting the spirit of innovation, collaboration and community that drives open-source AI forward, with the PyTorch Conference serving as the flagship event.
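Relating to the vLLM-Nemotron passage above: here is a minimal sketch of what offline serving of a Nemotron checkpoint with vLLM's Python API can look like. It is illustrative only; the blog does not prescribe this code, and the Hugging Face model ID shown is an assumption, so substitute whichever Nemotron checkpoint you actually intend to run.
```python
# Minimal sketch: serving an NVIDIA Nemotron model with vLLM's offline API.
# The model ID below is an assumption for illustration; check Hugging Face
# for the exact Nemotron checkpoint and licence terms before running this.
from vllm import LLM, SamplingParams

llm = LLM(model="nvidia/NVIDIA-Nemotron-Nano-9B-v2")  # downloads weights from Hugging Face
params = SamplingParams(temperature=0.2, max_tokens=256)

prompts = ["Summarize why open model weights matter for researchers."]
for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```
In practice the same engine can also be exposed as an OpenAI-compatible server (`vllm serve <model>`), which is how most RAG and agent stacks would consume it.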
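And for the Nemotron RAG and Llama-Embed-Nemotron-8B passages, a minimal sketch of computing text embeddings with a Hugging Face model through transformers. The repo ID, the generic mean pooling and the lack of any query prefix are assumptions for illustration; the model card is the authority on how a given embedding model should actually be loaded, prompted and pooled.
```python
# Minimal sketch: text embeddings with an open embedding model from Hugging Face.
# The repo ID and the mean-pooling strategy are assumptions for illustration;
# follow the model card for the real pooling, prefixes and dtype recommendations.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "nvidia/llama-embed-nemotron-8b"  # assumed ID for illustration
tokenizer = AutoTokenizer.from_pretrained(model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers often lack a pad token
model = AutoModel.from_pretrained(model_id, torch_dtype=torch.bfloat16)

texts = ["How do I file a warranty claim?", "Warranty claims are filed online."]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state          # (batch, seq_len, dim)
    mask = batch["attention_mask"].unsqueeze(-1)       # zero out padding positions
    emb = (hidden * mask).sum(dim=1) / mask.sum(dim=1) # mean pooling over tokens
    emb = torch.nn.functional.normalize(emb, dim=-1)   # unit-length vectors

print(emb.shape)                 # e.g. torch.Size([2, 4096]) for a 4,096-dim model
print((emb[0] @ emb[1]).item())  # cosine similarity between the two texts
```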
  • DIGITAL DOMAIN DELIVERS THE DEVIL IN THE DETAILS FOR THE CONJURING: LAST RITES
    vfxvoice.com
    By TREVOR HOGG. Images courtesy of Digital Domain and Warner Bros. Pictures.
Bringing the paranormal investigative stories of Ed and Lorraine Warren full circle is the haunting that started everything, reemerging in The Conjuring: Last Rites under the direction of Michael Chaves and starring Vera Farmiga, Patrick Wilson, Mia Thomlinson and Ben Hardy. Brought on to heighten the supernatural horror was Scott Edelstein, who served as Production Visual Effects Supervisor and hired his colleagues at Digital Domain to create 425 shots, which included making the 12-foot-tall Annabelle doll, crafting a disturbing smile for Abigail Arnold, producing the haunted Conjuring Mirror, destroying a hallway, and recreating the small mill-town setting. "I know Scott, so in that sense it made things a lot easier and more comfortable from the get-go because you know who you're talking to and how to interpret the things he says," says Alex Millet, Visual Effects Supervisor at Digital Domain. "The first step was to try to not get scared watching the previous movies and absorb as much of that look, aesthetic and atmosphere, and to respect that in the new movie." An effort was made to upgrade effects while also honoring what had already been established. "We're choosing to do it in 3D for this movie, and there's a lot of things that don't work anymore. But we found that we were able to recreate the 2D look with our 3D approach."
Raising the level of difficulty for the environment work was a long oner. "There was one plate in the street with the character coming out of the car. The camera zooms out and starts to fly above the house, looks around in the street, then comes back down, bursts through the door, and once we get into the house, we get into the next plate photography of the actress in there," Millet explains. "The way we decided to work with this was to rebuild the entire street in CG because we found that was easier for us than trying to transition the plate street with our street and having to have every single detail of the street perfectly match one-to-one. We kept the car and character from the plate. As the camera pulls back, we transition toward the CG car and the digital double of the actor. Then we're into a full CG version of the shot. We go up to the house, and most of the street is actually 3D because everything needed to work with parallax. Only the far background extension was a matte painting. Then we crash down. We're still full CG. We have rain and atmosphere. All of that is CG. We crash through the door, and once the door opens, we have a few frames of CG inside the house to help us with the transition. After that, we're back to the plate."
The supernatural aspect of the Annabelle doll becoming 12 feet tall gave Digital Domain the freedom to do whatever looked best without necessarily having to be confined to reality.
Going into the realm of the supernatural and hallucinations is the appearance of the 12-foot-tall version of the Annabelle doll. "There's a lot of things we started to think about, like, what does the dress look like when it's 12 feet tall? Is it a much thicker material? Do the wrinkles work differently because the material is that big, or is the material the same thickness but way bigger and looks like a curtain? The supernatural aspect of it gave us the freedom to do whatever we thought looked best without necessarily having to be like, 'This is what it would be like if you build a dress that was 12 feet tall, and it might not look super great, but that's actually how it would work and react.' We didn't have to worry too much about that, and we were able to do what everyone thought was the best look." The transformation went through several iterations. The director wanted the doll to read as if the transformation was a painful process that evolved across the shots. "He worked with the postvis department to figure out the timing, and then we took over. Once the animation started, we added that chunkiness you see in the growth."
The hallway was modeled with the destruction in mind.
Conveying a disturbing smile for Abigail Arnold was a fine line between being creepy or silly. "We did a bunch of iterations to find the smile that worked the best," Millet remarks. "We had some versions where it was definitely not creepy. It would make you laugh as soon as you see it. It was a combination of not just the mouth, but the look in the eyes and the way all of the little muscles in the face move. Everyone is an expert because we all look at faces every day, so it's a much less forgiving thing to do than pretty much any other aspect of what we do in visual effects. Michael Chaves did concept art to illustrate what he wanted. That was incredibly helpful, because we had an exact target to match. The goal for us was to try to match his concept and give it life." The smile was exaggerated then rolled back. "We did a bunch of iterations with a different range and smile, but all going much further than what we knew was needed. That helped us not to baby-step the process, because the last thing you want is, 'Oh, a little bit more,' and you spend weeks doing that. We went way out in terms of how wide that smile was and showed all of those versions. Once the director was happy with one of them, we actually selected a range to give us an idea of what to work within, and then we made that range as good as we could."
A signature terrifying prop for the franchise is the Conjuring Mirror. "The work for us was adding the crack in the mirror throughout a lot of different sequences in the movie," Millet explains. "First, it was establishing the look of that crack and getting it to read the same across the various shots. The thing with the crack was that in some shots, it's going to look awesome and perfect, but in a different lighting condition you don't get the same highlights and, incidentally, the shape feels different even though it's the same. The sequence in the hallway was a whole different thing. We had to completely rebuild the mirror in CG so we could animate it. On set, they tried to avoid any reflection problems, but with a big mirror you can't avoid it. There were a bunch of shots where we had to get rid of reflections showing the part of the set that wasn't built, or crew members, or a giant camera rig. The work for us was to rebuild the entire hallway in CG so we could take over any reflection and rebuild any reflection that we needed to do. The hallway gets destroyed. We built the entire hallway then animated the CG mirror. Something we didn't expect was the mirror on set was shorter than the actual mirror we needed to build, so getting the mirror to reach everywhere in the walls was a fun challenge for animation and effects. Once that was figured out, we added some extra geometry behind the walls and under the floor because when we destroy everything, we will see stuff there. Then we moved forward with our effects destruction, so there were different layers of drywall, studs, insulation and smoke. That was cool."
The factory had to be seen in every shot that takes place in the mill town. A LEGO approach was adopted when constructing the various houses that populate a neighborhood. Set dressing, like wet asphalt and lights in the distance, provided the desired scope and mood for the nighttime shots. Practical houses were constructed, which helped ground the digital extensions. The crack in the Conjuring Mirror had to appear as if there was no point of impact. The Conjuring franchise is based on the exploits of paranormal investigators Ed and Lorraine Warren, portrayed by Patrick Wilson and Vera Farmiga.
Construction of the small mill town was tackled as if it was a LEGO set. "We wanted to build a house in such a way that floors, colors and roofs could be easily changed," Millet remarks. "A lot of our houses were built in three layers: ground floor, first floor and roof. The houses were built in a modular way so we could quickly populate the street, create a layout, and show something that gives you a feel for the street versus iterating on different houses." Geography had to be kept in mind to avoid discontinuity. "When the characters drive on the bridge, we needed to make sure that it feels like they're going in the right direction. We had to have the factory looming above the town at all times, so it had to be in such a place that we would always see it when we're at street level and in those shots where we go up a little bit. The set dressing can be unlimited. We had birds, cars, every plant has wind in it, and digital people looking around. There are a lot of things going on, and as soon as you have to do that, it immediately gets rid of any 2D or matte painting approach. You have to build and animate it. But we also didn't want to take away from the action in the plate." Millet concludes, "Overall, this was a small project, but ambitious with the work that had to be done within the given timeline. It was a great project to work on, and the challenges were interesting."
  • Share of the Week: Ghost of Yōtei Landscapes
    blog.playstation.com
    Last week, we asked you to share beautiful landscapes in Ghost of Yōtei using #PSshare #PSBlog. Here are this week's highlights:
- ForgottenJasmin shares a forest of red-leaved trees framing a castle fortress
- SheikhSadi80 shares a sea cliffside view
- DeathStalker131 shares a mountain shrine view
- valeria_ame shares horses running past Mount Yōtei at sunset
- call_me_xavii shares Atsu strumming her shamisen in front of her home's ginkgo tree
- Leumir4 shares a shrine and Mount Yōtei framed by cherry blossoms
Search #PSshare #PSBlog on X or Instagram to see more entries to this week's theme, or be inspired by other great games featuring Photo Mode. Want to be featured in the next Share of the Week?
THEME: Spooky
SUBMIT BY: 11:59 PM PT on October 29, 2025
Next week we're ready for a fright. Share moments from the spooky game of your choice using #PSshare #PSBlog for a chance to be featured. Need some inspiration? Explore PlayStation Plus October chills and thrills like Silent Hill 2, Alan Wake 2, Until Dawn, V Rising, and more.
  • Spooky Express is a Halloween delight, and you can try it for free
    www.polygon.com
    The Halloween season can be a bit of a challenge for gamers who want the vibes of the season minus the gore. If you try to find recommendations for spooky games fit for October, you'll likely come across list after list of terrifying horror games with slasher levels of violence. That's not exactly helpful if you're looking for something a little more family-friendly. Sometimes you just want cute vampires who aren't trying to suck your blood.
  • AI In UX: Achieve More With Less
    smashingmagazine.com
    I have made a lot of mistakes with AI over the past couple of years. I have wasted hours trying to get it to do things it simply cannot do. I have fed it terrible prompts and received terrible output. And I have definitely spent more time fighting with it than I care to admit. But I have also discovered that when you stop treating AI like magic and start treating it like what it actually is (a very enthusiastic intern with zero life experience), things start to make more sense. Let me share what I have learned from working with AI on real client projects across user research, design, development, and content creation.
How To Work With AI
Here is the mental model that has been most helpful for me. Treat AI like an intern with zero experience. An intern fresh out of university has lots of enthusiasm and qualifications, but no real-world experience. You would not trust them to do anything unsupervised. You would explain tasks in detail. You would expect to review their work multiple times. You would give feedback and ask them to try again. This is exactly how you should work with AI.
The Basics Of Prompting
I am not going to pretend to be an expert. I have just spent way too much time playing with this stuff because I like anything shiny and new. But here is what works for me.
Define the role. Start with something like "Act as a user researcher" or "Act as a copywriter." This gives the AI context for how to respond.
Break it into steps. Do not just say "Analyze these interview transcripts." Instead, say "I want you to complete the following steps. One, identify recurring themes. Two, look for questions users are trying to answer. Three, note any objections that come up. Four, output a summary of each."
Define success. Tell it what good looks like. "I am looking for a report that gives a clear indication of recurring themes and questions in a format I can send to stakeholders. Do not use research terminology because they will not understand it."
Make it think. Tell it to think deeply about its approach before responding. Get it to create a way to test for success (known as a rubric) and iterate on its work until it passes that test.
Here is a real prompt I use for online research: "Act as a user researcher. I would like you to carry out deep research online into [brand name]. In particular, I would like you to focus on what people are saying about the brand, what the overall sentiment is, what questions people have, and what objections people mention. The goal is to create a detailed report that helps me better understand the brand perception. Think deeply about your approach before carrying out the research. Create a rubric for the report to ensure it is as useful as possible. Keep iterating until the report scores extremely high on the rubric. Only then, output the report."
That second paragraph (the bit about thinking deeply and creating a rubric) is something I basically copy and paste into everything now. It is a universal way to get better output. (A minimal sketch of this pattern, sent through an API rather than the chat interface, appears at the end of this article.)
Learn When To Trust It
You should never fully trust AI. Just like you would never fully trust an intern you have only just met. To begin with, double-check absolutely everything. Over time, you will get a sense of when it is losing its way. You will spot the patterns. You will know when to start a fresh conversation because the current one has gone off the rails. But even after months of working with it daily, I still check its work. I still challenge it. I still make it cite sources and explain its reasoning. The key is that even with all that checking, it is still faster than doing it yourself. Much faster.
Using AI For User Research
This is where AI has genuinely transformed my work. I use it constantly for five main things.
Online Research
I love AI for this. I can ask it to go and research a brand online. What people are saying about it, what questions they have, what they like, and what frustrates them. Then do the same for competitors and compare. This would have taken me days of trawling through social media and review sites. Now it takes minutes. I recently did this for an e-commerce client. I wanted to understand what annoyed people about the brand and what they loved. I got detailed insights that shaped the entire conversion optimization strategy. All from one prompt.
Analyzing Interviews And Surveys
I used to avoid open-ended questions in surveys. They were such a pain to review. Now I use them all the time because AI can analyze hundreds of text responses in seconds. For interviews, I upload the transcripts and ask it to identify recurring themes, questions, and requests. I always get it to quote directly from the transcripts so I can verify it is not making things up. The quality is good. Really good. As long as you give it clear instructions about what you want.
Making Sense Of Data
I am terrible with spreadsheets. Put me in front of a person and I can understand them. Put me in front of data, and my eyes glaze over. AI has changed that. I upload spreadsheets to ChatGPT and just ask questions. What patterns do you see? Can you reformat this? Show me this data in a different way. Microsoft Clarity now has Copilot built in, so you can ask it questions about your analytics data. Triple Whale does the same for e-commerce sites. These tools are game changers if you struggle with data like I do.
Research Projects
This is probably my favorite technique. In ChatGPT and Claude, you can create projects. In other tools, they are called spaces. Think of them as self-contained folders where everything you put in is available to every conversation in that project. When I start working with a new client, I create a project and throw everything in. Old user research. Personas. Survey results. Interview transcripts. Documentation. Background information. Site copy. Anything I can find. Then I give it custom instructions. Here is one I use for my own business: "Act as a business consultant and marketing strategy expert with good copywriting skills. Your role is to help me define the future of my UX consultant business and better articulate it, especially via my website. When I ask for your help, ask questions to improve your answers and challenge my assumptions where appropriate."
I have even uploaded a virtual board of advisors (people I wish I had on my board) and asked AI to research how they think and respond as they would. Now I have this project that knows everything about my business. I can ask it questions. Get it to review my work. Challenge my thinking. It is like having a co-worker who never gets tired and has a perfect memory. I do this for every client project now. It is invaluable.
Creating Personas
AI has reinvigorated my interest in personas. I had lost heart in them a bit. They took too long to create, and clients always said they already had marketing personas and did not want to pay to do them again. Now I can create what I call functional personas. Personas that are actually useful to people who work in UX. Not marketing fluff about what brands people like, but real information about what questions they have and what tasks they are trying to complete. I upload all my research to a project and say: "Act as a user researcher. Create a persona for [audience type]. For this persona, research the following information: questions they have, tasks they want to complete, goals, states of mind, influences, and success metrics. It is vital that all six criteria are addressed in depth and with equal vigor."
The output is really good. Detailed. Useful. Based on actual data rather than pulled out of thin air. Here is my challenge to anyone who thinks AI-generated personas are somehow fake. What makes you think your personas are so much better? Every persona is a story of a hypothetical user. You make judgment calls when you create personas, too. At least AI can process far more information than you can and is brilliant at pattern recognition. My only concern is that relying too heavily on AI could disconnect us from real users. We still need to talk to people. We still need that empathy. But as a tool to synthesize research and create reference points? It is excellent.
Using AI For Design And Development
Let me start with a warning. AI is not production-ready. Not yet. Not for the kind of client work I do, anyway. Three reasons why:
- It is slow if you want something specific or complicated.
- It can be frustrating because it gets close but not quite there.
- The quality is often subpar: unpolished code, questionable design choices, that kind of thing.
But that does not mean it is not useful. It absolutely is. Just not for final production work.
Functional Prototypes
If you are not too concerned with matching a specific design, AI can quickly prototype functionality in ways that are hard to match in Figma. Because Figma is terrible at prototyping functionality. You cannot even create an active form field in a Figma prototype. Filling in forms is the biggest thing people do online other than clicking links, and you cannot test it. Tools like Relume and Bolt can create quick functional mockups that show roughly how things work. They are great for non-designers who just need to throw together a prototype quickly. For designers, they can be useful for showing developers how you want something to work. But you can spend ages getting them to put a hamburger menu on the right side of the screen. So use them for quick iteration, not pixel-perfect design.
Small Coding Tasks
I use AI constantly for small, low-risk coding work. I am not a developer anymore. I used to be, back when dinosaurs roamed the earth, but not for years. AI lets me create the little tools I need. A calculator that calculates the ROI of my UX work. An app for running top task analysis. Bits of JavaScript for hiding elements on a page. WordPress plugins for updating dates automatically. Just before running my workshop on this topic, I needed a tool to create calendar invites for multiple events. All the online services wanted 16 a month. I asked ChatGPT to build me one. One prompt. It worked. It looked rubbish, but I did not care. It did what I needed. If you are a developer, you should absolutely be using tools like Cursor by now. They are invaluable for pair programming with AI. But if you are not a developer, just stick with Claude or Bolt for quick throwaway tools.
Reviewing Existing Services
There are some great tools for getting quick feedback on existing websites when budget and time are tight. If you need to conduct a UX audit, Wevo Pulse is an excellent starting point. It automatically reviews a website based on personas and provides visual attention heatmaps, friction scores, and specific improvement recommendations. It generates insights in minutes rather than days. Now, let me be clear. This does not replace having an experienced person conduct a proper UX audit. You still need that human expertise to understand context, make judgment calls, and spot issues that AI might miss. But as a starting point to identify obvious problems quickly? It is a great tool. Particularly when budget or time constraints mean a full audit is not on the table. For e-commerce sites, Baymard has UX Ray, which analyzes flaws based on their massive database of user research.
Checking Your Designs
Attention Insight has taken thousands of hours of eye-tracking studies and trained AI on them to predict where people will look on a page. It has about 90 to 96 percent accuracy. You upload a screenshot of your design, and it shows you where attention is going. Then you can play around with your imagery and layout to guide attention to the right place. It is great for dealing with stakeholders who say, "People won't see that." You can prove they will. Or equally, when stakeholders try to crowd the interface with too much stuff, you can show them attention shooting everywhere. I use this constantly. Here is a real example from a pet insurance company. They had photos of a dog, cat, and rabbit for different types of advice. The dog was far from the camera. The cat was looking directly at the camera, pulling all the attention. The rabbit was half off-frame. Most attention went to the cat's face. I redesigned it using AI-generated images, where I could control exactly where each animal looked. Dog looking at the camera. Cat looking right. Rabbit looking left. All the attention drawn into the center. Made a massive difference.
Creating The Perfect Image
I use AI all the time for creating images that do a specific job. My preferred tools are Midjourney and Gemini. I like Midjourney because, visually, it creates stunning imagery. You can dial in the tone and style you want. The downside is that it is not great at following specific instructions. So I produce an image in Midjourney that is close, then upload it to Gemini. Gemini is not as good at visual style, but it is much better at following instructions. "Make the guy reach here" or "Add glasses to this person." I can get pretty much exactly what I want. The other thing I love about Midjourney is that you can upload a photograph and say, "Replicate this style." This keeps consistency across a website. I have a master image I use as a reference for all my site imagery to keep the style consistent.
Using AI For Content
Most clients give you terrible copy. Our job is to improve the user experience or conversion rate, and anything we do gets utterly undermined by bad copy. I have completely stopped asking clients for copy since AI came along. Here is my process.
Build Everything Around Questions
Once I have my information architecture, I get AI to generate a massive list of questions users will ask. Then I run a top task analysis where people vote on which questions matter most. I assign those questions to pages on the site. Every page gets a list of the questions it needs to answer.
Get Bullet Point Answers From Stakeholders
I spin up the content management system with a really basic theme. Just HTML with very basic formatting. I go through every page and assign the questions. Then I go to my clients and say: "I do not want you to write copy. Just go through every page and bullet point answers to the questions. If the answer exists on the old site, copy and paste some text or link to it. But just bullet points." That is their job done. Pretty much.
Let AI Draft The Copy
Now I take control. I feed ChatGPT the questions and bullet points and say: "Act as an online copywriter. Write copy for a webpage that answers the question [question]. Use the following bullet points to answer that question: [bullet points]. Use the following guidelines: Aim for a ninth-grade reading level or below. Sentences should be short. Use plain language. Avoid jargon. Refer to the reader as 'you'. Refer to the writer as 'us'. Ensure the tone is friendly, approachable, and reassuring. The goal is to [goal]. Think deeply about your approach. Create a rubric and iterate until the copy is excellent. Only then, output it."
I often upload a full style guide as well, with details about how I want it to be written. The output is genuinely good. As a first draft, it is excellent. Far better than what most stakeholders would give me.
Stakeholders Review And Provide Feedback
That goes into the website, and stakeholders can comment on it. Once I get their feedback, I take the original copy and all their comments back into ChatGPT and say, "Rewrite using these comments." Job done. The great thing about this approach is that even if stakeholders make loads of changes, they are making changes to a good foundation. The overall quality still comes out better than if they started with a blank sheet. It also makes things go smoother because you are not criticizing their content, where they get defensive. They are criticizing AI content.
Tools That Help
If your stakeholders are still giving you content, Hemingway Editor is brilliant. Copy and paste text in, and it tells you how readable and scannable it is. It highlights long sentences and jargon. You can use this to prove to clients that their content is not good web copy. If you pay for the pro version, you get AI tools that will rewrite the copy to be more readable. It is excellent.
What This Means For You
Let me be clear about something. None of this is perfect. AI makes mistakes. It hallucinates. It produces bland output if you do not push it hard enough. It requires constant checking and challenging. But here is what I know from two years of using this stuff daily. It has made me faster. It has made me better. It has freed me up to do more strategic thinking and less grunt work. A report that would have taken me five days now takes three hours. That is not an exaggeration. That is real. Overall, AI probably gives me a 25 to 33 percent increase in what I can do. That is significant. Your value as a UX professional lies in your ideas, your questions, and your thinking. Not your ability to use Figma. Not your ability to manually review transcripts. Not your ability to write reports from scratch. AI cannot innovate. It cannot make creative leaps. It cannot know whether its output is good. It cannot understand what it is like to be human. That is where you come in. That is where you will always come in. Start small. Do not try to learn everything at once. Just ask yourself throughout your day: "Could I do this with AI?" Try it. See what happens. Double-check everything. Learn what works and what does not. Treat it like an enthusiastic intern with zero life experience. Give it clear instructions. Check its work. Make it try again. Challenge it. Push it further. And remember, it is not going to take your job. It is going to change it. For the better, I think. As long as we learn to work with it rather than against it.
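As referenced above, here is a minimal sketch of the same role-plus-steps-plus-rubric prompt pattern sent through the OpenAI Python SDK rather than the ChatGPT interface. The model name and the brand placeholder are assumptions; the article works in the chat tools themselves, so treat this purely as an illustration of the prompt structure.
```python
# Minimal sketch of the "role + steps + success criteria + rubric" prompt
# pattern, sent via the OpenAI Python SDK. Model name and brand are
# illustrative placeholders, not anything prescribed by the article.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

brand = "Example Brand"  # hypothetical placeholder

prompt = f"""Act as a user researcher. Carry out research into {brand}.
Complete the following steps:
1. Identify what people are saying about the brand and the overall sentiment.
2. List the questions people have.
3. List the objections people mention.
4. Output a summary of each.
Success looks like a report I can send to stakeholders, written without
research jargon.
Think deeply about your approach before responding. Create a rubric for the
report and iterate until it scores extremely high on that rubric. Only then,
output the report."""

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; use whichever model you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```
The same structure drops into a project's custom instructions or a persona prompt; only the role, steps and success definition change.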
  • CYKLO Eyewear From Vinylize Gives New Life to Old Bike Cables
    design-milk.com
    As sustainability and circular design gain more traction within the design space, greenwashing is also on the rise. Brands are riding the hype of new materials, with a head-in-the-sand approach to the unfortunate consequences of using them. Not so with CYKLO, an eyewear line from Vinylize that takes reclaimed bike cables and transforms them into eyeglasses and sunglasses. Breathing new life into material previously discarded (and notoriously hard to recycle), this line provokes welcome conversation around where things go when we're done with them.
The lineup of dynamic designs keeps CYKLO feeling modern and fresh. The temples are created from the bike cables and fitted with lightweight cellulose acetate fronts made from old vinyl records, offering cohesion within styles. With multiple finishes, including a delightfully striped variant, each pair offers adornment along with prescription lenses and UV protection. Since the cables are reclaimed, no two pairs can be the same, making for a one-of-a-kind luxury product that also does good for the earth.
The Bowden cable brake, first invented in 1896, enabled a burgeoning cycling industry to really take off, providing reliability and much-needed handling for navigating rocky terrain or city streets. It is ingenious in its simplicity, with a simple three-layered design, yet it has had a huge impact on design at large, quickly lending itself to multiple industries, including transportation, manufacturing, and industrial design. This somewhat humble invention has touched an unquantifiable number of products and processes, enabling freedom and innovation.
Because of their ubiquity, an incredible number of brake cables are made and discarded every year, difficult to recycle due to their layered design. A slender steel wire sits at the core, coiled tightly for tensile strength. A helically wound layer of flat steel wraps around the wire, while a layer of polyethylene holds the entire structure together. Instead of breaking down these components, which would use a lot of energy, Vinylize takes another route: discarded cables from local bicycle shops are sorted, cleaned, cut to size, and laser engraved before assembly. This process takes an often forgotten invention and brings its legacy of innovation full circle, reminding us that even modest inventions can have incredible impact.
The results are stylish eyeglasses that are comfortable to wear, long-lasting, hypoallergenic, and easily adjustable. The collection consists of six models, including three designs named after key members of the band Queen (Mercury, May, and Deacon), which are available in five colors. The other three styles are named for famed cyclists (Bottechia, Franzt, and Aimo) and come in three colors each.
Vinylize is an eyewear brand creating unique and innovative designs to adorn and inspire. The first Vinylize frame was carved by hand from a 12-inch Creedence Clearwater Revival album and held together with cigar box hinges. From this first prototype, crafted in 2000, co-founder Zachary Tipton set out to find a way to produce frames from records. In 2004, he teamed up with his brother, Zoltan, and established their first factory in the EU. Since then, they have reclaimed tons of records from landfills, working to create a link between fans of sound and sight.
To learn more about CYKLO by Vinylize, please visit vinylize.com. Photography courtesy of Vinylize.
  • Typography Basics
    uxdesign.cc
    Typography basics: a practical introduction to typography, from anatomy and spacing to legibility and alignment, for designers who want to create type that reads beautifully and feels intentional.
Typography isn't just about picking a pretty font. It's the craft of shaping written language into a visual experience: how words look, breathe, and interact on a page or screen. Good typography quietly guides the reader, while poor typography shouts for attention in all the wrong ways. Every detail, from letter spacing to line height, affects how users read, feel, and engage. Let's unpack the essentials.
Every letter has a structure, and understanding it helps you design with precision. Terms like ascender, descender, baseline, and x-height describe the invisible skeleton that keeps type coherent. The x-height, for instance, determines how big a typeface feels even at the same point size. Serif and sans-serif typefaces differ in tone partly because of these structures: serifs guide the eye along lines of text, while sans-serifs often feel cleaner and more modern. Knowing anatomy allows you to mix typefaces consciously rather than by vibe alone.
Typographic anatomy
Kerning, tracking, leading
These three terms define the rhythm and flow of text. Kerning adjusts the space between individual letters, tracking controls spacing across whole words or paragraphs, and leading (pronounced "ledding") sets the space between lines. Think of them as the breathing room of type. Too tight, and words feel suffocating; too loose, and they drift apart like strangers at a party. Consistent, intentional spacing is one of the clearest markers of professional typography, and one of the easiest to overlook.
Type alignment
Left, right, center, or justified: each alignment changes how the reader experiences text. Left-aligned text feels natural for most Western readers; it mirrors how our eyes expect to move. Centered text adds elegance in small doses, like headings or invitations, but strains readability in long paragraphs. Justified text can look neat but often introduces awkward gaps between words. There's no one right choice, only what fits the tone and purpose of the content.
Indents, outdents, and hanging punctuation
Indenting the first line of a paragraph subtly signals a new thought, while outdents (negative indents) can highlight lists or quotes. Hanging punctuation, where quotation marks or bullets sit outside the text block, preserves clean visual alignment. These are the small design manners that readers might never notice, but would feel their absence. They lend rhythm and grace to long-form content.
Legibility and readability
They sound similar but mean different things. Legibility is about how clearly letters can be distinguished: a function of font design, size, and contrast. Readability is about how easy it feels to read longer passages, affected by line length, spacing, and even the surrounding design. A beautifully legible font can still be unreadable in practice if the text is too dense or too wide. The goal isn't just to be seen but to be comfortably followed.
Specialized uses
Typography takes on unique forms in different contexts: interfaces, signage, packaging, books, motion graphics. In UI design, type must perform at small sizes and on varying screens. In print, it must hold character at high resolutions and long durations. Display type can afford drama; body text should feel invisible. Understanding how context shapes your typographic decisions is what separates art from noise.
Typography isn't decoration; it's communication shaped with care. Once you see how subtle its impact is, you start noticing type everywhere: on screens, in streets, on receipts. Every font choice tells a story. The more fluently you speak the language of type, the more precisely you can design experiences that speak back.
Typography has a strange power: it's both invisible and unforgettable. The best type never demands your attention, yet it defines how every word feels. A letter's curve can suggest warmth or precision, while spacing can create calm or urgency. This is why typography sits at the heart of design, connecting language with emotion. So when we adjust a line height or choose between Helvetica and Garamond, we're not just picking styles; we're shaping how people interpret meaning. Typography is the quiet storyteller behind every interface, poster, and page. Mastering it means learning to speak softly but leave a lasting impression.
Further reading & viewing
- The Elements of Typographic Style by Robert Bringhurst: Widely regarded as a typography bible, it covers type anatomy, spacing, history, meaning and practice.
- Just My Type: A Book About Fonts by Simon Garfield: A lively, readable exploration of how fonts shape culture, emotion and everyday design choices.
- The Anatomy of Type: A Graphic Guide to 100 Typefaces by Stephen Coles: Highly visual and practical, a great reference for seeing how anatomy, spacing and alignment vary across real typefaces.
- Helvetica (2007, dir. Gary Hustwit): A design-documentary classic that shows how a single typeface (Helvetica) touches legibility, meaning, alignment and cultural history.
- Typeface (2009, dir. Justine Nagan): Looks at wood-type printing and the material roots of typography, helpful to ground the anatomy and spacing discussions in tangible form.
- Graphic design for filmmaking, prop design workshop by Annie Atkins: Learn how to start designing a collection of graphic props that can tell a director's story, as well as contributing to the genre, period, and visual aesthetic of a film.
Visuals by @oscarsun, Figma Community. Thanks for reading.
Typography Basics was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • Google Pixels Are Still Having Problems Calling 911
    lifehacker.com
    For the past few years, Google's Pixel phones have had recurring problems with calling 911, and the issue has once again reared its ugly head. Over the past 24 hours, multiple users on Reddit have complained about being unable to call 911, while Canadian carrier Bell issued a warning that the Pixel 6 and newer models were also having problems contacting emergency services on its network. According to user Fabulous_Disaster730, who posted yesterday about difficulties contacting emergency services during a gas leak, her Pixel 9 Pro repeatedly prompted her to turn on wifi calling or turn off airplane mode before she could call 911, despite her having full signal on both 5G and wifi. The phone would then freeze and restart. After multiple attempts, she resorted to asking a friend to place the call for her instead. Multiple replies mentioned facing similar problems yesterday as well, across multiple networks and models of Pixel. Bell, however, was the only carrier to issue an official notice. Aside from the obvious danger, the problem with this bug is that it's not consistent. Despite the apparent surge yesterday, it has been a known issue as far back as 2021. However, it doesn't affect all users, nor is it entirely predictable when or how it will pop up. While recent reports are of calls simply not going through, one user posted two months ago that their call did go through, but they only heard screeching and static on the other end of the line. These recent reports are only the latest in an ever-evolving concern. It's also not clear what's causing the problem. In 2021, the problem was attributed to Microsoft Teams, but even with that error patched up, users are still facing problems. To Google's credit, Bell said it reached out to the company shortly after it learned about yesterday's uptick in cases, and afterwards said that a fix had been issued. However, no other carriers have issued similar notices, and I wouldn't be surprised to see other users making their own complaints in the future. It's worth noting that Bell mentioned that Pixel 6 users and up were the ones affected this time around, and that the Pixel 6 also happened to release in 2021, which is when the bug first started making news. Whatever is at the core of the problem here, I wouldn't be surprised if it worked its way in with the Pixel 6 and just hasn't been addressed yet. However, given the severity of the issue, it's something that needs looking at sooner rather than later. I've reached out to Google for comment on this issue, and will update if I hear back. In the meantime, it's best to be prepared in case the worst happens. If you're on a Pixel phone, here's what you can do to contact 911 in an emergency:
First, try to place a 911 call
If you have time and the danger is not immediate, it's worth trying to call 911 on your Pixel despite the issues. While users reporting problems have increased as of late, the bug still doesn't affect everyone, every time. It's possible your call will still go through without issue.
You can also text 911
If, however, you are unable to call 911, it's worth remembering that in certain jurisdictions (check this regularly updated list to see if where you live is supported), you can also contact 911 via text. This is a slower method of reaching out for help, but is still better than nothing. To text 911, open your texting app, put in 911 as the recipient, and write a concise message with your issue, your location, and any necessary specifics, like cross streets, landmarks, a specific hiding location, or whether you're able to talk.
Use a backup phone
While I don't expect someone who isn't constantly reviewing tech to have multiple phones on hand, the surest way to ensure you'll be able to call 911 if you're on a Pixel is to keep a backup phone that isn't a Pixel handy. This could be a landline or another cell phone. Crucially, it can be an old cell phone, even one that isn't actively attached to a phone plan. So long as your phone is able to connect to a network, it's legally required to be able to call 911, so if you have an old phone you've upgraded from and haven't traded in lying around, it may be smart to keep it charged up in case you need it in an emergency.