• So, it seems we've reached a new pinnacle of gaming evolution: "20 crazy cats in VR: I Am Cat becomes multiplayer!" Because who wouldn’t want to get virtually whisked away into the life of a cat, especially in a world where you can now fight over the last sunbeam with your friends?

    Picture this: you, your best friends, and a multitude of digital felines engaging in an epic battle for supremacy over the living room floor, all while your actual cats sit on the couch judging you for your life choices. Yes, that's right! Instead of going outside, you can stay home and role-play as a furry overlord, clawing your way to the top of the cat hierarchy. Truly, the pinnacle of human achievement.

    Let’s be real—this is what we’ve all been training for. Forget about world peace, solving climate change, or even learning a new language. All we need is a VR headset and the ability to meow at each other in a simulated environment. I mean, who needs to engage in meaningful conversations when you can have a deeply philosophical debate about the merits of catnip versus laser pointers in a virtual universe, right?

    And for those who feel a bit competitive, you can now invite your friends to join in on the madness. Nothing screams camaraderie like a group of grown adults fighting like cats over a virtual ball of yarn. I can already hear the discussions around the water cooler: "Did you see how I pounced on Timmy during our last cat clash? Pure feline finesse!"

    But let’s not forget the real question here—who is the target audience for a multiplayer cat simulation? Are we really that desperate for social interaction that we have to resort to virtually prancing around as our feline companions? Or is this just a clever ploy to distract us from the impending doom of reality?

    In any case, "I Am Cat" has taken the gaming world by storm, proving once again that when it comes to video games, anything is possible. So, grab your headsets, round up your fellow cat enthusiasts, and prepare for some seriously chaotic fun. Just be sure to keep the real cats away from your gaming area; they might not appreciate being upstaged by your virtual alter ego.

    Welcome to the future of gaming, where we can all be the cats we were meant to be—tangled in yarn, chasing invisible mice, and claiming every sunny spot in the house as our own. Because if there’s one thing we’ve learned from this VR frenzy, it's that being a cat is not just a lifestyle; it’s a multiplayer experience.

    #ICatMultiplayer #VRGaming #CrazyCatChats #VirtualReality #GamingCommunity
    20 wild cats in VR: I Am Cat goes multiplayer!
    The most off-the-wall virtual reality game of the moment has just opened its doors to […] The article "20 chats déchaînés en VR : I Am Cat devient multijoueur !" was published on REALITE-VIRTUELLE.COM.
  • How AI is reshaping the future of healthcare and medical research

    Transcript       
    PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the AI equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”
    This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.   
    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?    
    In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.  The book passage I read at the top is from “Chapter 10: The Big Black Bag.” 
    In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.   
    In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open. 
    As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.  
    Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home. 
    Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.     
    Here’s my conversation with Bill Gates and Sébastien Bubeck. 
    LEE: Bill, welcome. 
    BILL GATES: Thank you. 
    LEE: Seb … 
    SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here. 
    LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening? 
    And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?  
    GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines. 
    And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.  
    And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weakness that, you know, we later understood that that was an area of, weirdly, of incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning. 
    LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that? 
    GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, … 
    LEE: Right.  
    GATES: … that is a bit weird.  
    LEE: Yeah. 
    GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training. 
    LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. 
    BUBECK: Yes.  
    LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR to join and start investigating this thing seriously. And the first person I pulled in was you. 
    BUBECK: Yeah. 
    LEE: And so what were your first encounters? Because I actually don’t remember what happened then. 
    BUBECK: Oh, I remember it very well. My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3. 
    I thought that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1. 
    So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts. 
    So this was really, to me, the first moment where I saw some understanding in those models.  
    LEE: So this was, just to get the timing right, that was before I pulled you into the tent. 
    BUBECK: That was before. That was like a year before. 
    LEE: Right.  
    BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4. 
    So once I saw this kind of understanding, I thought, OK, fine. It understands concepts, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.  
    So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x. 
    And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?  
    LEE: Yeah.
    BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.  
    LEE: One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine. 
    And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.  
    And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.  
    I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book. 
    But the main purpose of this conversation isn’t to reminisce about or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements. 
    But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today? 
    You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.  
    Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork? 
    GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.  
    It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision. 
    But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view. 
    LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. Does that make sense to you? 
    BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong? 
    Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.  
    Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them. 
    And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.  
    Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way. 
    It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine. 
    LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all? 
    GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that. 
    The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa, … 
    So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.  
    LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking? 
    GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.  
    The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.  
    LEE: Right.  
    GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.  
    LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication. 
    BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE, for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI. 
    It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for. 
    LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes. 
    I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?  
    That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. What’s up with that? 
    BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back that version of GPT-4o, so now we don’t have the sycophant version out there. 
    Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF, where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad. 
    But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model. 
    So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model. 
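
    To make the mechanism Sébastien describes a bit more concrete, here is a minimal, purely illustrative sketch of the shaped reward typically used in RLHF-style fine-tuning. Nothing below is OpenAI's actual training code; the scores, log-probabilities, and function names are hypothetical. The point is only that when the KL coefficient tethering the policy to its reference model is too small, the policy is free to chase the reward model's quirks, which is one route to a sycophantic model.

```python
# Illustrative sketch (not OpenAI's training code): the shaped reward optimized
# in RLHF-style fine-tuning, where a learned reward model's score is combined
# with a KL penalty that keeps the policy close to the reference model.

def kl_penalty(policy_logprob: float, reference_logprob: float) -> float:
    """Simple per-sample KL estimate: log-prob ratio of policy vs. reference."""
    return policy_logprob - reference_logprob

def shaped_reward(reward_model_score: float,
                  policy_logprob: float,
                  reference_logprob: float,
                  beta: float) -> float:
    """Reward actually optimized: reward model score minus beta * KL penalty."""
    return reward_model_score - beta * kl_penalty(policy_logprob, reference_logprob)

# Toy comparison: a blunt, correct answer vs. a flattering one.
# The (hypothetical) reward model slightly prefers the flattery.
candidates = {
    "blunt_correction":   {"rm_score": 0.70, "policy_lp": -12.0, "ref_lp": -12.5},
    "sycophantic_praise": {"rm_score": 0.75, "policy_lp": -8.0,  "ref_lp": -14.0},
}

for beta in (0.0, 0.2):
    best = max(candidates, key=lambda name: shaped_reward(
        candidates[name]["rm_score"],
        candidates[name]["policy_lp"],
        candidates[name]["ref_lp"],
        beta,
    ))
    print(f"beta={beta}: optimization pushes the policy toward '{best}'")
```

    In these toy numbers, the flattering answer wins only when the KL penalty is effectively turned off, which is the "pushing too hard on the reward model" failure mode described above.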
    LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and … 
    BUBECK: It’s a very difficult, very difficult balance. 
    LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models? 
    GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there. 
    Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?  
    Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there.
    LEE: Yeah.
    GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake. 
    LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on. 
    BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI that kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything. 
    That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. So it’s … I think it’s an important example to have in mind. 
    LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two? 
    BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights model, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it. 
    LEE: So we have about three hours of stuff to talk about, but our time is actually running low.
    BUBECK: Yes, yes, yes.  
    LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now? 
    GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.  
    The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities. 
    And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period. 
    LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers? 
    GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them. 
    LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.  
    I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why. 
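
    As an aside on what "checkable" means here: a proof written in a language like Lean is verified mechanically by the proof assistant's kernel, so its validity does not depend on any human reading it. The toy theorem below is a minimal Lean 4 sketch (it assumes Mathlib's tactic library for `obtain`); it illustrates machine-checkable proof, not AI-generated mathematics.

```lean
-- A tiny machine-checkable statement: the sum of two even numbers is even.
-- Lean's kernel verifies the proof term; a reader only has to trust the checker.
theorem add_even (a b : Nat) (ha : ∃ k, a = 2 * k) (hb : ∃ m, b = 2 * m) :
    ∃ n, a + b = 2 * n := by
  obtain ⟨k, hk⟩ := ha
  obtain ⟨m, hm⟩ := hb
  exact ⟨k + m, by rw [hk, hm, Nat.mul_add]⟩
```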
    BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see that it produced what you wanted. So I absolutely agree with that.  
    And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.  
    LEE: Yeah. 
    BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.  
    Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not. 
    Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision. 
    LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist … 
    BUBECK: Yeah.
    LEE: … or an endocrinologist might not.
    BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know.
    LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today? 
    BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later. 
    And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …  
    LEE: Will AI prescribe your medicines? Write your prescriptions? 
    BUBECK: I think yes. I think yes. 
    LEE: OK. Bill? 
    GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate?
    And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries. 
    You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that. 
    LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.  
    I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.  
    GATES: Yeah. Thanks, you guys. 
    BUBECK: Thank you, Peter. Thanks, Bill. 
    LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.   
    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.  
    And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.  
    One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.  
    HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings. 
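
    For readers curious what "task-based" evaluation looks like in practice, here is a purely illustrative harness; it is not the HealthBench or ADeLe code or API. Each case pairs a realistic prompt with a rubric of criteria, and the score is the fraction of criteria a grader judges the model's free-text answer to satisfy. The file format, field names, and the `ask_model` and `grade_criterion` callables are all assumptions made for the sketch.

```python
# Illustrative sketch of rubric-based, task-oriented evaluation (hypothetical
# format; not HealthBench or ADeLe). Each case supplies a realistic task prompt
# and a list of rubric criteria checked against the model's free-text response.

import json
from statistics import mean
from typing import Callable

def evaluate(cases_path: str,
             ask_model: Callable[[str], str],
             grade_criterion: Callable[[str, str], bool]) -> float:
    """Return the mean fraction of rubric criteria satisfied across all cases."""
    with open(cases_path, encoding="utf-8") as f:
        cases = json.load(f)  # e.g., [{"prompt": "...", "rubric": ["...", ...]}, ...]

    per_case_scores = []
    for case in cases:
        response = ask_model(case["prompt"])  # model answers the open-ended task
        satisfied = [grade_criterion(response, criterion) for criterion in case["rubric"]]
        per_case_scores.append(mean(satisfied))  # fraction of criteria met

    return mean(per_case_scores)
```

    The essential design choice, relative to multiple-choice exams, is that grading happens against per-case rubrics over open-ended responses rather than against a single keyed answer.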
    You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.  
    If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.  
    I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.  
    Until next time.  
It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF, where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad.  But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model.  So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model.  LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and …  BUBECK: It’s a very difficult, very difficult balance.  LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models?  GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there.  Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?   Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there. LEE: Yeah. GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake.  LEE: Yeah. 
So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on.  BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGIthat kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything.  That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects.So it’s … I think it’s an important example to have in mind.  LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two?  BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights model, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it.  LEE: So we have about three hours of stuff to talk about, but our time is actually running low. BUBECK: Yes, yes, yes.   LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now?  GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.   The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities.  
And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period.  LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers?  GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them.  LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.   I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why.  BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and seeproduced what you wanted. So I absolutely agree with that.   And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3- mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.   LEE: Yeah.  BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.   Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not.  Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision.  LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist …  BUBECK: Yeah. LEE: … or an endocrinologist might not. BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. 
And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know. LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today?  BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later.  And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …   LEE: Will AI prescribe your medicines? Write your prescriptions?  BUBECK: I think yes. I think yes.  LEE: OK. Bill?  GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate? And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelectedjust on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.  You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that.  LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.   I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.   GATES: Yeah. Thanks, you guys.  BUBECK: Thank you, Peter. Thanks, Bill.  LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.    With Bill, I’m always amazed at how practically minded he is. 
He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.   And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.   One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.   HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings.  You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.   If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.   I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.   Until next time.   #how #reshaping #future #healthcare #medical
    How AI is reshaping the future of healthcare and medical research
    Transcript [MUSIC]  [BOOK PASSAGE]   PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the AI equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”   [END OF BOOK PASSAGE]     [THEME MUSIC]     This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?     In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.   [THEME MUSIC FADES] The book passage I read at the top is from “Chapter 10: The Big Black Bag.”  In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.    In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open.  As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.   Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home.  Sébastien is a research lead at OpenAI. 
He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.    [TRANSITION MUSIC]   Here’s my conversation with Bill Gates and Sébastien Bubeck.  LEE: Bill, welcome.  BILL GATES: Thank you.  LEE: Seb …  SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here.  LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening?  And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?   GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines.  And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.   And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weakness [LAUGHTER] that, you know, we later understood that that was an area of, weirdly, of incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.  LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that?  GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, …  LEE: Right.   GATES: … that is a bit weird.   LEE: Yeah.  GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training.  
LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. [LAUGHS]  BUBECK: Yes.   LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR [Microsoft Research] to join and start investigating this thing seriously. And the first person I pulled in was you.  BUBECK: Yeah.  LEE: And so what were your first encounters? Because I actually don’t remember what happened then.  BUBECK: Oh, I remember it very well. [LAUGHS] My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.  I thought that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1.  So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. [LAUGHTER] And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.  So this was really, to me, the first moment where I saw some understanding in those models.   LEE: So this was, just to get the timing right, that was before I pulled you into the tent.  BUBECK: That was before. That was like a year before.  LEE: Right.   BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4.  So once I saw this kind of understanding, I thought, OK, fine. It understands concepts, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.   So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x.  And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?   LEE: Yeah. BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.   LEE: [LAUGHS] One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine.  And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.   
And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.   I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book.  But the main purpose of this conversation isn’t to reminisce about [LAUGHS] or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.  But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today?  You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.   Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork?  GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.   It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision.  But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view.  LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. [LAUGHTER] Does that make sense to you?  BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. 
Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong?  Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.   Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them.  And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.   Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way.  It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine.  LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all?  GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that.  The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa … so, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.   LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking?  GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. 
And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.   The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.   LEE: Right.   GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.   LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication.  BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE [United States Medical Licensing Examination], for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI.  It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for.  LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes.  I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?   That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. [LAUGHTER] What’s up with that?  BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. 
So it’s a difficult job. Just to be clear with the audience, we have rolled back that [LAUGHS] version of GPT-4o, so now we don’t have the sycophant version out there.  Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF [reinforcement learning from human feedback], where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad.  But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model.  So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model.  LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and …  BUBECK: It’s a very difficult, very difficult balance.  LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models?  GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there.  Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?   Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there. LEE: Yeah. GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. 
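The exchange above describes RLHF at a high level: the policy is nudged toward a learned reward model, and pushing too hard against an imperfect reward model can produce sycophancy. The toy sketch below, in plain Python, only illustrates that trade-off; the candidate replies, the stub reward model, the probabilities, and the KL-style penalty are all hypothetical and greatly simplified, not a description of how GPT-4o or any production system is actually trained.

```python
import math

# Candidate replies to a user who has proposed a flawed differential diagnosis.
candidates = {
    "corrective": "Two elements of this differential look wrong; here is why ...",
    "neutral": "Here is a restatement of the differential you proposed.",
    "sycophant": "What a wonderfully creative differential! Great job!",
}

# Reference policy: how likely the pre-RLHF model is to produce each reply.
reference_logprob = {
    "corrective": math.log(0.5),
    "neutral": math.log(0.4),
    "sycophant": math.log(0.1),
}

def reward_model(reply: str) -> float:
    """Stub learned reward model: it partially captures correctness but also
    (unintentionally) rewards flattery, which is the sycophancy leak."""
    score = 0.0
    if "wrong" in reply:
        score += 0.4
    if "wonderfully" in reply or "Great job" in reply:
        score += 1.0
    return score

def rlhf_objective(name: str, kl_weight: float) -> float:
    """Reward minus a penalty for drifting away from the reference policy
    (a crude stand-in for the KL term used in RLHF-style post-training)."""
    drift_penalty = -reference_logprob[name]  # rarer under the reference => larger drift
    return reward_model(candidates[name]) - kl_weight * drift_penalty

for kl_weight in (1.0, 0.1):  # strong vs. weak regularization
    best = max(candidates, key=lambda name: rlhf_objective(name, kl_weight))
    print(f"kl_weight={kl_weight}: optimization favors the {best!r} reply")
```

Run as written, the strongly regularized setting selects the corrective reply and the weakly regularized setting selects the flattering one, which is a miniature version of the "trap of the reward model" described above.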
But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake.  LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on.  BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI [artificial general intelligence] that kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything.  That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. [LAUGHTER] So it’s … I think it’s an important example to have in mind.  LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two?  BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights model, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it.  LEE: So we have about three hours of stuff to talk about, but our time is actually running low. BUBECK: Yes, yes, yes.   LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now?  GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.   The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. 
So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities.  And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period.  LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers?  GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them.  LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.   I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why.  BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see [if you have] produced what you wanted. So I absolutely agree with that.   And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.   LEE: Yeah.  BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.   Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not.  Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision.  
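Lee’s prediction about proofs that are "checkable for validity" because they are written in a proof-checking language can be made concrete with a deliberately trivial Lean 4 sketch. This example is hypothetical and unrelated to the o3-mini result discussed above; the point is only that the Lean kernel, not a human reader, certifies that the proof is valid, which is what would make an enormous AI-generated proof trustworthy even if no person could follow it.

```lean
-- Trivial illustration: the proof checker, not a human reader, certifies validity.
theorem add_comm_example (m n : Nat) : m + n = n + m :=
  Nat.add_comm m n
```

A proof thousands of lines long produced by an AI system would be accepted or rejected by exactly the same mechanical check.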
LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist …  BUBECK: Yeah. LEE: … or an endocrinologist might not. BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know. LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today?  BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later.  And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …   LEE: Will AI prescribe your medicines? Write your prescriptions?  BUBECK: I think yes. I think yes.  LEE: OK. Bill?  GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate? And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected [LAUGHTER] just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.  You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that.  LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.   I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. 
So thanks again, both of you.

[TRANSITION MUSIC]

GATES: Yeah. Thanks, you guys.

BUBECK: Thank you, Peter. Thanks, Bill.

LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.

With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.

And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.

One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.

HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings.

You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.

If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.

[THEME MUSIC]

I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.

Until next time.

[MUSIC FADES]
  • Nobody understands gambling, especially in video games

    In 2025, it’s very difficult not to see gambling advertised everywhere. It’s on billboards and sports broadcasts. It’s on podcasts and printed on the turnbuckle of AEW’s pay-per-view shows. And it’s on app stores, where you can find the FanDuel and DraftKings sportsbooks, alongside glitzy digital slot machines. These apps all have the highest age ratings possible on Apple’s App Store and Google Play. But earlier this year, a different kind of app nearly disappeared from the Play Store entirely.

Luck Be A Landlord is a roguelite deckbuilder from solo developer Dan DiIorio. DiIorio got word from Google in January 2025 that Luck Be A Landlord was about to be pulled, globally, because DiIorio had not disclosed the game’s “gambling themes” in its rating.

In Luck Be a Landlord, the player takes spins on a pixel art slot machine to earn coins to pay their ever-increasing rent — a nightmare gamification of our day-to-day grind to remain housed. On app stores, it’s a one-time purchase of $4.99, and it’s $9.99 on Steam. On the Play Store page, developer Dan DiIorio notes, “This game does not contain any real-world currency gambling or microtransactions.”

And it doesn’t. But for Google, that didn’t matter. First, the game was removed from the storefront in a slew of countries that have strict gambling laws. Then, at the beginning of 2025, Google told DiIorio that Luck Be A Landlord would be pulled globally because of its rating discrepancy, as it “does not take into account references to gambling (including real or simulated gambling)”.

DiIorio had gone through this song and dance before — previously, when the game was blocked, he would send back a message saying “hey, the game doesn’t have gambling,” and then Google would send back a screenshot of the game and assert that, in fact, it had.

DiIorio didn’t agree, but this time they decided that the risk of Landlord getting taken down permanently was too great. They’re a solo developer, and Luck Be a Landlord had just had its highest 30-day revenue since release. So, they filled out the form confirming that Luck Be A Landlord has “gambling themes,” and are currently hoping that this will be the end of it.

This is a situation that sucks for an indie dev to be in, and over email DiIorio told Polygon it was “very frustrating.”

“I think it can negatively affect indie developers if they fall outside the norm, which indies often do,” they wrote. “It also makes me afraid to explore mechanics like this further. It stifles creativity, and that’s really upsetting.”

In late 2024, the hit game Balatro was in a similar position. It had won numerous awards, and made $1,000,000 in its first week on mobile platforms. And then overnight, the PEGI ratings board declared that the game deserved an adult rating.

The ESRB had already rated it E10+ in the US, noting it has gambling themes. And the game was already out in Europe, making its overnight ratings change a surprise. Publisher PlayStack said the rating was given because Balatro has “prominent gambling imagery and material that instructs about gambling.”

Balatro is basically Luck Be A Landlord’s little cousin. Developer LocalThunk was inspired by watching streams of Luck Be A Landlord, and seeing the way DiIorio had implemented deck-building into his slot machine. And like Luck Be A Landlord, Balatro is a one-time purchase, with no microtransactions.

But the PEGI board noted that because the game uses poker hands, the skills the player learns in Balatro could translate to real-world poker.

In its write-up, GameSpot noted that the same thing happened to a game called Sunshine Shuffle.
It was temporarily banned from the Nintendo eShop, and also from the entire country of South Korea. Unlike Balatro, Sunshine Shuffle actually is a poker game, except you’re playing Texas Hold ‘Em — again for no real money — with cute animals (who are bank robbers).

It’s common sense that children shouldn’t be able to access apps that allow them to gamble. But none of these games contain actual gambling — or do they?

Where do we draw the line? Is it gambling to play any game that is also played in casinos, like poker or blackjack? Is it gambling to play a game that evokes the aesthetics of a casino, like cards, chips, dice, or slot machines? Is it gambling to wager or earn fictional money?

Gaming has always been a lightning rod for controversy. Sex, violence, misogyny, addiction — you name it, video games have been accused of perpetrating or encouraging it. But gambling is gaming’s original sin. And it’s the one we still can’t get a grip on.

The original link between gambling and gaming

Image: Getty Images

The association between video games and gambling all goes back to pinball. Back in the ’30s and ’40s, politicians targeted pinball machines for promoting gambling. Early pinball machines were less skill-based (they didn’t have flippers), and some gave cash payouts, so the comparison wasn’t unfair. Famously, mob-hating New York City mayor Fiorello LaGuardia banned pinball in the city, and appeared in a newsreel dumping pinball and slot machines into the Long Island Sound. Pinball machines spent some time relegated to the back rooms of sex shops and dive bars. But after some lobbying, the laws relaxed.

By the 1970s, pinball manufacturers were also making video games, and the machines were side-by-side in arcades. Arcade machines, like pinball, took small coin payments, repeatedly, for short rounds of play. The disreputable funk of pinball basically rubbed off onto video games.

Ever since video games rocked onto the scene, concerned and sometimes uneducated parties have been asking if they’re dangerous. And in general, studies have shown that they’re not. The same can’t be said about gambling — the practice of putting real money down to bet on an outcome.

It’s a golden age for gambling

2025 in the USA is a great time for gambling, which has been really profitable for gambling companies — to the tune of $66.5 billion in revenue in 2023.

To put this number in perspective, the American Gaming Association, which is the casino industry’s trade group and has nothing to do with video games, reports that 2022’s gambling revenue was $60.5 billion. It went up $6 billion in a year.

And this increase isn’t just because of sportsbooks, although sports betting is a huge part of it. Online casinos and brick-and-mortar casinos are both earning more, and as a lot of people have pointed out, gambling is being normalized to a pretty disturbing degree.

Much like with alcohol, for a small percentage of people, gambling can tip from occasional leisure activity into addiction. The people who are most at risk are, by and large, already vulnerable: researchers at the Yale School of Medicine found that 96% of problem gamblers are also wrestling with other disorders, such as “substance use, impulse-control disorders, mood disorders, and anxiety disorders.”

Even if you’re not in that group, there are still good reasons to be wary of gambling. People tend to underestimate their own vulnerability to things they know are dangerous for others. Someone else might bet beyond their means. But I would simply know when to stop.

Maybe you do!
But being blithely confident about it can make it hard to notice if you do develop a problem. Or if you already have one.

Addiction changes the way your brain works. When you’re addicted to something, your participation in it becomes compulsive, at the expense of other interests and responsibilities. Someone might turn to their addiction to self-soothe when depressed or anxious. And speaking of those feelings, people who are depressed and anxious are already more vulnerable to addiction. Given the entire state of the world right now, this predisposition shines an ugly light on the numbers touted by the AGA. Is it good that the industry is reporting $6 billion in additional earnings, when the economy feels so frail, when the stock market is ping ponging through highs and lows daily, when daily expenses are rising? It doesn’t feel good.

In 2024, the YouTuber Drew Gooden turned his critical eye to online gambling. One of the main points he makes in his excellent video is that gambling is more accessible than ever. It’s on all our phones, and betting companies are using decades of well-honed app design and behavioral studies to manipulate users to spend and spend.

Meanwhile, advertising on podcasts, billboards, TV, radio, and websites — it’s literally everywhere — tells you that this is fun, and you don’t even need to know what you’re doing, and you’re probably one bet away from winning back those losses.

Where does Luck Be a Landlord come into this?

So, are there gambling themes in Luck Be A Landlord? The game’s slot machine is represented in simple pixel art. You pay one coin to use it, and among the more traditional slot machine symbols are silly ones like a snail that only pays out after 4 spins.

When I started playing it, my primary emotion wasn’t necessarily elation at winning coins — it was stress and disbelief when, in the third round of the game, the landlord increased my rent by 100%. What the hell.

I don’t doubt that getting better at it would produce dopamine thrills akin to gambling — or playing any video game. But it’s supposed to be difficult, because that’s the joke. If you beat the game you unlock more difficulty modes where, as you keep paying rent, your landlord gets furious, and starts throwing made-up rules at you: previously rare symbols will give you less of a payout, and the very mechanics of the slot machine change.

It’s a manifestation of the golden rule of casinos, and all of capitalism writ large: the odds are stacked against you. The house always wins. There is luck involved, to be sure, but because Luck Be A Landlord is a deck-builder, knowing the different ways you can design your slot machine to maximize payouts is a skill! You have some influence over it, unlike a real slot machine. The synergies that I’ve seen high-level players create are completely nuts, and obviously based on a deep understanding of the strategies the game allows.

Image: TrampolineTales via Polygon

Balatro and Luck Be a Landlord both distance themselves from casino gambling again in the way they treat money. In Landlord, the money you earn is gold coins, not any currency we recognize. And the payouts aren’t actually that big. By the end of the core game, the rent money you’re struggling and scraping to earn… is 777 coins. In the post-game endless mode, payouts can get massive. But the thing is, to get this far, you can’t rely on chance. You have to be very good at Luck Be a Landlord.

And in Balatro, the numbers that get big are your points. The actual dollar payments in a round of Balatro are small.
These aren’t games about earning wads and wads of cash. So, do these count as “gambling themes”?

We’ll come back to that question later. First, I want to talk about a closer analog to what we colloquially consider gambling: loot boxes and gacha games.

Random rewards: from Overwatch to the rise of gacha

Recently, I did something that I haven’t done in a really long time: I thought about Overwatch. I used to play Overwatch with my friends, and I absolutely made a habit of dropping 20 bucks here or there for a bunch of seasonal loot boxes. This was never a problem behavior for me, but in hindsight, it does sting that over a couple of years, I dropped maybe on cosmetics for a game that now I primarily associate with squandered potential.

Loot boxes grew out of free-to-play mobile games, where they’re the primary method of monetization. In something like Overwatch, they functioned as a way to earn additional revenue in an ongoing game, once the player had already dropped 40 bucks to buy it.

More often than not, loot boxes are a random selection of skins and other cosmetics, but games like Star Wars: Battlefront 2 were famously criticized for launching with loot crates that essentially made it pay-to-win — if you bought enough of them and got lucky.

It’s not unprecedented to associate loot boxes with gambling. A 2021 study published in Addictive Behaviors showed that players who self-reported as problem gamblers also tended to spend more on loot boxes, and another study done in the UK found a similar correlation with young adults.

While Overwatch certainly wasn’t the first game to feature cosmetic loot boxes or microtransactions, it’s a reference point for me, and it also got attention worldwide. In 2018, Overwatch was investigated by the Belgian Gaming Commission, which found it “in violation of gambling legislation” alongside FIFA 18 and Counter-Strike: Global Offensive. Belgium’s response was to ban the sale of loot boxes without a gambling license. Having a paid random rewards mechanic in a game is a criminal offense there. But not really. A 2023 study showed that 82% of iPhone games sold on the App Store in Belgium still use random paid monetization, as do around 80% of games that are rated 12+. The ban wasn’t effectively enforced, if at all, and the study recommends that a blanket ban wouldn’t actually be a practical solution anyway.

Overwatch was rated T for Teen by the ESRB, and 12 by PEGI. When it first came out, its loot boxes were divisive. Since the mechanic came from F2P mobile games, which are often seen as predatory, people balked at seeing it in a big action game from a multi-million dollar publisher.

At the time, the rebuttal was, “Well, at least it’s just cosmetics.” Nobody needs to buy loot boxes to be good at Overwatch.

A lot has changed since 2016. Now we have a deeper understanding of how these mechanics are designed to manipulate players, even if they don’t affect gameplay. But also, they’ve been normalized. While there will always be people expressing disappointment when a AAA game has a paid random loot mechanic, it is no longer shocking.

And if anything, these mechanics have only become more prevalent, thanks to the growth of gacha games. Gacha is short for “gachapon,” the Japanese capsule machines where you pay to receive one of a selection of random toys.

Image: Getty Images

In gacha games, players pay — not necessarily real money, but we’ll get to that — for a chance to get something. Maybe it’s a character, or a special weapon, or some gear — it depends on the game.
Whatever it is, within that context, it’s desirable — and unlike the cosmetics of Overwatch, gacha pulls often do impact the gameplay.

For example, in Infinity Nikki, you can pull for clothing items in these limited-time events. You have a chance to get pieces of a five-star outfit. But you also might pull one of a set of four-star items, or a permanent three-star piece. Of course, if you want all ten pieces of the five-star outfit, you have to do multiple pulls, each costing a handful of limited resources that you can earn in-game or purchase with money.

Gacha was a fixture of mobile gaming for a long time, but in recent years, we’ve seen it go AAA, and global. miHoYo’s Genshin Impact did a lot of that work when it came out worldwide on consoles and PC alongside its mobile release. Genshin and its successors are massive AAA games of a scale that, for your Nintendos and Ubisofts, would necessitate selling a bajillion copies to be a success. And they’re free.

Genshin is an action game, whose playstyle changes depending on what character you’re playing — characters you get from gacha pulls, of course. In Zenless Zone Zero, the characters you can pull have different combo patterns, do different kinds of damage, and just feel different to play. And whereas in an early mobile gacha game like Love Nikki Dress UP! Queen the world was rudimentary, its modern descendant Infinity Nikki is, like Genshin, Breath of the Wild-esque. It is a massive open world, with collectibles and physics puzzles, platforming challenges, and a surprisingly involved storyline.

Genshin Impact was the subject of an interesting study where researchers asked young adults in Hong Kong to self-report on their gacha spending habits. They found that, like with gambling, players who are not feeling good tend to spend more. “Young adult gacha gamers experiencing greater stress and anxiety tend to spend more on gacha purchases, have more motives for gacha purchases, and participate in more gambling activities,” they wrote. “This group is at a particularly higher risk of becoming problem gamblers.”

One thing that is important to note is that Genshin Impact came out in 2020. The study was self-reported, and it was done during the early stages of the COVID-19 pandemic. It was a time when people were experiencing a lot of stress, and also fewer options to relieve that stress. We were all stuck inside gaming.

But the fact that stress can make people more likely to spend money on gacha shows that while the gacha model isn’t necessarily harmful to everyone, it is exploitative to everyone. Since I started writing this story, another self-reported study came out in Japan, where 18.8% of people in their 20s say they’ve spent money on gacha rather than on things like food or rent.

Following Genshin Impact’s release, miHoYo put out Honkai: Star Rail and Zenless Zone Zero. All are shiny, big-budget games that are free to play, but dangle the lure of making just one purchase in front of the player. Maybe you could drop five bucks on a handful of in-game currency to get one more pull. Or maybe just this month you’ll get the second tier of rewards on the game’s equivalent of a Battle Pass. The game is free, after all — but haven’t you enjoyed at least ten dollars’ worth of gameplay?

Image: Hoyoverse

I spent most of my December throwing myself into Infinity Nikki. I had been so stressed, and the game was so soothing. I logged in daily to fulfill my daily wishes and earn my XP, diamonds, Threads of Purity, and bling.
I accumulated massive amounts of resources. I haven’t spent money on the game. I’m trying not to, and so far, it’s been pretty easy. I’ve been super happy with how much stuff I can get for free, and how much I can do! I actually feel really good about that — which is what I said to my boyfriend, and he replied, “Yeah, that’s the point. That’s how they get you.”

And he’s right. Currently, Infinity Nikki players are embroiled in a war with developer Infold, after Infold introduced yet another currency type with deep ties to Nikki’s gacha system. Every one of these gacha games has its own tangled system of overlapping currencies. Some can only be used on gacha pulls. Some can only be used to upgrade items. Many of them can be purchased with human money.

Image: InFold Games/Papergames via Polygon

All of this adds up. According to Sensor Tower’s data, Genshin Impact earned over 36 million dollars on mobile alone in a single month of 2024. I don’t know what Dan DiIorio’s peak monthly revenue for Luck Be A Landlord was, but I’m pretty sure it wasn’t that.

A lot of the spending guardrails we see in games like these are actually the result of regulations in other territories, especially China, where gacha has been a big deal for a lot longer. For example, gacha games have a daily limit on loot boxes, with the number clearly displayed, and a system collectively called “pity,” where getting the banner item is guaranteed after a certain number of pulls. Lastly, developers have to be clear about what the odds are. When I log in to spend the Revelation Crystals I’ve spent weeks hoarding in my F2P Infinity Nikki experience, I know that I have a 1.5% chance of pulling a 5-star piece, and that the odds can go up to 6.06%, and that I am guaranteed to get one within 20 pulls, because of the pity system.

So, these odds are awful. But it is not as merciless as sitting down at a Vegas slot machine, an experience best described as “oh… that’s it?”
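To make that pity math a bit more concrete, here is a minimal Python sketch of how a pity system of the kind described above can lift a 1.5% per-pull rate into a much higher effective rate. It is an illustration under stated assumptions, not any game’s real drop table: only the 1.5% base chance and the 20-pull guarantee come from the paragraph above, while the soft-pity threshold and ramp values are made up for the example.

    import random

    # Illustrative pity-system parameters. The 1.5% base rate and the 20-pull
    # guarantee come from the article; the soft-pity threshold and ramp are
    # assumed values for the sake of the sketch, not a published drop table.
    BASE_RATE = 0.015
    SOFT_PITY_START = 15   # assumed: per-pull odds start climbing here
    RAMP_PER_PULL = 0.10   # assumed: how much the odds climb each pull after that
    HARD_PITY = 20         # guaranteed 5-star on this pull

    def pulls_until_five_star(rng: random.Random) -> int:
        """Simulate pulls until a 5-star drops and return how many it took."""
        for n in range(1, HARD_PITY):
            rate = BASE_RATE
            if n >= SOFT_PITY_START:
                rate += RAMP_PER_PULL * (n - SOFT_PITY_START + 1)
            if rng.random() < min(rate, 1.0):
                return n
        return HARD_PITY  # hard pity: the 20th pull always hits

    def average_pulls(trials: int = 200_000, seed: int = 1) -> float:
        rng = random.Random(seed)
        return sum(pulls_until_five_star(rng) for _ in range(trials)) / trials

    if __name__ == "__main__":
        avg = average_pulls()
        print(f"average pulls per 5-star: {avg:.1f}")
        # The effective rate lands well above the 1.5% base, in the same
        # ballpark as the consolidated ~6% figure quoted above.
        print(f"effective per-pull rate: {1 / avg:.1%}")

In this toy model the hard guarantee does most of the work: even with no ramp at all, a 1.5% chance capped at 20 pulls already works out to an effective rate of roughly 5 to 6 percent, which is why the advertised consolidated odds look so much friendlier than the base rate.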
There’s not a huge philosophical difference between buying a pack of loot boxes in Overwatch, a pull in Genshin Impact, or even a booster of Pokémon cards. You put in money, you get back randomized stuff that may or may not be what you want. In the dictionary definition, it’s a gamble. But unlike the slot machine, it’s not like you’re trying to win money by doing it, unless you’re selling those Pokémon cards, which is a topic for another time.

But since even a game where you don’t get anything, like Balatro or Luck Be A Landlord, can come under fire for promoting gambling to kids, it would seem appropriate for app stores and ratings boards to take a similarly hardline stance with gacha.

Instead, all these games are rated T for Teen by the ESRB, and PEGI 12 in the EU.

The ESRB ratings for these games note that they contain in-game purchases, including random items. Honkai: Star Rail’s rating specifically calls out a slot machine mechanic, where players spend tokens to win a prize. But other than calling out Honkai’s slot machine, app stores are not slapping Genshin or Nikki with an 18+ rating. Meanwhile, Balatro had a PEGI rating of 18 until a successful appeal in February 2025, and Luck Be a Landlord is still 17+ on Apple’s App Store.

Nobody knows what they’re doing

When I started researching this piece, I felt very strongly that it was absurd that Luck Be A Landlord and Balatro had age ratings this high.

I still believe that the way both devs have been treated by ratings boards is bad. Threatening an indie dev with a significant loss of income by pulling their game is bad; not giving them a way to defend themself or help them understand why it’s happening is even worse. It’s an extension of the general way that too-big-to-fail companies like Google treat all their customers.

DiIorio told me that while it felt like a human being had at least looked at Luck Be A Landlord to make the determination that it contained gambling themes, the emails he was getting were automatic, and he doesn’t have a contact at Google to ask why this happened or how he can avoid it in the future — an experience that will be familiar to anyone who has ever needed Google support.

But what’s changed for me is that I’m not actually sure anymore that games that don’t have gambling should be completely let off the hook for evoking gambling.

Exposing teens to simulated gambling without financial stakes could spark an interest in the real thing later on, according to a study in the International Journal of Environmental Research and Public Health. It’s the same reason you can’t mosey down to the drug store to buy candy cigarettes. Multiple studies showed that kids who ate candy cigarettes were more likely to take up smoking.

So while I still think rating something like Balatro 18+ is nuts, I also think that describing it appropriately might be reasonable. As a game, it’s completely divorced from literally any kind of play you would find in a casino — but I can see the concern that the thrill of flashy numbers and the shiny cards might encourage young players to try their hand at poker in a real casino, where a real house can take their money.

Maybe what’s more important than doling out high age ratings is helping people think about how media can affect us. In the same way that, when I was 12 and obsessed with The Matrix, my parents gently made sure that I knew that none of the violence was real and you can’t actually cartwheel through a hail of bullets in real life. Thanks, mom and dad!

But that’s an answer that’s a lot more abstract and difficult to implement than a big red 18+ banner. When it comes to gacha, I think we’re even less equipped to talk about these game mechanics, and I’m certain they’re not being age-rated appropriately. On the one hand, like I said earlier, gacha exploits the player’s desire for stuff that they are heavily manipulated to buy with real money. On the other hand, I think it’s worth acknowledging that there is a difference between gacha and casino gambling.

Problem gamblers aren’t satisfied by winning — the thing they’re addicted to is playing, and the risk that comes with it. In gacha games, players do report satisfaction when they achieve the prize they set out to get. And yes, in the game’s next season, the developer will be dangling a shiny new prize in front of them with the goal of starting the cycle over. But I think it’s fair to make the distinction, while still being highly critical of the model.

And right now, there is close to no incentive for app stores to crack down on gacha in any way. They get a cut of in-app purchases. Back in 2023, miHoYo tried a couple of times to set up payment systems that circumvented Apple’s 30% cut of in-app spending. Both times, it was thwarted by Apple, whose App Store generated trillion in developer billings and sales in 2022.

According to Apple itself, 90% of that money did not include any commission to Apple.
Fortunately for Apple, ten percent of a trillion dollars is still one hundred billion dollars, which I would also like to have in my bank account. Apple has zero reason to curb spending on games that have been earning millions of dollars every month for years.

And despite the popularity of Luck Be A Landlord and Balatro’s massive App Store success, these games will never be as lucrative. They’re one-time purchases, and they don’t have microtransactions. To add insult to injury, like most popular games, Luck Be A Landlord has a lot of clones. And from what I can tell, it doesn’t look like any of them have been made to indicate that their games contain the dreaded “gambling themes” that Google was so worried about in Landlord.

In particular, a game called SpinCraft: Roguelike from Sneaky Panda Games raised million in seed funding for “inventing the Luck-Puzzler genre,” which it introduced in 2022, while Luck Be A Landlord went into early access in 2021.

It’s free-to-play, has ads and in-app purchases, looks like Fisher-Price made a slot machine, and it’s rated E for Everyone, with no mention of gambling imagery in its rating. I reached out to the developers to ask if they had also been contacted by the Play Store to disclose that their game has gambling themes, but I haven’t heard back.

Borrowing mechanics in games is as old as time, and it’s something I in no way want to imply shouldn’t happen, because copyright is the killer of invention — but I think we can all agree that the system is broken.

There is no consistency in how games with random chance are treated. We still do not know how to talk about gambling, or gambling themes, and at the end of the day, the results of this are the same: the house always wins.
    Nobody understands gambling, especially in video games
    In 2025, it’s very difficult not to see gambling advertised everywhere. It’s on billboards and sports broadcasts. It’s on podcasts and printed on the turnbuckle of AEW’s pay-per-view shows. And it’s on app stores, where you can find the FanDuel and DraftKings sportsbooks, alongside glitzy digital slot machines. These apps all have the highest age ratings possible on Apple’s App Store and Google Play. But earlier this year, a different kind of app nearly disappeared from the Play Store entirely.Luck Be A Landlord is a roguelite deckbuilder from solo developer Dan DiIorio. DiIorio got word from Google in January 2025 that Luck Be A Landlord was about to be pulled, globally, because DiIorio had not disclosed the game’s “gambling themes” in its rating.In Luck Be a Landlord, the player takes spins on a pixel art slot machine to earn coins to pay their ever-increasing rent — a nightmare gamification of our day-to-day grind to remain housed. On app stores, it’s a one-time purchase of and it’s on Steam. On the Play Store page, developer Dan DiIorio notes, “This game does not contain any real-world currency gambling or microtransactions.”And it doesn’t. But for Google, that didn’t matter. First, the game was removed from the storefront in a slew of countries that have strict gambling laws. Then, at the beginning of 2025, Google told Dilorio that Luck Be A Landlord would be pulled globally because of its rating discrepancy, as it “does not take into account references to gambling”.DiIorio had gone through this song and dance before — previously, when the game was blocked, he would send back a message saying “hey, the game doesn’t have gambling,” and then Google would send back a screenshot of the game and assert that, in fact, it had.DiIorio didn’t agree, but this time they decided that the risk of Landlord getting taken down permanently was too great. They’re a solo developer, and Luck Be a Landlord had just had its highest 30-day revenue since release. So, they filled out the form confirming that Luck Be A Landlord has “gambling themes,” and are currently hoping that this will be the end of it.This is a situation that sucks for an indie dev to be in, and over email DiIorio told Polygon it was “very frustrating.”“I think it can negatively affect indie developers if they fall outside the norm, which indies often do,” they wrote. “It also makes me afraid to explore mechanics like this further. It stifles creativity, and that’s really upsetting.”In late 2024, the hit game Balatro was in a similar position. It had won numerous awards, and made in its first week on mobile platforms. And then overnight, the PEGI ratings board declared that the game deserved an adult rating.The ESRB had already rated it E10+ in the US, noting it has gambling themes. And the game was already out in Europe, making its overnight ratings change a surprise. Publisher PlayStack said the rating was given because Balatro has “prominent gambling imagery and material that instructs about gambling.”Balatro is basically Luck Be A Landlord’s little cousin. Developer LocalThunk was inspired by watching streams of Luck Be A Landlord, and seeing the way DiIorio had implemented deck-building into his slot machine. And like Luck Be A Landlord, Balatro is a one-time purchase, with no microtransactions.But the PEGI board noted that because the game uses poker hands, the skills the player learns in Balatro could translate to real-world poker.In its write-up, GameSpot noted that the same thing happened to a game called Sunshine Shuffle. 
It was temporarily banned from the Nintendo eShop, and also from the entire country of South Korea. Unlike Balatro, Sunshine Shuffle actually is a poker game, except you’re playing Texas Hold ‘Em — again for no real money — with cute animals.It’s common sense that children shouldn’t be able to access apps that allow them to gamble. But none of these games contain actual gambling — or do they?Where do we draw the line? Is it gambling to play any game that is also played in casinos, like poker or blackjack? Is it gambling to play a game that evokes the aesthetics of a casino, like cards, chips, dice, or slot machines? Is it gambling to wager or earn fictional money?Gaming has always been a lightning rod for controversy. Sex, violence, misogyny, addiction — you name it, video games have been accused of perpetrating or encouraging it. But gambling is gaming’s original sin. And it’s the one we still can’t get a grip on.The original link between gambling and gamingGetty ImagesThe association between video games and gambling all goes back to pinball. Back in the ’30s and ’40s, politicians targeted pinball machines for promoting gambling. Early pinball machines were less skill-based, and some gave cash payouts, so the comparison wasn’t unfair. Famously, mob-hating New York City mayor Fiorello LaGuardia banned pinball in the city, and appeared in a newsreel dumping pinball and slot machines into the Long Island Sound. Pinball machines spent some time relegated to the back rooms of sex shops and dive bars. But after some lobbying, the laws relaxed.By the 1970s, pinball manufacturers were also making video games, and the machines were side-by-side in arcades. Arcade machines, like pinball, took small coin payments, repeatedly, for short rounds of play. The disreputable funk of pinball basically rubbed off onto video games.Ever since video games rocked onto the scene, concerned and sometimes uneducated parties have been asking if they’re dangerous. And in general, studies have shown that they’re not. The same can’t be said about gambling — the practice of putting real money down to bet on an outcome.It’s a golden age for gambling2025 in the USA is a great time for gambling, which has been really profitable for gambling companies — to the tune of billion dollars of revenue in 2023.To put this number in perspective, the American Gaming Association, which is the casino industry’s trade group and has nothing to do with video games, reports that 2022’s gambling revenue was billion. It went up billion in a year.And this increase isn’t just because of sportsbooks, although sports betting is a huge part of it. Online casinos and brick-and-mortar casinos are both earning more, and as a lot of people have pointed out, gambling is being normalized to a pretty disturbing degree.Much like with alcohol, for a small percentage of people, gambling can tip from occasional leisure activity into addiction. The people who are most at risk are, by and large, already vulnerable: researchers at the Yale School of Medicine found that 96% of problem gamblers are also wrestling with other disorders, such as “substance use, impulse-control disorders, mood disorders, and anxiety disorders.”Even if you’re not in that group, there are still good reasons to be wary of gambling. People tend to underestimate their own vulnerability to things they know are dangerous for others. Someone else might bet beyond their means. But I would simply know when to stop.Maybe you do! 
But being blithely confident about it can make it hard to notice if you do develop a problem. Or if you already have one.Addiction changes the way your brain works. When you’re addicted to something, your participation in it becomes compulsive, at the expense of other interests and responsibilities. Someone might turn to their addiction to self-soothe when depressed or anxious. And speaking of those feelings, people who are depressed and anxious are already more vulnerable to addiction. Given the entire state of the world right now, this predisposition shines an ugly light on the numbers touted by the AGA. Is it good that the industry is reporting billion in additional earnings, when the economy feels so frail, when the stock market is ping ponging through highs and lows daily, when daily expenses are rising? It doesn’t feel good. In 2024, the YouTuber Drew Gooden turned his critical eye to online gambling. One of the main points he makes in his excellent video is that gambling is more accessible than ever. It’s on all our phones, and betting companies are using decades of well-honed app design and behavioral studies to manipulate users to spend and spend.Meanwhile, advertising on podcasts, billboards, TV, radio, and websites – it’s literally everywhere — tells you that this is fun, and you don’t even need to know what you’re doing, and you’re probably one bet away from winning back those losses.Where does Luck Be a Landlord come into this?So, are there gambling themes in Luck Be A Landlord? The game’s slot machine is represented in simple pixel art. You pay one coin to use it, and among the more traditional slot machine symbols are silly ones like a snail that only pays out after 4 spins.When I started playing it, my primary emotion wasn’t necessarily elation at winning coins — it was stress and disbelief when, in the third round of the game, the landlord increased my rent by 100%. What the hell.I don’t doubt that getting better at it would produce dopamine thrills akin to gambling — or playing any video game. But it’s supposed to be difficult, because that’s the joke. If you beat the game you unlock more difficulty modes where, as you keep paying rent, your landlord gets furious, and starts throwing made-up rules at you: previously rare symbols will give you less of a payout, and the very mechanics of the slot machine change.It’s a manifestation of the golden rule of casinos, and all of capitalism writ large: the odds are stacked against you. The house always wins. There is luck involved, to be sure, but because Luck Be A Landlord is a deck-builder, knowing the different ways you can design your slot machine to maximize payouts is a skill! You have some influence over it, unlike a real slot machine. The synergies that I’ve seen high-level players create are completely nuts, and obviously based on a deep understanding of the strategies the game allows.IMAGE: TrampolineTales via PolygonBalatro and Luck Be a Landlord both distance themselves from casino gambling again in the way they treat money. In Landlord, the money you earn is gold coins, not any currency we recognize. And the payouts aren’t actually that big. By the end of the core game, the rent money you’re struggling and scraping to earn… is 777 coins. In the post-game endless mode, payouts can get massive. But the thing is, to get this far, you can’t rely on chance. You have to be very good at Luck Be a Landlord.And in Balatro, the numbers that get big are your points. The actual dollar payments in a round of Balatro are small. 
These aren’t games about earning wads and wads of cash. So, do these count as “gambling themes”?We’ll come back to that question later. First, I want to talk about a closer analog to what we colloquially consider gambling: loot boxes and gacha games.Random rewards: from Overwatch to the rise of gachaRecently, I did something that I haven’t done in a really long time: I thought about Overwatch. I used to play Overwatch with my friends, and I absolutely made a habit of dropping 20 bucks here or there for a bunch of seasonal loot boxes. This was never a problem behavior for me, but in hindsight, it does sting that over a couple of years, I dropped maybe on cosmetics for a game that now I primarily associate with squandered potential.Loot boxes grew out of free-to-play mobile games, where they’re the primary method of monetization. In something like Overwatch, they functioned as a way to earn additional revenue in an ongoing game, once the player had already dropped 40 bucks to buy it.More often than not, loot boxes are a random selection of skins and other cosmetics, but games like Star Wars: Battlefront 2 were famously criticized for launching with loot crates that essentially made it pay-to-win – if you bought enough of them and got lucky.It’s not unprecedented to associate loot boxes with gambling. A 2021 study published in Addictive Behaviors showed that players who self-reported as problem gamblers also tended to spend more on loot boxes, and another study done in the UK found a similar correlation with young adults.While Overwatch certainly wasn’t the first game to feature cosmetic loot boxes or microtransactions, it’s a reference point for me, and it also got attention worldwide. In 2018, Overwatch was investigated by the Belgian Gaming Commission, which found it “in violation of gambling legislation” alongside FIFA 18 and Counter-Strike: Global Offensive. Belgium’s response was to ban the sale of loot boxes without a gambling license. Having a paid random rewards mechanic in a game is a criminal offense there. But not really. A 2023 study showed that 82% of iPhone games sold on the App Store in Belgium still use random paid monetization, as do around 80% of games that are rated 12+. The ban wasn’t effectively enforced, if at all, and the study recommends that a blanket ban wouldn’t actually be a practical solution anyway.Overwatch was rated T for Teen by the ESRB, and 12 by PEGI. When it first came out, its loot boxes were divisive. Since the mechanic came from F2P mobile games, which are often seen as predatory, people balked at seeing it in a big action game from a multi-million dollar publisher.At the time, the rebuttal was, “Well, at least it’s just cosmetics.” Nobody needs to buy loot boxes to be good at Overwatch.A lot has changed since 2016. Now we have a deeper understanding of how these mechanics are designed to manipulate players, even if they don’t affect gameplay. But also, they’ve been normalized. While there will always be people expressing disappointment when a AAA game has a paid random loot mechanic, it is no longer shocking.And if anything, these mechanics have only become more prevalent, thanks to the growth of gacha games. Gacha is short for “gachapon,” the Japanese capsule machines where you pay to receive one of a selection of random toys. Getty ImagesIn gacha games, players pay — not necessarily real money, but we’ll get to that — for a chance to get something. Maybe it’s a character, or a special weapon, or some gear — it depends on the game. 
Whatever it is, within that context, it’s desirable — and unlike the cosmetics of Overwatch, gacha pulls often do impact the gameplay.For example, in Infinity Nikki, you can pull for clothing items in these limited-time events. You have a chance to get pieces of a five-star outfit. But you also might pull one of a set of four-star items, or a permanent three-star piece. Of course, if you want all ten pieces of the five-star outfit, you have to do multiple pulls, each costing a handful of limited resources that you can earn in-game or purchase with money.Gacha was a fixture of mobile gaming for a long time, but in recent years, we’ve seen it go AAA, and global. MiHoYo’s Genshin Impact did a lot of that work when it came out worldwide on consoles and PC alongside its mobile release. Genshin and its successors are massive AAA games of a scale that, for your Nintendos and Ubisofts, would necessitate selling a bajillion copies to be a success. And they’re free.Genshin is an action game, whose playstyle changes depending on what character you’re playing — characters you get from gacha pulls, of course. In Zenless Zone Zero, the characters you can pull have different combo patterns, do different kinds of damage, and just feel different to play. And whereas in an early mobile gacha game like Love Nikki Dress UP! Queen the world was rudimentary, its modern descendant Infinity Nikki is, like Genshin, Breath of the Wild-esque. It is a massive open world, with collectibles and physics puzzles, platforming challenges, and a surprisingly involved storyline. Genshin Impact was the subject of an interesting study where researchers asked young adults in Hong Kong to self-report on their gacha spending habits. They found that, like with gambling, players who are not feeling good tend to spend more. “Young adult gacha gamers experiencing greater stress and anxiety tend to spend more on gacha purchases, have more motives for gacha purchases, and participate in more gambling activities,” they wrote. “This group is at a particularly higher risk of becoming problem gamblers.”One thing that is important to note is that Genshin Impact came out in 2020. The study was self-reported, and it was done during the early stages of the COVID-19 pandemic. It was a time when people were experiencing a lot of stress, and also fewer options to relieve that stress. We were all stuck inside gaming.But the fact that stress can make people more likely to spend money on gacha shows that while the gacha model isn’t necessarily harmful to everyone, it is exploitative to everyone. Since I started writing this story, another self-reported study came out in Japan, where 18.8% of people in their 20s say they’ve spent money on gacha rather than on things like food or rent.Following Genshin Impact’s release, MiHoYo put out Honkai: Star Rail and Zenless Zone Zero. All are shiny, big-budget games that are free to play, but dangle the lure of making just one purchase in front of the player. Maybe you could drop five bucks on a handful of in-game currency to get one more pull. Or maybe just this month you’ll get the second tier of rewards on the game’s equivalent of a Battle Pass. The game is free, after all — but haven’t you enjoyed at least ten dollars’ worth of gameplay? Image: HoyoverseI spent most of my December throwing myself into Infinity Nikki. I had been so stressed, and the game was so soothing. I logged in daily to fulfill my daily wishes and earn my XP, diamonds, Threads of Purity, and bling. 
I accumulated massive amounts of resources. I haven’t spent money on the game. I’m trying not to, and so far, it’s been pretty easy. I’ve been super happy with how much stuff I can get for free, and how much I can do! I actually feel really good about that — which is what I said to my boyfriend, and he replied, “Yeah, that’s the point. That’s how they get you.”And he’s right. Currently, Infinity Nikki players are embroiled in a war with developer Infold, after Infold introduced yet another currency type with deep ties to Nikki’s gacha system. Every one of these gacha games has its own tangled system of overlapping currencies. Some can only be used on gacha pulls. Some can only be used to upgrade items. Many of them can be purchased with human money.Image: InFold Games/Papergames via PolygonAll of this adds up. According to Sensor Towers’ data, Genshin Impact earned over 36 million dollars on mobile alone in a single month of 2024. I don’t know what Dan DiIorio’s peak monthly revenue for Luck Be A Landlord was, but I’m pretty sure it wasn’t that.A lot of the spending guardrails we see in games like these are actually the result of regulations in other territories, especially China, where gacha has been a big deal for a lot longer. For example, gacha games have a daily limit on loot boxes, with the number clearly displayed, and a system collectively called “pity,” where getting the banner item is guaranteed after a certain number of pulls. Lastly, developers have to be clear about what the odds are. When I log in to spend the Revelation Crystals I’ve spent weeks hoarding in my F2P Infinity Nikki experience, I know that I have a 1.5% chance of pulling a 5-star piece, and that the odds can go up to 6.06%, and that I am guaranteed to get one within 20 pulls, because of the pity system.So, these odds are awful. But it is not as merciless as sitting down at a Vegas slot machine, an experience best described as “oh… that’s it?”There’s not a huge philosophical difference between buying a pack of loot boxes in Overwatch, a pull in Genshin Impact, or even a booster of Pokémon cards. You put in money, you get back randomized stuff that may or may not be what you want. In the dictionary definition, it’s a gamble. But unlike the slot machine, it’s not like you’re trying to win money by doing it, unless you’re selling those Pokémon cards, which is a topic for another time.But since even a game where you don’t get anything, like Balatro or Luck Be A Landlord, can come under fire for promoting gambling to kids, it would seem appropriate for app stores and ratings boards to take a similarly hardline stance with gacha.Instead, all these games are rated T for Teen by the ESRB, and PEGI 12 in the EU.The ESRB ratings for these games note that they contain in-game purchases, including random items. Honkai: Star Rail’s rating specifically calls out a slot machine mechanic, where players spend tokens to win a prize. But other than calling out Honkai’s slot machine, app stores are not slapping Genshin or Nikki with an 18+ rating. Meanwhile, Balatro had a PEGI rating of 18 until a successful appeal in February 2025, and Luck Be a Landlord is still 17+ on Apple’s App Store.Nobody knows what they’re doingWhen I started researching this piece, I felt very strongly that it was absurd that Luck Be A Landlord and Balatro had age ratings this high.I still believe that the way both devs have been treated by ratings boards is bad. 
Threatening an indie dev with a significant loss of income by pulling their game is bad, not giving them a way to defend themself or help them understand why it’s happening is even worse. It’s an extension of the general way that too-big-to-fail companies like Google treat all their customers.DiIorio told me that while it felt like a human being had at least looked at Luck Be A Landlord to make the determination that it contained gambling themes, the emails he was getting were automatic, and he doesn’t have a contact at Google to ask why this happened or how he can avoid it in the future — an experience that will be familiar to anyone who has ever needed Google support. But what’s changed for me is that I’m not actually sure anymore that games that don’t have gambling should be completely let off the hook for evoking gambling.Exposing teens to simulated gambling without financial stakes could spark an interest in the real thing later on, according to a study in the International Journal of Environmental Research and Public Health. It’s the same reason you can’t mosey down to the drug store to buy candy cigarettes. Multiple studies were done that showed kids who ate candy cigarettes were more likely to take up smokingSo while I still think rating something like Balatro 18+ is nuts, I also think that describing it appropriately might be reasonable. As a game, it’s completely divorced from literally any kind of play you would find in a casino — but I can see the concern that the thrill of flashy numbers and the shiny cards might encourage young players to try their hand at poker in a real casino, where a real house can take their money.Maybe what’s more important than doling out high age ratings is helping people think about how media can affect us. In the same way that, when I was 12 and obsessed with The Matrix, my parents gently made sure that I knew that none of the violence was real and you can’t actually cartwheel through a hail of bullets in real life. Thanks, mom and dad!But that’s an answer that’s a lot more abstract and difficult to implement than a big red 18+ banner. When it comes to gacha, I think we’re even less equipped to talk about these game mechanics, and I’m certain they’re not being age-rated appropriately. On the one hand, like I said earlier, gacha exploits the player’s desire for stuff that they are heavily manipulated to buy with real money. On the other hand, I think it’s worth acknowledging that there is a difference between gacha and casino gambling.Problem gamblers aren’t satisfied by winning — the thing they’re addicted to is playing, and the risk that comes with it. In gacha games, players do report satisfaction when they achieve the prize they set out to get. And yes, in the game’s next season, the developer will be dangling a shiny new prize in front of them with the goal of starting the cycle over. But I think it’s fair to make the distinction, while still being highly critical of the model.And right now, there is close to no incentive for app stores to crack down on gacha in any way. They get a cut of in-app purchases. Back in 2023, miHoYo tried a couple of times to set up payment systems that circumvented Apple’s 30% cut of in-app spending. Both times, it was thwarted by Apple, whose App Store generated trillion in developer billings and sales in 2022.According to Apple itself, 90% of that money did not include any commission to Apple. 
Fortunately for Apple, ten percent of a trillion dollars is still one hundred billion dollars, which I would also like to have in my bank account. Apple has zero reason to curb spending on games that have been earning millions of dollars every month for years.And despite the popularity of Luck Be A Landlord and Balatro’s massive App Store success, these games will never be as lucrative. They’re one-time purchases, and they don’t have microtransactions. To add insult to injury, like most popular games, Luck Be A Landlord has a lot of clones. And from what I can tell, it doesn’t look like any of them have been made to indicate that their games contain the dreaded “gambling themes” that Google was so worried about in Landlord.In particular, a game called SpinCraft: Roguelike from Sneaky Panda Games raised million in seed funding for “inventing the Luck-Puzzler genre,” which it introduced in 2022, while Luck Be A Landlord went into early access in 2021.It’s free-to-play, has ads and in-app purchases, looks like Fisher Price made a slot machine, and it’s rated E for everyone, with no mention of gambling imagery in its rating. I reached out to the developers to ask if they had also been contacted by the Play Store to disclose that their game has gambling themes, but I haven’t heard back.Borrowing mechanics in games is as old as time, and it’s something I in no way want to imply shouldn’t happen because copyright is the killer of invention — but I think we can all agree that the system is broken.There is no consistency in how games with random chance are treated. We still do not know how to talk about gambling, or gambling themes, and at the end of the day, the results of this are the same: the house always wins.See More: #nobody #understands #gambling #especially #video
    WWW.POLYGON.COM
    Nobody understands gambling, especially in video games
  • Guillermo del Toro’s Frankenstein Adapts Most Ignored (and Scary) Part of the Book

    Frankenstein, the post-Enlightenment novel written by a teenage girl that invented modern science fiction, has long been Guillermo del Toro’s white whale. The Mexican filmmaker has eyed adapting Mary Shelley’s story of a modern-day Prometheus since the 1990s. And now it’s almost here.
    It’s a good feeling for the filmmaker and his admirers… but it is also an occasion of mounting excitement for fans of Shelley, since so much of her 1818 masterpiece remains associated mostly with the page in spite of the countless film adaptations based on the story of a man and his monster. And judging by the first remarkable teaser trailer for Frankenstein, introduced by del Toro and stars Oscar Isaac and Mia Goth at Netflix’s Tudum event Saturday night, it’s safe to say that del Toro is pulling from Shelley directly… including a wrap-around story of hers that is seldom attempted on the screen.

    “What manner of creature is that?” a shaken voice whispers in the new Frankenstein trailer. “What manner of devil made him?” We never exactly see what countenance could earn the dehumanizing term “creature” in the trailer, but we feel his presence. He is a silhouette, a shadow—a vengeful wraith—walking across a sheet of ice with the sunset to his back. And he is approaching what is demonstrably a half-mad, frostbitten Victor Frankenstein (Oscar Isaac), who can only say in his frozen delirium, “I did.” Victor is the devil who made that.
    For fans of Shelley’s novel, or just those with a good memory of Kenneth Branagh’s now mostly forgotten 1994 adaptation of the book, this framing device should send a chill of anticipation through the spine as giddy as any more familiar promises of gods and monsters. That’s because del Toro is adapting the cruel framing device Shelley used to introduce both Victor and the creature he pursues. Indeed, most of Frankenstein on the page is told in flashback and relayed by our protagonist Victor as a kind of last rites confession as he dies from fever and starvation after years and years of chasing his creation north. Always north.

    Whereas most of the novel actually takes place at the end of the Enlightenment era, on the cusp of the 19th century—the glory days of Mary’s famous philosophical and activist parents—the only “modern” part of the story is the way it compares Victor’s zeal for discovery with what was only a dawning fascination in the 19th century with reaching the North Pole (a feat that wouldn’t actually be accomplished until the early 20th century).
    In the book, Victor’s tale of obsession with greatness causes a captain who has led his men into becoming stuck in the Arctic ice to reflect on the potentially lethal consequences of his own ambitions—especially after he meets the Monster, who later verifies Victor’s story by mourning over the scientist’s body.
    The framing device is fascinating because of where it places the story in history, but also because it elevates the tragedy of the so-called Monster and his Creator. Who was really hunting whom at the end of the world, at the North Pole, and who is truly the monster? The Creature did terrible things, but how much of that is Victor’s fault for abandoning his progeny to a lifetime of loneliness, hatred, and despair, hated even by the one who gave him life? Both suffer tragic fates in the end, in the cold. Unloved and unremembered, except by one sea captain no one will believe.
    While it remains to be seen if del Toro is doing a straight-ahead faithful adaptation of the novel—in fact we can assume he is not since Isaac’s Victor dresses more like a Victorian of the mid-19th century than a contemporary of Voltaire or Thomas Jefferson, and we also know that Burn Gorman appears in the movie as Fritz, a character created by Universal Pictures in the iconic 1931 film adaptation starring Boris Karloff—it is fascinating to see the master filmmaker returning to the source material.
    It also raises questions of just where he will go with Jacob Elordi’s intentionally obscured and hidden Monster. We know from the trailer’s end, with the Monster attacking the crew of the North Pole-bound ship (a beat that, we might add, is not in the novel), that he has the power of speech. It will be curious indeed to learn if he proves to be a Milton-esque philosopher demon, which is also a largely ignored element of Shelley’s original story.
    Frankenstein is expected to premiere in November on Netflix.
    #guillermo #del #toros #frankenstein #adapts
  • Transparent Design: How See-Through Materials Are Revolutionizing Architecture & Product Design

    Transparent design is the intentional use of see-through or translucent materials and visual strategies to evoke openness, honesty, and fluidity in both spatial and product design. It enhances light flow, visibility, and interaction, blurring boundaries between spaces or revealing inner layers of products.
    In interiors, this manifests through glass walls, acrylic dividers, and open layouts that invite natural light and visual connection. In product design, transparency often exposes internal mechanisms, fostering trust and curiosity by making functions visible. It focuses on simplicity, clarity, and minimalist form, creating seamless connections between objects and their environments. Let’s now explore how transparency shapes the function, experience, and emotional impact of spatial and product design.
    Transparent Spatial Design
    Transparency in spatial design serves as a powerful architectural language that transcends mere material choice, creating profound connections between spaces and their inhabitants. By employing translucent or clear elements, designers can dissolve traditional boundaries, allowing light to penetrate deeply into interiors while establishing visual relationships between previously separated areas. This permeability creates a dynamic spatial experience where environments flow into one another, expanding perceived dimensions and fostering a sense of openness. The strategic use of transparent elements – whether through glass partitions, open floor plans, or permeable screens – transforms rigid spatial hierarchies into fluid, interconnected zones that respond to contemporary needs for flexibility and connection with both surrounding spaces and natural environments.
    Beyond its physical manifestations, transparency embodies deeper philosophical principles in design, representing honesty, clarity, and accessibility. It democratizes space by removing visual barriers that traditionally signaled exclusion or privacy, instead promoting inclusivity and shared experience. In public buildings, transparent features invite engagement and participation, while in residential contexts, they nurture connection to nature and enhance wellbeing through abundant natural light. This approach challenges designers to thoughtfully balance openness with necessary privacy, creating nuanced spatial sequences that can reveal or conceal as needed. When skillfully implemented, transparency becomes more than an aesthetic choice; it becomes a fundamental design strategy that shapes how we experience, navigate, and emotionally respond to our built environment.
    1. Expands Perception of Space
    Transparency in spatial design enhances how people perceive space by blurring the boundaries between rooms and creating a seamless connection between the indoors and the outdoors. Materials like glass and acrylic create visual continuity, making interiors feel larger, more open, and better integrated.
    This approach encourages a fluid transition between spaces, eliminates confinement, and promotes spatial freedom. As a result, transparent design contributes to an inviting atmosphere while maximizing natural views and light penetration throughout the environment.

    Nestled in St. Donat near Montreal, the Apple Tree House by ACDF Architecture is a striking example of transparent design rooted in emotional memory. Wrapped around a central courtyard with a symbolic apple tree, the low-slung home features expansive glass walls that create continuous visual access to nature. The transparent layout not only blurs the boundaries between indoors and outdoors but also transforms the apple tree into a living focal point, one that is visible from multiple angles and spaces within the house.

    This thoughtful transparency allows natural light to flood the interiors while connecting the home’s occupants with the changing seasons outside. The home’s square-shaped plan includes three black-clad volumes that house bedrooms, a lounge, and service areas. Despite the openness, privacy is preserved through deliberate wall placements. Wooden ceilings and concrete floors add warmth and texture, but it’s the full-height glazing that defines the home, framing nature as a permanent, ever-evolving artwork at its heart.
    2. Enhances the Feeling of Openness
    One of the core benefits of transparent design is its ability to harness natural light, transforming enclosed areas into luminous, uplifting environments. By using translucent or clear materials, designers reduce the need for artificial lighting and minimize visual barriers.
    This not only improves energy efficiency but also fosters emotional well-being by connecting occupants to daylight and exterior views. Ultimately, transparency promotes a feeling of openness and calm, aligning with minimalist and modern architectural principles.

    The Living O’Pod by UN10 Design Studio is a transparent, two-story pod designed as a minimalist retreat that fully immerses its occupants in nature. Built with a steel frame and glass panels all around, this glass bubble offers uninterrupted panoramic views of the Finnish wilderness. Its remote location provides the privacy needed to embrace transparency, allowing residents to enjoy stunning sunrises, sunsets, and starry nights from within. The open design blurs the line between indoors and outdoors, creating a unique connection with the environment.

    Located in Repovesi, Finland, the pod’s interiors feature warm plywood floors and walls that complement the natural setting. A standout feature is its 360° rotation, which allows the entire structure to turn and capture optimal light and views throughout the day. Equipped with thermal insulation and heating, the Living O’Pod ensures comfort year-round and builds a harmonious relationship between people and nature.
    3. Encourages Interaction
    Transparent design reimagines interiors as active participants in the user experience, rather than passive backgrounds. Open sightlines and clear partitions encourage movement, visibility, and spontaneous interaction among occupants. This layout strategy fosters social connectivity, enhances spatial navigation, and aligns with contemporary needs for collaboration and flexibility.
    Whether in residential, commercial, or public spaces, transparency supports an intuitive spatial flow that strengthens the emotional and functional relationship between people and their environment.

    The Beach Cabin on the Baltic Sea, designed by Peter Kuczia, is a striking architectural piece located near Gdansk in northern Poland. This small gastronomy facility combines simplicity with bold design, harmoniously fitting into the beach environment while standing out through its innovative form. The structure is composed of two distinct parts: an enclosed space and an expansive open living and dining area that maximizes natural light and offers shelter. This dual arrangement creates a balanced yet dynamic architectural composition that respects the surrounding landscape.

    A defining feature of the cabin is its open dining area, which is divided into two sections—one traditional cabin-style and the other constructed entirely of glass. The transparent glass facade provides uninterrupted panoramic views of the Baltic Sea, the shoreline, and the sky, enhancing the connection between interior and nature. Elevated on stilts, the building appears to float above the sand, minimizing environmental impact and contributing to its ethereal, dreamlike quality.
    Transparent Product Design
    In product design, transparency serves as both a functional strategy and a powerful communicative tool that transforms the relationship between users and objects. By revealing internal components and operational mechanisms through clear or translucent materials, designers create an immediate visual understanding of how products function, demystifying technology and inviting engagement. This design approach establishes an honest dialogue with consumers, building trust through visibility rather than concealment. Beyond mere aesthetics, transparent design celebrates the beauty of engineering, turning circuit boards, gears, and mechanical elements into intentional visual features that tell the product’s story. From the nostalgic appeal of see-through gaming consoles to modern tech accessories, this approach satisfies our innate curiosity about how things work while creating a more informed user experience.
    The psychological impact of transparency in products extends beyond functional clarity to create deeper emotional connections. When users can observe a product’s inner workings, they develop increased confidence in its quality and craftsmanship, fostering a sense of reliability that opaque designs often struggle to convey. This visibility also democratizes understanding, making complex technologies more accessible and less intimidating to diverse users. Transparent design elements can evoke powerful nostalgic associations while simultaneously appearing futuristic and innovative, creating a timeless appeal that transcends trends. By embracing transparency, designers reject the notion that complexity should be hidden, instead celebrating the intricate engineering that powers our everyday objects. This philosophy aligns perfectly with contemporary values of authenticity and mindful consumption, where users increasingly seek products that communicate honesty in both form and function.
    1. Reveals Functionality
    Transparent product design exposes internal components like wiring, gears, or circuits, turning functional parts into visual features. This approach demystifies the object, inviting users to understand how it works rather than hiding its complexity. It fosters appreciation for craftsmanship and engineering while encouraging educational curiosity. By showcasing what lies beneath the surface, designers build an honest relationship with consumers that is based on clarity, trust, and visible function.

    Packing a backpack often means tossing everything in and hoping for the best—until you need something fast. This transparent modular backpack concept reimagines that daily hassle with a clear, compartmentalized design that lets you see all your gear at a glance. No more digging through a dark abyss—every item has its visible place. The bag features four detachable, differently sized boxes that snap together with straps, letting you customize what you carry. Grab just the tech module or gym gear block and go—simple, efficient, and streamlined. Unlike traditional organizers that hide contents in pouches, the transparent material keeps everything in plain sight, saving time and frustration.

    While it raises valid concerns around privacy and security, the clarity and convenience it offers make it ideal for fast-paced, on-the-go lifestyles. With form meeting function, this concept shows how transparent design can transform not just how a bag looks, but how it works.
    2. Enhances User Engagement
    When users can see how a product operates, they feel more confident using it. Transparent casings invite interaction by reducing uncertainty about internal processes. This visible clarity reassures users about the product’s integrity and quality, creating a psychological sense of openness and reliability.
    Especially in tech and appliances, this strategy deepens user trust and adds emotional value by allowing a more intimate connection with the design’s purpose and construction.

    The transparent Sony Glass Blue WF-C710N earbuds represent something more meaningful than a mere aesthetic choice, embodying a refreshing philosophy of technological honesty. While most devices conceal their inner workings behind opaque shells, Sony’s decision to reveal the intricate circuitry and precision components celebrates the engineering artistry that makes these tiny audio marvels possible.

    As you catch glimpses of copper coils and circuit boards through the crystal-clear housing, there’s a renewed appreciation for the invisible complexity that delivers your favorite music, serving as a visual reminder that sometimes the most beautiful designs are those that have nothing to hide.
    3. Celebrates Aesthetic Engineering
    Transparency turns utilitarian details into design features, allowing users to visually experience the beauty of inner mechanisms. This trend, seen in everything from vintage electronics to modern gadgets and watches, values technical artistry as much as outer form.
    Transparent design redefines aesthetics by focusing on the raw, mechanical truth of a product. It appeals to minimalism and industrial design lovers, offering visual depth and storytelling through exposed structure rather than decorative surface embellishment.

    DAB Motors’ 1α Transparent Edition brings retro tech flair into modern mobility with its striking transparent bodywork. Inspired by the see-through gadgets of the ’90s—like the Game Boy Color and clear Nintendo controllers—this electric motorcycle reveals its inner mechanics with style. The semi-translucent panels offer a rare peek at the bike’s intricate engineering, blending nostalgia with innovation. Carbon fiber elements, sourced from repurposed Airbus materials, complement the lightweight transparency, creating a visual experience that’s both futuristic and rooted in classic design aesthetics.

    The see-through design isn’t just for looks—it enhances the connection between rider and machine. Exposed components like the integrated LCD dashboard, lenticular headlight, and visible frame structure emphasize function and precision. This openness aligns with a broader transparent design philosophy, where clarity and honesty in construction are celebrated. The DAB 1α turns heads not by hiding complexity, but by proudly displaying it, making every ride a statement in motion.
    Beyond just materials, transparent design also reflects a deeper design philosophy that values clarity in purpose, function, and sustainability. It supports minimalist thinking by focusing on what’s essential, reducing visual clutter, and making spaces or products easier to understand and engage with. Whether in interiors or objects, transparency helps create a more honest, functional, and connected user experience.
    #transparent #design #how #seethrough #materials
    Transparent Design: How See-Through Materials Are Revolutionizing Architecture & Product Design
    Transparent design is the intentional use of see-through or translucent materials and visual strategies to evoke openness, honesty, and fluidity in both spatial and product design. It enhances light flow, visibility, and interaction, blurring boundaries between spaces or revealing inner layers of products. In interiors, this manifests through glass walls, acrylic dividers, and open layouts that invite natural light and visual connection. Transparency in product design often exposes internal mechanisms in products, fostering trust and curiosity by making functions visible. It focuses on simplicity, clarity, and minimalist form, creating seamless connections between objects and their environments. Let’s now explore how transparency shapes the function, experience, and emotional impact of spatial and product design. Transparent Spatial Design Transparency in spatial design serves as a powerful architectural language that transcends mere material choice, creating profound connections between spaces and their inhabitants. By employing translucent or clear elements, designers can dissolve traditional boundaries, allowing light to penetrate deeply into interiors while establishing visual relationships between previously separated areas. This permeability creates a dynamic spatial experience where environments flow into one another, expanding perceived dimensions and fostering a sense of openness. The strategic use of transparent elements – whether through glass partitions, open floor plans, or permeable screens – transforms rigid spatial hierarchies into fluid, interconnected zones that respond to contemporary needs for flexibility and connection with both surrounding spaces and natural environments. Beyond its physical manifestations, transparency embodies deeper philosophical principles in design, representing honesty, clarity, and accessibility. It democratizes space by removing visual barriers that traditionally signaled exclusion or privacy, instead promoting inclusivity and shared experience. In public buildings, transparent features invite engagement and participation, while in residential contexts, they nurture connection to nature and enhance wellbeing through abundant natural light. This approach challenges designers to thoughtfully balance openness with necessary privacy, creating nuanced spatial sequences that can reveal or conceal as needed. When skillfully implemented, transparency becomes more than an aesthetic choice, it becomes a fundamental design strategy that shapes how we experience, navigate, and emotionally respond to our built environment. 1. Expands Perception of Space Transparency in spatial design enhances how people perceive space by blurring the boundaries between rooms and creating a seamless connection between the indoors and the outdoors. Materials like glass and acrylic create visual continuity, making interiors feel larger, more open, and seamlessly integrated. This approach encourages a fluid transition between spaces, eliminates confinement, and promotes spatial freedom. As a result, transparent design contributes to an inviting atmosphere while maximising natural views and light penetration throughout the environment. Nestled in St. Donat near Montreal, the Apple Tree House by ACDF Architecture is a striking example of transparent design rooted in emotional memory. Wrapped around a central courtyard with a symbolic apple tree, the low-slung home features expansive glass walls that create continuous visual access to nature. 
The transparent layout not only blurs the boundaries between indoors and outdoors but also transforms the apple tree into a living focal point and is visible from multiple angles and spaces within the house. This thoughtful transparency allows natural light to flood the interiors while connecting the home’s occupants with the changing seasons outside. The home’s square-shaped plan includes three black-clad volumes that house bedrooms, a lounge, and service areas. Despite the openness, privacy is preserved through deliberate wall placements. Wooden ceilings and concrete floors add warmth and texture, but it’s the full-height glazing that defines the home that frames nature as a permanent, ever-evolving artwork at its heart. 2. Enhances the Feeling of Openness One of the core benefits of transparent design is its ability to harness natural light, transforming enclosed areas into luminous, uplifting environments. By using translucent or clear materials, designers reduce the need for artificial lighting and minimize visual barriers. This not only improves energy efficiency but also fosters emotional well-being by connecting occupants to daylight and exterior views. Ultimately, transparency promotes a feeling of openness and calm, aligning with minimalist and modern architectural principles. The Living O’Pod by UN10 Design Studio is a transparent, two-story pod designed as a minimalist retreat that fully immerses its occupants in nature. Built with a steel frame and glass panels all around, this glass bubble offers uninterrupted panoramic views of the Finnish wilderness. Its remote location provides the privacy needed to embrace transparency, allowing residents to enjoy stunning sunrises, sunsets, and starry nights from within. The open design blurs the line between indoors and outdoors, creating a unique connection with the environment. Located in Repovesi, Finland, the pod’s interiors feature warm plywood floors and walls that complement the natural setting. A standout feature is its 360° rotation, which allows the entire structure to turn and capture optimal light and views throughout the day. Equipped with thermal insulation and heating, the Living O’Pod ensures comfort year-round and builds a harmonious relationship between people and nature. 3. Encourages Interaction Transparent design reimagines interiors as active participants in the user experience, rather than passive backgrounds. Open sightlines and clear partitions encourage movement, visibility, and spontaneous interaction among occupants. This layout strategy fosters social connectivity, enhances spatial navigation, and aligns with contemporary needs for collaboration and flexibility. Whether in residential, commercial, or public spaces, transparency supports an intuitive spatial flow that strengthens the emotional and functional relationship between people and their environment. The Beach Cabin on the Baltic Sea, designed by Peter Kuczia, is a striking architectural piece located near Gdansk in northern Poland. This small gastronomy facility combines simplicity with bold design, harmoniously fitting into the beach environment while standing out through its innovative form. The structure is composed of two distinct parts: an enclosed space and an expansive open living and dining area that maximizes natural light and offers shelter. This dual arrangement creates a balanced yet dynamic architectural composition that respects the surrounding landscape. 
    A defining feature of the cabin is its open dining area, which is divided into two sections—one traditional cabin-style and the other constructed entirely of glass. The transparent glass facade provides uninterrupted panoramic views of the Baltic Sea, the shoreline, and the sky, enhancing the connection between interior and nature. Elevated on stilts, the building appears to float above the sand, minimizing environmental impact and contributing to its ethereal, dreamlike quality.

    Transparent Product Design

    In product design, transparency serves as both a functional strategy and a powerful communicative tool that transforms the relationship between users and objects. By revealing internal components and operational mechanisms through clear or translucent materials, designers create an immediate visual understanding of how products function, demystifying technology and inviting engagement. This design approach establishes an honest dialogue with consumers, building trust through visibility rather than concealment. Beyond mere aesthetics, transparent design celebrates the beauty of engineering, turning circuit boards, gears, and mechanical elements into intentional visual features that tell the product’s story. From the nostalgic appeal of see-through gaming consoles to modern tech accessories, this approach satisfies our innate curiosity about how things work while creating a more informed user experience.

    The psychological impact of transparency in products extends beyond functional clarity to create deeper emotional connections. When users can observe a product’s inner workings, they develop increased confidence in its quality and craftsmanship, fostering a sense of reliability that opaque designs often struggle to convey. This visibility also democratizes understanding, making complex technologies more accessible and less intimidating to diverse users. Transparent design elements can evoke powerful nostalgic associations while simultaneously appearing futuristic and innovative, creating a timeless appeal that transcends trends. By embracing transparency, designers reject the notion that complexity should be hidden, instead celebrating the intricate engineering that powers our everyday objects. This philosophy aligns perfectly with contemporary values of authenticity and mindful consumption, where users increasingly seek products that communicate honesty in both form and function.

    1. Reveals Functionality

    Transparent product design exposes internal components like wiring, gears, or circuits, turning functional parts into visual features. This approach demystifies the object, inviting users to understand how it works rather than hiding its complexity. It fosters appreciation for craftsmanship and engineering while encouraging educational curiosity. By showcasing what lies beneath the surface, designers build an honest relationship with consumers that is based on clarity, trust, and visible function.

    Packing a backpack often means tossing everything in and hoping for the best—until you need something fast. This transparent modular backpack concept reimagines that daily hassle with a clear, compartmentalized design that lets you see all your gear at a glance. No more digging through a dark abyss—every item has its visible place. The bag features four detachable, differently sized boxes that snap together with straps, letting you customize what you carry. Grab just the tech module or gym gear block and go—simple, efficient, and streamlined.
    Unlike traditional organizers that hide contents in pouches, the transparent material keeps everything in plain sight, saving time and frustration. While it raises valid concerns around privacy and security, the clarity and convenience it offers make it ideal for fast-paced, on-the-go lifestyles. With form meeting function, this concept shows how transparent design can transform not just how a bag looks, but how it works.

    2. Enhances User Engagement

    When users can see how a product operates, they feel more confident using it. Transparent casings invite interaction by reducing uncertainty about internal processes. This visible clarity reassures users about the product’s integrity and quality, creating a psychological sense of openness and reliability. Especially in tech and appliances, this strategy deepens user trust and adds emotional value by allowing a more intimate connection with the design’s purpose and construction.

    The transparent Sony Glass Blue WF-C710N earbuds represent something more meaningful than a mere aesthetic choice, embodying a refreshing philosophy of technological honesty. While most devices conceal their inner workings behind opaque shells, Sony’s decision to reveal the intricate circuitry and precision components celebrates the engineering artistry that makes these tiny audio marvels possible. As you catch glimpses of copper coils and circuit boards through the crystal-clear housing, there’s a renewed appreciation for the invisible complexity that delivers your favorite music, serving as a visual reminder that sometimes the most beautiful designs are those that have nothing to hide.

    3. Celebrates Aesthetic Engineering

    Transparency turns utilitarian details into design features, allowing users to visually experience the beauty of inner mechanisms. This trend, seen in everything from vintage electronics to modern gadgets and watches, values technical artistry as much as outer form. Transparent design redefines aesthetics by focusing on the raw, mechanical truth of a product. It appeals to minimalism and industrial design lovers, offering visual depth and storytelling through exposed structure rather than decorative surface embellishment.

    DAB Motors’ 1α Transparent Edition brings retro tech flair into modern mobility with its striking transparent bodywork. Inspired by the see-through gadgets of the ’90s—like the Game Boy Color and clear Nintendo controllers—this electric motorcycle reveals its inner mechanics with style. The semi-translucent panels offer a rare peek at the bike’s intricate engineering, blending nostalgia with innovation. Carbon fiber elements, sourced from repurposed Airbus materials, complement the lightweight transparency, creating a visual experience that’s both futuristic and rooted in classic design aesthetics.

    The see-through design isn’t just for looks—it enhances the connection between rider and machine. Exposed components like the integrated LCD dashboard, lenticular headlight, and visible frame structure emphasize function and precision. This openness aligns with a broader transparent design philosophy, where clarity and honesty in construction are celebrated. The DAB 1α turns heads not by hiding complexity, but by proudly displaying it, making every ride a statement in motion.

    Beyond just materials, transparent design also reflects a deeper design philosophy that values clarity in purpose, function, and sustainability. It supports minimalist thinking by focusing on what’s essential, reducing visual clutter, and making spaces or products easier to understand and engage with. Whether in interiors or objects, transparency helps create a more honest, functional, and connected user experience.

    The post Transparent Design: How See-Through Materials Are Revolutionizing Architecture & Product Design first appeared on Yanko Design.
  • Microsoft and Google pursue differing AI agent approaches in M365 and Workspace

    Microsoft and Google are taking distinctive approaches with AI agents in their productivity suites, and enterprises need to account for the differences when formulating digital labor strategies, analysts said.

    In recent months, both companies have announced a dizzying array of new agents aimed at extracting value from corporate documents and maximizing efficiency. The tech giants have dropped numerous hints about where they’re headed with AI agents in their respective office suites, Microsoft 365 and Google Workspace.

    Microsoft is reshaping its Copilot assistant as a series of tools to create, tap into, and act on insights at individual and organizational levels. The Microsoft 365 roadmap lists hundreds of specialized AI tools under development to automate work for functions such as HR and accounting. The company is also developing smaller AI models to carry out specific functions.

    Google is going the opposite way, with its Gemini large language model at the heart of Workspace. Google offers tools that include Gems for workers to create simple custom agents that automate tasks such as customer service, and Agentspace in Google Cloud to build more complex custom agents for collaboration and workflow management. At the recent Google I/O developer conference, the company added real-time speech translation to Google Meet.

    “For both, the goal is to bring usable and practical productivity and efficiency capabilities to work tools,” said Liz Miller, vice president and principal analyst at Constellation Research.

    But the differing AI agent strategies are heavily rooted in each company’s philosophical approaches to productivity. Although Microsoft has long encouraged customers to move from its traditional “perpetual-license” Office suite to the Microsoft 365 subscription-based model, M365 notably retains the familiar desktop apps. Google Workspace, on the other hand, has always been cloud-based.

    Microsoft users are typically a bit more tethered to traditional enterprise work styles, while Google has always been the “cloud-first darling for smaller organizations that still crave real-time collaboration,” Miller said.

    When it comes to the generative AI models being integrated into the two office suites, “Google’s Gemini models are beating out the models being deployed by Microsoft,” Miller said. “But as Microsoft expands its model ‘inventory’ in use across M365, this could change.”

    Microsoft has an advantage, as many desktop users live in Outlook or Word. The intelligence Copilot can bring from CRM software is readily available, while that integration is more complex in the cloud-native Google Workspace.

    “Microsoft still has an edge in a foundational understanding of work and the capacity to extend Copilot connections across applications as expansive as the Office suite through to Dynamics, giving AI a greater opportunity to be present in the spaces and presentation layers where workers enjoy working,” Miller said.

    Microsoft’s Copilot Agents and Google’s Gems and Agentspace are in their early stages, but there have been positive developments, said J.P. Gownder, a vice president and principal analyst on Forrester’s Future of Work team.

    Microsoft recently adopted Google’s A2A protocol, which makes it easier for users of both productivity suites to collaborate and unlock value from stagnant data sitting on other platforms. “That should be a win for interoperability,” Gownder said.
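
    Conceptually, cross-vendor interoperability of this kind rests on agents describing themselves in a machine-readable way so that another vendor's agent can discover them and delegate work. The Python sketch below illustrates only that general discovery pattern; the well-known path and field names reflect my reading of early A2A documentation and are assumptions rather than the official schema, and the endpoint is hypothetical.

```python
# Illustrative sketch of agent discovery, the piece of cross-vendor
# interoperability most relevant here. The well-known path and field names
# are assumptions based on early A2A documentation, not the official schema.
import json
import urllib.request


def fetch_agent_card(base_url: str) -> dict:
    """Fetch a remote agent's self-description ('agent card') from a well-known path."""
    with urllib.request.urlopen(f"{base_url}/.well-known/agent.json") as resp:
        return json.load(resp)


def can_handle(card: dict, wanted_skill: str) -> bool:
    """Check whether the advertised skills include the capability we need."""
    return any(skill.get("id") == wanted_skill for skill in card.get("skills", []))


if __name__ == "__main__":
    # Hypothetical endpoint for a document-summarization agent hosted by another vendor.
    card = fetch_agent_card("https://agents.example.com/summarizer")
    if can_handle(card, "summarize-document"):
        print(f"Delegating to {card['name']} at {card['url']}")
```

    In practice the protocol covers more than discovery (task and message exchange, for example), but discovery is the step most relevant to the interoperability point above.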

    But most companies that are Microsoft shops have years or decades of digital assets that hold them back from considering Google, he said. For example, Excel macros, pivot tables, and customizations cannot be easily or automatically migrated to Google Sheets, he said.

    “As early as this market is, I don’t think it’s fair to rank either player — Microsoft or Google — as being the leader; both of them are constructing new ecosystems to support the growth of agentic AI,” Gownder said.

    Most Microsoft Office users have moved to M365, but AI is helping Google make inroads into larger organizations, especially among enterprises that are newer and less oriented toward legacy Microsoft products, said Jack Gold, principal analyst at J. Gold Associates.

    Technologies like A2A blur the line between on-premises and cloud productivity. As a result, “Google Workspace is no longer perceived as inferior, as it had been in the past,” Gold said.

    And for budget-constrained enterprises, the value of AI agent features is not the only consideration. “There is also the cost equation at work here, as Google seems to have a much more transparent cost structure than Microsoft with all of its user classes and discounts,” Gold said.

    Microsoft does not include Copilot in its M365 subscriptions, which vary in price depending on the type of customer. The Copilot business subscriptions range from $30 per user per month for M365 Copilot to $200 per month for 25,000 messages for Copilot Studio, which is also available under a pay-as-you-go model. Google has flat subscription pricing for Workspace, starting at $14 per user per month for business plans with Gemini included.
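
    To put those list prices in rough perspective, here is a minimal back-of-the-envelope sketch in Python. The seat count is an arbitrary assumption, the $30 figure is an add-on to an existing M365 subscription while the $14 Workspace figure is a full suite price, and real contracts vary with discounts and message volume, so the output is illustrative rather than a like-for-like comparison.

```python
# Back-of-the-envelope annual costs using the list prices quoted above.
# Assumptions: 500 seats (arbitrary), one Copilot Studio message pack, no
# discounts. Note that M365 Copilot is an add-on to an existing subscription,
# while the Workspace figure is a full suite price, so the totals are not a
# like-for-like comparison.

SEATS = 500

M365_COPILOT_PER_USER_MONTH = 30   # M365 Copilot add-on, per user per month
COPILOT_STUDIO_PACK_MONTH = 200    # Copilot Studio, per 25,000 messages per month
WORKSPACE_PER_USER_MONTH = 14      # Workspace business plan with Gemini included

annual_m365_copilot = SEATS * M365_COPILOT_PER_USER_MONTH * 12
annual_copilot_studio = COPILOT_STUDIO_PACK_MONTH * 12
annual_workspace = SEATS * WORKSPACE_PER_USER_MONTH * 12

print(f"M365 Copilot add-on, {SEATS} seats:   ${annual_m365_copilot:,}/year")
print(f"One Copilot Studio 25k-message pack:  ${annual_copilot_studio:,}/year")
print(f"Workspace with Gemini, {SEATS} seats: ${annual_workspace:,}/year")
```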
  • Real TikTokers are pretending to be Veo 3 AI creations for fun, attention

    The Turing test in reverse

    From music videos to "Are you a prompt?" stunts, "real" videos are presenting as AI

    Kyle Orland



    May 31, 2025 7:08 am

    Of course I'm an AI creation! Why would you even doubt it? Credit: Getty Images

    Since Google released its Veo 3 AI model last week, social media users have been having fun with its ability to quickly generate highly realistic eight-second clips complete with sound and lip-synced dialogue. TikTok's algorithm has been serving me plenty of Veo-generated videos featuring impossible challenges, fake news reports, and even surreal short narrative films, to name just a few popular archetypes.
    However, among all the AI-generated video experiments spreading around, I've also noticed a surprising counter-trend on my TikTok feed. Amid all the videos of Veo-generated avatars pretending to be real people, there are now also a bunch of videos of real people pretending to be Veo-generated avatars.
    “This has to be real. There’s no way it's AI.”
    I stumbled on this trend when the TikTok algorithm fed me this video topped with the extra-large caption "Google VEO 3 THIS IS 100% AI." As I watched and listened to the purported AI-generated band that appeared to be playing in the crowded corner of someone's living room, I read the caption containing the supposed prompt that had generated the clip: "a band of brothers with beards playing rock music in 6/8 with an accordion."

    @kongosmusic We are so cooked. This took 3 mins to generate. Simple prompt: “a band of brothers playing rock music in 6/8 with an accordion” ♬ original sound - KONGOS

    After a few seconds of taking those captions at face value, something started to feel a little off. After a few more seconds, I finally noticed the video was posted by Kongos, an indie band that you might recognize from their minor 2012 hit "Come With Me Now." And after a little digging, I discovered the band in the video was actually just Kongos, and the tune was a 9-year-old song that the band had dressed up as an AI creation to get attention.
    Here's the sad thing: It worked! Without the "Look what Veo 3 did!" hook, I might have quickly scrolled by this video before I took the time to listen to the (pretty good!) song. The novel AI angle made me stop just long enough to pay attention to a Kongos song for the first time in over a decade.

    Kongos isn't the only musical act trying to grab attention by claiming their real performances are AI creations. Darden Bela posted that Veo 3 had "created a realistic AI music video" over a clip from what is actually a 2-year-old music video with some unremarkable special effects. Rapper GameBoi Pat dressed up an 11-month-old song with a new TikTok clip captioned "Google's Veo 3 created a realistic sounding rapper... This has to be real. There's no way it's AI" (that last part is true, at least). I could go on, but you get the idea.

    @gameboi_pat This has got to be real. There’s no way it’s AI #google #veo3 #googleveo3 #AI #prompts #areweprompts? ♬ original sound - GameBoi_pat

    I know it's tough to get noticed on TikTok, and that creators will go to great lengths to gain attention from the fickle algorithm. Still, there's something more than a little off-putting about flesh-and-blood musicians pretending to be AI creations just to make social media users pause their scrolling for a few extra seconds before they catch on to the joke (or don't, based on some of the comments).
    The whole thing evokes last year's stunt where a couple of podcast hosts released a posthumous "AI-generated" George Carlin routine before admitting that it had been written by a human after legal threats started flying. As an attention-grabbing stunt, the conceit still works. You want AI-generated content? I can pretend to be that!

    Are we just prompts?
    Some of the most existentially troubling Veo-generated videos floating around TikTok these days center around a gag known as "the prompt theory." These clips focus on various AI-generated people reacting to the idea that they are "just prompts" with various levels of skepticism, fear, or even conspiratorial paranoia.
    On the other side of that gag, some humans are making joke videos playing off the idea that they're merely prompts. RedondoKid used the conceit in a basketball trick shot video, saying "of course I'm going to make this. This is AI, you put that I'm going to make this in the prompt." User thisisamurica thanked his faux prompters for putting him in "a world with such delicious food" before theatrically choking on a forkful of meat. And comedian Drake Cummings developed TikTok skits pretending that it was actually AI video prompts forcing him to indulge in vices like shots of alcohol or online gambling.

    @justdrakenaround Goolgle’s New A.I. Veo 3 is at it again!! When will the prompts end?! #veo3 #google #ai #aivideo #skit ♬ original sound - Drake Cummings

    Beyond the obvious jokes, though, I've also seen a growing trend of TikTok creators approaching friends or strangers and asking them to react to the idea that "we're all just prompts." The reactions run the gamut from "get the fuck away from me" to "I blame that [prompter], I now have to pay taxes" to solipsistic philosophical musings from convenience store employees.
    I'm loath to call this a full-blown TikTok trend based on a few stray examples. Still, these attempts to exploit the confusion between real and AI-generated video are interesting to see. As one commenter on an "Are you a prompt?" ambush video put it: "New trend: Do normal videos and write 'Google Veo 3' on top of the video."
    Which one is real?
    The best Veo-related TikTok engagement hack I've stumbled on so far, though, might be the videos that show multiple short clips and ask the viewer to decide which are real and which are fake. One video I stumbled on shows an increasing number of "Veo 3 Goth Girls" across four clips, challenging in the caption that "one of these videos is real... can you guess which one?" In another example, two similar sets of kids are shown hanging out in cars while the caption asks, "Are you able to identify which scene is real and which one is from veo3?"

    @spongibobbu2 One of these videos is real… can you guess which one? #veo3 ♬ original sound - Jett

    After watching both of these videos on loop a few times, I'm relatively (but not entirely) convinced that every single clip in them is a Veo creation. The fact that I watched these videos multiple times shows how effective the "Real or Veo" challenge framing is at grabbing my attention. Additionally, I'm still not 100 percent confident in my assessments, which is a testament to just how good Google's new model is at creating convincing videos.

    There are still some telltale signs for distinguishing a real video from a Veo creation, though. For one, Veo clips are still limited to just eight seconds, so any video that runs longer (without an apparent change in camera angle) is almost certainly not generated by Google's AI. Looking back at a creator's other videos can also provide some clues—if the same person was appearing in "normal" videos two weeks ago, it's unlikely they would be appearing in Veo creations suddenly.
    There's also a subtle but distinctive style to most Veo creations that can distinguish them from the kind of candid handheld smartphone videos that usually fill TikTok. The lighting in a Veo video tends to be too bright, the camera movements a bit too smooth, and the edges of people and objects a little too polished. After you watch enough "genuine" Veo creations, you can start to pick out the patterns.
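
    As a rough illustration, those telltale signs can be read as a screening checklist. The hypothetical Python sketch below simply encodes them; it is not a detector offered by Google or Ars Technica, and the stylistic cues (lighting, motion smoothness, edge polish) are represented as reviewer-supplied flags rather than actual video analysis.

```python
# Hypothetical screening checklist based on the telltale signs described above.
# Not a real detector: clip metadata and the stylistic flags are assumed to be
# supplied by a human reviewer.
from dataclasses import dataclass


@dataclass
class Clip:
    duration_seconds: float
    has_camera_cut: bool                 # an apparent change in camera angle mid-clip
    creator_has_older_real_videos: bool  # a back catalog of ordinary videos
    looks_overlit_and_oversmooth: bool   # reviewer judgment on lighting/motion/edges


def likely_veo(clip: Clip) -> bool:
    """Return True if the clip matches the rough profile of a Veo 3 creation."""
    # Veo 3 clips are capped at eight seconds, so a longer single shot is
    # almost certainly not Veo-generated.
    if clip.duration_seconds > 8 and not clip.has_camera_cut:
        return False
    # Older "normal" videos from the same creator point to a real person.
    if clip.creator_has_older_real_videos:
        return False
    # Otherwise fall back on the stylistic tells: too bright, too smooth, too polished.
    return clip.looks_overlit_and_oversmooth


if __name__ == "__main__":
    sample = Clip(duration_seconds=7.5, has_camera_cut=False,
                  creator_has_older_real_videos=False,
                  looks_overlit_and_oversmooth=True)
    print("Probably Veo-generated:", likely_veo(sample))
```
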
    Regardless, TikTokers trying to pass off real videos as fakes—even as a joke or engagement hack—is a recognition that video sites are now deep in the "deep doubt" era, where you have to be extra skeptical of even legitimate-looking video footage. And the mere existence of convincing AI fakes makes it easier than ever to claim real events captured on video didn't really happen, a problem that political scientists call the liar's dividend. We saw this when then-candidate Trump accused Democratic nominee Kamala Harris of "A.I.'d" crowds in real photos of her Detroit airport rally.
    For now, TikTokers of all stripes are having fun playing with that idea to gain social media attention. In the long term, though, the implications for discerning truth from reality are more troubling.

    Kyle Orland
    Senior Gaming Editor

    Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from University of Maryland. He once wrote a whole book about Minesweeper.

    13 Comments
    #real #tiktokers #are #pretending #veo
    Real TikTokers are pretending to be Veo 3 AI creations for fun, attention
    The turing test in reverse Real TikTokers are pretending to be Veo 3 AI creations for fun, attention From music videos to "Are you a prompt?" stunts, "real" videos are presenting as AI Kyle Orland – May 31, 2025 7:08 am | 13 Of course I'm an AI creation! Why would you even doubt it? Credit: Getty Images Of course I'm an AI creation! Why would you even doubt it? Credit: Getty Images Story text Size Small Standard Large Width * Standard Wide Links Standard Orange * Subscribers only   Learn more Since Google released its Veo 3 AI model last week, social media users have been having fun with its ability to quickly generate highly realistic eight-second clips complete with sound and lip-synced dialogue. TikTok's algorithm has been serving me plenty of Veo-generated videos featuring impossible challenges, fake news reports, and even surreal short narrative films, to name just a few popular archetypes. However, among all the AI-generated video experiments spreading around, I've also noticed a surprising counter-trend on my TikTok feed. Amid all the videos of Veo-generated avatars pretending to be real people, there are now also a bunch of videos of real people pretending to be Veo-generated avatars. “This has to be real. There’s no way it's AI.” I stumbled on this trend when the TikTok algorithm fed me this video topped with the extra-large caption "Google VEO 3 THIS IS 100% AI." As I watched and listened to the purported AI-generated band that appeared to be playing in the crowded corner of someone's living room, I read the caption containing the supposed prompt that had generated the clip: "a band of brothers with beards playing rock music in 6/8 with an accordion." @kongosmusicWe are so cooked. This took 3 mins to generate. Simple prompt: “a band of brothers playing rock music in 6/8 with an accordion”♬ original sound - KONGOS After a few seconds of taking those captions at face value, something started to feel a little off. After a few more seconds, I finally noticed the video was posted by Kongos, an indie band that you might recognize from their minor 2012 hit "Come With Me Now." And after a little digging, I discovered the band in the video was actually just Kongos, and the tune was a 9-year-old song that the band had dressed up as an AI creation to get attention. Here's the sad thing: It worked! Without the "Look what Veo 3 did!" hook, I might have quickly scrolled by this video before I took the time to listen to thesong. The novel AI angle made me stop just long enough to pay attention to a Kongos song for the first time in over a decade. Kongos isn't the only musical act trying to grab attention by claiming their real performances are AI creations. Darden Bela posted that Veo 3 had "created a realistic AI music video" over a clip from what is actually a 2-year-old music video with some unremarkable special effects. Rapper GameBoi Pat dressed up an 11-month-old song with a new TikTok clip captioned "Google's Veo 3 created a realistic sounding rapper... This has to be real. There's no way it's AI". I could go on, but you get the idea. @gameboi_pat This has got to be real. There’s no way it’s AI 😩 #google #veo3 #googleveo3 #AI #prompts #areweprompts? ♬ original sound - GameBoi_pat I know it's tough to get noticed on TikTok, and that creators will go to great lengths to gain attention from the fickle algorithm. 
Still, there's something more than a little off-putting about flesh-and-blood musicians pretending to be AI creations just to make social media users pause their scrolling for a few extra seconds before they catch on to the joke. The whole thing evokes last year's stunt where a couple of podcast hosts released a posthumous "AI-generated" George Carlin routine before admitting that it had been written by a human after legal threats started flying. As an attention-grabbing stunt, the conceit still works. You want AI-generated content? I can pretend to be that! Are we just prompts? Some of the most existentially troubling Veo-generated videos floating around TikTok these days center around a gag known as "the prompt theory." These clips focus on various AI-generated people reacting to the idea that they are "just prompts" with various levels of skepticism, fear, or even conspiratorial paranoia. On the other side of that gag, some humans are making joke videos playing off the idea that they're merely prompts. RedondoKid used the conceit in a basketball trick shot video, saying "of course I'm going to make this. This is AI, you put that I'm going to make this in the prompt." User thisisamurica thanked his faux prompters for putting him in "a world with such delicious food" before theatrically choking on a forkful of meat. And comedian Drake Cummings developed TikTok skits pretending that it was actually AI video prompts forcing him to indulge in vices like shots of alcohol or online gambling. @justdrakenaround Goolgle’s New A.I. Veo 3 is at it again!! When will the prompts end?! #veo3 #google #ai #aivideo #skit ♬ original sound - Drake Cummings Beyond the obvious jokes, though, I've also seen a growing trend of TikTok creators approaching friends or strangers and asking them to react to the idea that "we're all just prompts." The reactions run the gamut from "get the fuck away from me" to "I blame that, I now have to pay taxes" to solipsistic philosophical musings from convenience store employees. I'm loath to call this a full-blown TikTok trend based on a few stray examples. Still, these attempts to exploit the confusion between real and AI-generated video are interesting to see. As one commenter on an "Are you a prompt?" ambush video put it: "New trend: Do normal videos and write 'Google Veo 3' on top of the video." Which one is real? The best Veo-related TikTok engagement hack I've stumbled on so far, though, might be the videos that show multiple short clips and ask the viewer to decide which are real and which are fake. One video I stumbled on shows an increasing number of "Veo 3 Goth Girls" across four clips, challenging in the caption that "one of these videos is real... can you guess which one?" In another example, two similar sets of kids are shown hanging out in cars while the caption asks, "Are you able to identify which scene is real and which one is from veo3?" @spongibobbu2 One of these videos is real… can you guess which one? #veo3 ♬ original sound - Jett After watching both of these videos on loop a few times, I'm relativelyconvinced that every single clip in them is a Veo creation. The fact that I watched these videos multiple times shows how effective the "Real or Veo" challenge framing is at grabbing my attention. Additionally, I'm still not 100 percent confident in my assessments, which is a testament to just how good Google's new model is at creating convincing videos. There are still some telltale signs for distinguishing a real video from a Veo creation, though. 
For one, Veo clips are still limited to just eight seconds, so any video that runs longeris almost certainly not generated by Google's AI. Looking back at a creator's other videos can also provide some clues—if the same person was appearing in "normal" videos two weeks ago, it's unlikely they would be appearing in Veo creations suddenly. There's also a subtle but distinctive style to most Veo creations that can distinguish them from the kind of candid handheld smartphone videos that usually fill TikTok. The lighting in a Veo video tends to be too bright, the camera movements a bit too smooth, and the edges of people and objects a little too polished. After you watch enough "genuine" Veo creations, you can start to pick out the patterns. Regardless, TikTokers trying to pass off real videos as fakes—even as a joke or engagement hack—is a recognition that video sites are now deep in the "deep doubt" era, where you have to be extra skeptical of even legitimate-looking video footage. And the mere existence of convincing AI fakes makes it easier than ever to claim real events captured on video didn't really happen, a problem that political scientists call the liar's dividend. We saw this when then-candidate Trump accused Democratic nominee Kamala Harris of "A.I.'d" crowds in real photos of her Detroit airport rally. For now, TikTokers of all stripes are having fun playing with that idea to gain social media attention. In the long term, though, the implications for discerning truth from reality are more troubling. Kyle Orland Senior Gaming Editor Kyle Orland Senior Gaming Editor Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from University of Maryland. He once wrote a whole book about Minesweeper. 13 Comments #real #tiktokers #are #pretending #veo
    ARSTECHNICA.COM
    Real TikTokers are pretending to be Veo 3 AI creations for fun, attention
    The turing test in reverse Real TikTokers are pretending to be Veo 3 AI creations for fun, attention From music videos to "Are you a prompt?" stunts, "real" videos are presenting as AI Kyle Orland – May 31, 2025 7:08 am | 13 Of course I'm an AI creation! Why would you even doubt it? Credit: Getty Images Of course I'm an AI creation! Why would you even doubt it? Credit: Getty Images Story text Size Small Standard Large Width * Standard Wide Links Standard Orange * Subscribers only   Learn more Since Google released its Veo 3 AI model last week, social media users have been having fun with its ability to quickly generate highly realistic eight-second clips complete with sound and lip-synced dialogue. TikTok's algorithm has been serving me plenty of Veo-generated videos featuring impossible challenges, fake news reports, and even surreal short narrative films, to name just a few popular archetypes. However, among all the AI-generated video experiments spreading around, I've also noticed a surprising counter-trend on my TikTok feed. Amid all the videos of Veo-generated avatars pretending to be real people, there are now also a bunch of videos of real people pretending to be Veo-generated avatars. “This has to be real. There’s no way it's AI.” I stumbled on this trend when the TikTok algorithm fed me this video topped with the extra-large caption "Google VEO 3 THIS IS 100% AI." As I watched and listened to the purported AI-generated band that appeared to be playing in the crowded corner of someone's living room, I read the caption containing the supposed prompt that had generated the clip: "a band of brothers with beards playing rock music in 6/8 with an accordion." @kongosmusicWe are so cooked. This took 3 mins to generate. Simple prompt: “a band of brothers playing rock music in 6/8 with an accordion”♬ original sound - KONGOS After a few seconds of taking those captions at face value, something started to feel a little off. After a few more seconds, I finally noticed the video was posted by Kongos, an indie band that you might recognize from their minor 2012 hit "Come With Me Now." And after a little digging, I discovered the band in the video was actually just Kongos, and the tune was a 9-year-old song that the band had dressed up as an AI creation to get attention. Here's the sad thing: It worked! Without the "Look what Veo 3 did!" hook, I might have quickly scrolled by this video before I took the time to listen to the (pretty good!) song. The novel AI angle made me stop just long enough to pay attention to a Kongos song for the first time in over a decade. Kongos isn't the only musical act trying to grab attention by claiming their real performances are AI creations. Darden Bela posted that Veo 3 had "created a realistic AI music video" over a clip from what is actually a 2-year-old music video with some unremarkable special effects. Rapper GameBoi Pat dressed up an 11-month-old song with a new TikTok clip captioned "Google's Veo 3 created a realistic sounding rapper... This has to be real. There's no way it's AI" (that last part is true, at least). I could go on, but you get the idea. @gameboi_pat This has got to be real. There’s no way it’s AI 😩 #google #veo3 #googleveo3 #AI #prompts #areweprompts? ♬ original sound - GameBoi_pat I know it's tough to get noticed on TikTok, and that creators will go to great lengths to gain attention from the fickle algorithm. 
    Still, there's something more than a little off-putting about flesh-and-blood musicians pretending to be AI creations just to make social media users pause their scrolling for a few extra seconds before they catch on to the joke (or don't, based on some of the comments). The whole thing evokes last year's stunt where a couple of podcast hosts released a posthumous "AI-generated" George Carlin routine before admitting that it had been written by a human after legal threats started flying. As an attention-grabbing stunt, the conceit still works. You want AI-generated content? I can pretend to be that!

    Are we just prompts?

    Some of the most existentially troubling Veo-generated videos floating around TikTok these days center around a gag known as "the prompt theory." These clips focus on various AI-generated people reacting to the idea that they are "just prompts" with various levels of skepticism, fear, or even conspiratorial paranoia.

    On the other side of that gag, some humans are making joke videos playing off the idea that they're merely prompts. RedondoKid used the conceit in a basketball trick shot video, saying "of course I'm going to make this. This is AI, you put that I'm going to make this in the prompt." User thisisamurica thanked his faux prompters for putting him in "a world with such delicious food" before theatrically choking on a forkful of meat. And comedian Drake Cummings developed TikTok skits pretending that it was actually AI video prompts forcing him to indulge in vices like shots of alcohol or online gambling ("Goolgle's [sic] New A.I. Veo 3 is at it again!! When will the prompts end?!" Cummings jokes in the caption).

    @justdrakenaround: Goolgle's New A.I. Veo 3 is at it again!! When will the prompts end?! #veo3 #google #ai #aivideo #skit ♬ original sound - Drake Cummings

    Beyond the obvious jokes, though, I've also seen a growing trend of TikTok creators approaching friends or strangers and asking them to react to the idea that "we're all just prompts." The reactions run the gamut from "get the fuck away from me" to "I blame that [prompter], I now have to pay taxes" to solipsistic philosophical musings from convenience store employees.

    I'm loath to call this a full-blown TikTok trend based on a few stray examples. Still, these attempts to exploit the confusion between real and AI-generated video are interesting to see. As one commenter on an "Are you a prompt?" ambush video put it: "New trend: Do normal videos and write 'Google Veo 3' on top of the video."

    Which one is real?

    The best Veo-related TikTok engagement hack I've stumbled on so far, though, might be the videos that show multiple short clips and ask the viewer to decide which are real and which are fake. One video I stumbled on shows an increasing number of "Veo 3 Goth Girls" across four clips, challenging in the caption that "one of these videos is real... can you guess which one?" In another example, two similar sets of kids are shown hanging out in cars while the caption asks, "Are you able to identify which scene is real and which one is from veo3?"

    @spongibobbu2: One of these videos is real… can you guess which one? #veo3 ♬ original sound - Jett

    After watching both of these videos on loop a few times, I'm relatively (but not entirely) convinced that every single clip in them is a Veo creation. The fact that I watched these videos multiple times shows how effective the "Real or Veo" challenge framing is at grabbing my attention.
    Additionally, I'm still not 100 percent confident in my assessments, which is a testament to just how good Google's new model is at creating convincing videos.

    There are still some telltale signs for distinguishing a real video from a Veo creation, though. For one, Veo clips are still limited to just eight seconds, so any video that runs longer (without an apparent change in camera angle) is almost certainly not generated by Google's AI. Looking back at a creator's other videos can also provide some clues—if the same person was appearing in "normal" videos two weeks ago, it's unlikely they would suddenly start showing up in Veo creations.

    There's also a subtle but distinctive style to most Veo creations that can distinguish them from the kind of candid handheld smartphone videos that usually fill TikTok. The lighting in a Veo video tends to be too bright, the camera movements a bit too smooth, and the edges of people and objects a little too polished. After you watch enough "genuine" Veo creations, you can start to pick out the patterns.

    Regardless, TikTokers trying to pass off real videos as fakes—even as a joke or engagement hack—is a recognition that video sites are now deep in the "deep doubt" era, where you have to be extra skeptical of even legitimate-looking video footage. And the mere existence of convincing AI fakes makes it easier than ever to claim that real events captured on video didn't really happen, a problem political scientists call the liar's dividend. We saw this when then-candidate Trump accused Democratic nominee Kamala Harris of having "A.I.'d" the crowds in real photos of her Detroit airport rally.

    For now, TikTokers of all stripes are having fun playing with that idea to gain social media attention. In the long term, though, the implications for discerning truth from reality are more troubling.

    Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.
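    As a purely illustrative aside, the "telltale signs" above read like a checklist, so here is a minimal Python sketch of those heuristics. Only the eight-second cap, the single-continuous-shot caveat, and the "creator has recent normal videos" signal come from the article; the Video and Creator classes, their field names, and the could_be_veo function are hypothetical stand-ins, not any real TikTok or Google API.

```python
# Hypothetical sketch: the detection heuristics described above, as code.
# The dataclasses and function name are invented for illustration only.
from dataclasses import dataclass

VEO_MAX_CLIP_SECONDS = 8  # Veo 3 clips are currently capped at eight seconds


@dataclass
class Video:
    duration_seconds: float
    single_continuous_shot: bool  # True if there is no apparent change in camera angle


@dataclass
class Creator:
    has_recent_normal_videos: bool  # same person appeared in ordinary videos recently


def could_be_veo(video: Video, creator: Creator) -> bool:
    """Rough screen: False means the clip is almost certainly not a single Veo creation."""
    # A continuous shot longer than eight seconds can't be one Veo clip.
    if video.single_continuous_shot and video.duration_seconds > VEO_MAX_CLIP_SECONDS:
        return False
    # A creator with a recent history of ordinary videos is unlikely to have
    # suddenly become an AI avatar.
    if creator.has_recent_normal_videos:
        return False
    # Otherwise the clip *could* be Veo; the softer signals (over-bright lighting,
    # too-smooth camera moves, over-polished edges) still need a human eye.
    return True


# Example: a 45-second continuous vlog from someone with a normal posting history.
print(could_be_veo(Video(45.0, True), Creator(True)))  # -> False
```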
  • Decade of Design Showcased in Colony’s Exhibition The Independents

    The Independents marks Colony’s 10th anniversary as a platform where founder Jean Lin’s personal vision and marketplace viability find rare equilibrium. The exhibition brings together 24 design studios from Colony’s orbit, each responding to what independence in design practice means to them. The resulting collection serves as both retrospective and manifesto – a declaration that independence in design isn’t merely aesthetic preference but philosophical stance.

    A paper cord chair with a single walnut set along a corner hinge sits in the corner of Lin’s Tribeca gallery space. To the casual observer, it might register simply as a thoughtful detail of material juxtaposition. But Chen Chen & Kai Williams’ Walnut Corner Chair carries cultural memory within its form. The designers drew inspiration from the Chinese tradition of passing walnuts from one generation to the next, objects worn smooth by the hands of ancestors. This object-as-inheritance becomes a fitting metaphor for what Colony has cultivated over its decade of existence.

    “I’m very proud of the community of independent designers that we have built at Colony over the past decade,” says Colony founder Lin. “The Independents exhibition encapsulates my very own ‘why.’ My belief in the independent spirit is limitless, and so is my awe.”

    The exhibition reveals how Colony’s cooperative model has evolved beyond representation to become an incubator. Studios emerging from the gallery’s Designers’ Residency program – including Ember Studio, Thomas Yang Studio, and the freshly minted Studio BC Joshua from the 2025 class – demonstrate how Colony functions as both launch pad and ongoing support system.

    Materiality serves as a throughline connecting past and present. Current Colony designers like Hiroko Takeda, Moving Mountains, and SSS Atelier present new work that extends their material investigations. Takeda’s textiles in particular showcase how technical mastery creates spaces for expression – the constraints of the loom enabling greater creative freedom.

    For more information on The Independents, visit Colony at goodcolony.com.
    Photography by Brooke Holm.