• Game On With GeForce NOW, the Membership That Keeps on Delivering

    This GFN Thursday rolls out a new reward and games for GeForce NOW members. Whether hunting for hot new releases or rediscovering timeless classics, members can always find more ways to play, games to stream and perks to enjoy.
    Gamers can score major discounts on the titles they’ve been eyeing — perfect for streaming in the cloud — during the Steam Summer Sale, running until Thursday, July 10, at 10 a.m. PT.
    This week also brings unforgettable adventures to the cloud: We Happy Few and Broken Age are part of the five additions to the GeForce NOW library this week.
    The fun doesn’t stop there. A new in-game reward for The Elder Scrolls Online is now available for members to claim.
    And SteelSeries has launched a new mobile controller that transforms phones into cloud gaming devices with GeForce NOW. Add it to the roster of on-the-go gaming devices — including the recently launched GeForce NOW app on Steam Deck for seamless 4K streaming.
    Scroll Into Power
    GeForce NOW Premium members receive exclusive 24-hour early access to a new mythical reward in The Elder Scrolls Online — Bethesda’s award-winning role-playing game — before it opens to all members. Sharpen the sword, ready the staff and chase glory across the vast, immersive world of Tamriel.
    Fortune favors the bold.
    Claim the mythical Grand Gold Coast Experience Scrolls reward, a rare item that grants a bonus of 150% Experience Points from all sources for one hour. The scroll’s effect pauses while players are offline and resumes upon return, ensuring every minute counts. Whether tackling dungeon runs, completing epic quests or leveling a new character, the scrolls provide a powerful edge. Claim the reward, harness its power and scroll into the next adventure.
    Members who’ve opted into the GeForce NOW Rewards program can check their emails for redemption instructions. The offer runs through Saturday, July 26, while supplies last. Don’t miss this opportunity to become a legend in Tamriel.
    Steam Up Summer
    The Steam Summer Sale is in full swing. Snag games at discounted prices and stream them instantly from the cloud — no downloads, no waiting, just pure gaming bliss.
    Treat yourself.
    Check out the “Steam Summer Sale” row in the GeForce NOW app to find deals on the next adventure. With GeForce NOW, gaming favorites are always just a click away.
    While picking up discounted games, don’t miss the chance to get a GeForce NOW six-month Performance membership at 40% off. This is also the last opportunity to take advantage of the Performance Day Pass sale — which lets gamers access cloud gaming for 24 hours — before it ends Friday, June 27.
    Find Adventure
    Two distinct worlds — where secrets simmer and imagination runs wild — are streaming onto the cloud this week.
    Keep calm and blend in.
    Step into the surreal, retro-futuristic streets of We Happy Few, where a society obsessed with happiness hides its secrets behind a mask of forced cheer and a haze of “Joy.” This darkly whimsical adventure invites players to blend in, break out and uncover the truth lurking beneath the surface of Wellington Wells.
    Two worlds, one wild destiny.
    Broken Age spins a charming, hand-painted tale of two teenagers leading parallel lives in worlds at once strange and familiar. One of the teens yearns to escape a stifling spaceship, and the other is destined to challenge ancient traditions. With witty dialogue and heartfelt moments, Broken Age is a storybook come to life, brimming with quirky characters and clever puzzles.
    Each of these unforgettable adventures brings its own flavor — be it dark satire, whimsical wonder or pulse-pounding suspense — offering a taste of gaming at its imaginative peaks. Stream these captivating worlds straight from the cloud and enjoy seamless gameplay, no downloads or high-end hardware required.
    An Ultimate Controller
    Elevated gaming.
    Get ready for the SteelSeries Nimbus Cloud, a new dual-mode cloud controller. Paired with GeForce NOW, it reaches new heights.
    Designed for versatility and comfort, and crafted specifically for cloud gaming, the SteelSeries Nimbus Cloud effortlessly shifts from a mobile device controller to a full-sized wireless controller, delivering top-notch performance and broad compatibility across devices.
    The Nimbus Cloud enables gamers to play wherever they are, as it easily adapts to fit iPhones and Android phones. Or collapse and connect the controller via Bluetooth to a gaming rig or smart TV. Transform any space into a personal gaming station with GeForce NOW and the Nimbus Cloud, part of the list of recommended products for an elevated cloud gaming experience.
    Gaming Never Sleeps
    “System Shock 2” — now with 100% more existential dread.
    System Shock 2: 25th Anniversary Remaster is an overhaul of the acclaimed sci-fi horror classic, rebuilt by Nightdive Studios with enhanced visuals, refined gameplay and features such as cross-play co-op multiplayer. Face the sinister AI SHODAN and her mutant army aboard the starship Von Braun as a cybernetically enhanced soldier with upgradable skills, powerful weapons and psionic abilities. Stream the title from the cloud with GeForce NOW for ultimate flexibility and performance.
    Look for the following games available to stream in the cloud this week:

    System Shock 2: 25th Anniversary Remaster (new release on Steam, June 26)
    Broken Age (Steam)
    Easy Red 2 (Steam)
    Sandwich Simulator (Steam)
    We Happy Few (Steam)

    What are you planning to play this weekend? Let us know on X or in the comments below.

    The official GFN summer bucket list:
    Play anywhere. Stream on every screen you own. Finally crush that backlog. Skip every single download bar.
    Drop the emoji for the one you’re tackling right now.
    — NVIDIA GeForce NOW (@NVIDIAGFN), June 25, 2025
    BLOGS.NVIDIA.COM
  • Four science-based rules that will make your conversations flow

    One of the four pillars of good conversation is levity. You needn’t be a comedian, but you can have some fun. (Tetra Images, LLC/Alamy)
    Conversation lies at the heart of our relationships – yet many of us find it surprisingly hard to talk to others. We may feel anxious at the thought of making small talk with strangers and struggle to connect with the people who are closest to us. If that sounds familiar, Alison Wood Brooks hopes to help. She is a professor at Harvard Business School, where she teaches an oversubscribed course called “TALK: How to talk gooder in business and life”, and the author of a new book, Talk: The science of conversation and the art of being ourselves. Both offer four key principles for more meaningful exchanges. Conversations are inherently unpredictable, says Wood Brooks, but they follow certain rules – and knowing their architecture makes us more comfortable with what is outside of our control. New Scientist asked her about the best ways to apply this research to our own chats.
    David Robson: Talking about talking feels quite meta. Do you ever find yourself critiquing your own performance?
    Alison Wood Brooks: There are so many levels of “meta-ness”. I have often felt like I’m floating over the room, watching conversations unfold, even as I’m involved in them myself. I teach a course at Harvard, and my students all get to experience this feeling as well. There can be an uncomfortable period of hypervigilance, but I hope that dissipates over time as they develop better habits. There is a famous quote from Charlie Parker, who was a jazz saxophonist. He said something like, “Practise, practise, practise, and then when you get on stage, let it all go and just wail.” I think that’s my approach to conversation. Even when you’re hyper-aware of conversation dynamics, you have to remember the true delight of being with another human mind, and never lose the magic of being together. Think ahead, but once you’re talking, let it all go and just wail.

    Reading your book, I learned that a good way to enliven a conversation is to ask someone why they are passionate about what they do. So, where does your passion for conversation come from?
    I have two answers to this question. One is professional. Early in my professorship at Harvard, I had been studying emotions by exploring how people talk about their feelings and the balance between what we feel inside and how we express that to others. And I realised I just had this deep, profound interest in figuring out how people talk to each other about everything, not just their feelings. We now have scientific tools that allow us to capture conversations and analyse them at large scale. Natural language processing, machine learning, the advent of AI – all this allows us to take huge swathes of transcript data and process it much more efficiently.

    The personal answer is that I’m an identical twin, and I spent my whole life, from the moment I opened my newborn eyes, existing next to a person who’s an exact copy of myself. It was like observing myself at very close range, interacting with the world, interacting with other people. I could see when she said and did things well, and I could try to do that myself. And I saw when her jokes failed, or she stumbled over her words – I tried to avoid those mistakes. It was a very fortunate form of feedback that not a lot of people get. And then, as a twin, you’ve got this person sharing a bedroom, sharing all your clothes, going to all the same parties and playing on the same sports teams, so we were just constantly in conversation with each other. You reached this level of shared reality that is so incredible, and I’ve spent the rest of my life trying to help other people get there in their relationships, too.
    “TALK” cleverly captures your framework for better conversations: topics, asking, levity and kindness. Let’s start at the beginning. How should we decide what to talk about?
    My first piece of advice is to prepare. Some people do this naturally. They already think about the things that they should talk about with somebody before they see them. They should lean into this habit. Some of my students, however, think it’s crazy. They think preparation will make the conversation seem rigid and forced and overly scripted. But just because you’ve thought ahead about what you might talk about doesn’t mean you have to talk about those things once the conversation is underway. It does mean, however, that you always have an idea waiting for you when you’re not sure what to talk about next. Having just one topic in your back pocket can help you in those anxiety-ridden moments. It makes things more fluent, which is important for establishing a connection. Choosing a topic is not only important at the start of a conversation. We’re constantly making decisions about whether we should stay on one subject, drift to something else or totally shift gears and go somewhere wildly different.
    Sometimes the topic of conversation is obvious. Even then, knowing when to switch to a new one can be tricky. (Martin Parr/Magnum Photos)
    What’s your advice when making these decisions?
    There are three very clear signs that suggest that it’s time to switch topics. The first is longer mutual pauses. The second is more uncomfortable laughter, which we use to fill the space that we would usually fill excitedly with good content. And the third sign is redundancy. Once you start repeating things that have already been said on the topic, it’s a sign that you should move to something else.
    After an average conversation, most people feel like they’ve covered the right number of topics. But if you ask people after conversations that didn’t go well, they’ll more often say that they didn’t talk about enough things, rather than that they talked about too many things. This suggests that a common mistake is lingering too long on a topic after you’ve squeezed all the juice out of it.
    The second element of TALK is asking questions. I think a lot of us have heard the advice to ask more questions, yet many people don’t apply it. Why do you think that is?
    Many years of research have shown that the human mind is remarkably egocentric. Often, we are so focused on our own perspective that we forget to even ask someone else to share what’s in their mind. Another reason is fear. You’re interested in the other person, and you know you should ask them questions, but you’re afraid of being too intrusive, or that you will reveal your own incompetence, because you feel you should know the answer already.

    What kinds of questions should we be asking – and avoiding?
    In the book, I talk about the power of follow-up questions that build on anything that your partner has just said. It shows that you heard them, that you care and that you want to know more. Even one follow-up question can springboard us away from shallow talk into something deeper and more meaningful.
    There are, however, some bad patterns of question asking, such as “boomerasking”. Michael Yeomans and I have a recent paper about this, and oh my gosh, it’s been such fun to study. It’s a play on the word boomerang: it comes back to the person who threw it. If I ask you what you had for breakfast, and you tell me you had Special K and banana, and then I say, “Well, let me tell you about my breakfast, because, boy, was it delicious” – that’s boomerasking. Sometimes it’s a thinly veiled way of bragging or complaining, but sometimes I think people are genuinely interested to hear from their partner, but then the partner’s answer reminds them so much of their own life that they can’t help but start sharing their perspective. In our research, we have found that this makes your partner feel like you weren’t interested in their perspective, so it seems very insincere. Sharing your own perspective is important. It’s okay at some point to bring the conversation back to yourself. But don’t do it so soon that it makes your partner feel like you didn’t hear their answer or care about it.
    Research by Alison Wood Brooks includes a recent study on “boomerasking”, a pitfall you should avoid to make conversations flow. (Janelle Bruno)
    What are the benefits of levity?
    When we think of conversations that haven’t gone well, we often think of moments of hostility, anger or disagreement, but a quiet killer of conversation is boredom. Levity is the antidote. These small moments of sparkle or fizz can pull us back in and make us feel engaged with each other again.
    Our research has shown that we give status and respect to people who make us feel good, so much so that in a group of people, a person who can land even one appropriate joke is more likely to be voted as the leader. And the joke doesn’t even need to be very funny! It’s the fact that they were confident enough to try it and competent enough to read the room.
    Do you have any practical steps that people can apply to generate levity, even if they’re not a natural comedian?
    Levity is not just about being funny. In fact, aiming to be a comedian is not the right goal. When we watch stand-up on Netflix, comedians have rehearsed those jokes and honed them and practised them for a long time, and they’re delivering them in a monologue to an audience. It’s a completely different task from a live conversation. In real dialogue, what everybody is looking for is to feel engaged, and that doesn’t require particularly funny jokes or elaborate stories. When you see opportunities to make it fun or lighten the mood, that’s what you need to grab. It can come through a change to a new, fresh topic, or calling back to things that you talked about earlier in the conversation or earlier in your relationship. These callbacks – which sometimes do refer to something funny – are such a nice way of showing that you’ve listened and remembered. A levity move could also involve giving sincere compliments to other people. When you think nice things, when you admire someone, make sure you say it out loud.

    This brings us to the last element of TALK: kindness. Why do we so often fail to be as kind as we would like?
    Wobbles in kindness often come back to our egocentrism. Research shows that we underestimate how much other people’s perspectives differ from our own, and we forget that we have the tools to ask other people directly in conversation for their perspective. Being a kinder conversationalist is about trying to focus on your partner’s perspective, figuring out what they need and helping them to get it.
    Finally, what is your number one tip for readers to have a better conversation the next time they speak to someone?
    Every conversation is surprisingly tricky and complex. When things don’t go perfectly, give yourself and others more grace. There will be trips and stumbles and then a little grace can go very, very far.
    Topics:
    #four #sciencebased #rules #that #will
    Four science-based rules that will make your conversations flow
    One of the four pillars of good conversation is levity. You needn’t be a comedian, you can but have some funTetra Images, LLC/Alamy Conversation lies at the heart of our relationships – yet many of us find it surprisingly hard to talk to others. We may feel anxious at the thought of making small talk with strangers and struggle to connect with the people who are closest to us. If that sounds familiar, Alison Wood Brooks hopes to help. She is a professor at Harvard Business School, where she teaches an oversubscribed course called “TALK: How to talk gooder in business and life”, and the author of a new book, Talk: The science of conversation and the art of being ourselves. Both offer four key principles for more meaningful exchanges. Conversations are inherently unpredictable, says Wood Brooks, but they follow certain rules – and knowing their architecture makes us more comfortable with what is outside of our control. New Scientist asked her about the best ways to apply this research to our own chats. David Robson: Talking about talking feels quite meta. Do you ever find yourself critiquing your own performance? Alison Wood Brooks: There are so many levels of “meta-ness”. I have often felt like I’m floating over the room, watching conversations unfold, even as I’m involved in them myself. I teach a course at Harvard, andall get to experience this feeling as well. There can be an uncomfortable period of hypervigilance, but I hope that dissipates over time as they develop better habits. There is a famous quote from Charlie Parker, who was a jazz saxophonist. He said something like, “Practise, practise, practise, and then when you get on stage, let it all go and just wail.” I think that’s my approach to conversation. Even when you’re hyper-aware of conversation dynamics, you have to remember the true delight of being with another human mind, and never lose the magic of being together. Think ahead, but once you’re talking, let it all go and just wail. 
Reading your book, I learned that a good way to enliven a conversation is to ask someone why they are passionate about what they do. So, where does your passion for conversation come from? I have two answers to this question. One is professional. Early in my professorship at Harvard, I had been studying emotions by exploring how people talk about their feelings and the balance between what we feel inside and how we express that to others. And I realised I just had this deep, profound interest in figuring out how people talk to each other about everything, not just their feelings. We now have scientific tools that allow us to capture conversations and analyse them at large scale. Natural language processing, machine learning, the advent of AI – all this allows us to take huge swathes of transcript data and process it much more efficiently. Receive a weekly dose of discovery in your inbox. Sign up to newsletter The personal answer is that I’m an identical twin, and I spent my whole life, from the moment I opened my newborn eyes, existing next to a person who’s an exact copy of myself. It was like observing myself at very close range, interacting with the world, interacting with other people. I could see when she said and did things well, and I could try to do that myself. And I saw when her jokes failed, or she stumbled over her words – I tried to avoid those mistakes. It was a very fortunate form of feedback that not a lot of people get. And then, as a twin, you’ve got this person sharing a bedroom, sharing all your clothes, going to all the same parties and playing on the same sports teams, so we were just constantly in conversation with each other. You reached this level of shared reality that is so incredible, and I’ve spent the rest of my life trying to help other people get there in their relationships, too. “TALK” cleverly captures your framework for better conversations: topics, asking, levity and kindness. Let’s start at the beginning. 
How should we decide what to talk about? My first piece of advice is to prepare. Some people do this naturally. They already think about the things that they should talk about with somebody before they see them. They should lean into this habit. Some of my students, however, think it’s crazy. They think preparation will make the conversation seem rigid and forced and overly scripted. But just because you’ve thought ahead about what you might talk about doesn’t mean you have to talk about those things once the conversation is underway. It does mean, however, that you always have an idea waiting for you when you’re not sure what to talk about next. Having just one topic in your back pocket can help you in those anxiety-ridden moments. It makes things more fluent, which is important for establishing a connection. Choosing a topic is not only important at the start of a conversation. We’re constantly making decisions about whether we should stay on one subject, drift to something else or totally shift gears and go somewhere wildly different. Sometimes the topic of conversation is obvious. Even then, knowing when to switch to a new one can be tricky (Martin Parr/Magnum Photos). What’s your advice when making these decisions? There are three very clear signs that suggest that it’s time to switch topics. The first is longer mutual pauses. The second is more uncomfortable laughter, which we use to fill the space that we would usually fill excitedly with good content. And the third sign is redundancy. Once you start repeating things that have already been said on the topic, it’s a sign that you should move to something else. After an average conversation, most people feel like they’ve covered the right number of topics. But if you ask people after conversations that didn’t go well, they’ll more often say that they didn’t talk about enough things, rather than that they talked about too many things.
This suggests that a common mistake is lingering too long on a topic after you’ve squeezed all the juice out of it. The second element of TALK is asking questions. I think a lot of us have heard the advice to ask more questions, yet many people don’t apply it. Why do you think that is? Many years of research have shown that the human mind is remarkably egocentric. Often, we are so focused on our own perspective that we forget to even ask someone else to share what’s in their mind. Another reason is fear. You’re interested in the other person, and you know you should ask them questions, but you’re afraid of being too intrusive, or that you will reveal your own incompetence, because you feel you should know the answer already. What kinds of questions should we be asking – and avoiding? In the book, I talk about the power of follow-up questions that build on anything that your partner has just said. It shows that you heard them, that you care and that you want to know more. Even one follow-up question can springboard us away from shallow talk into something deeper and more meaningful. There are, however, some bad patterns of question asking, such as “boomerasking”. Michael Yeomans [at Imperial College London] and I have a recent paper about this, and oh my gosh, it’s been such fun to study. It’s a play on the word boomerang: it comes back to the person who threw it. If I ask you what you had for breakfast, and you tell me you had Special K and banana, and then I say, “Well, let me tell you about my breakfast, because, boy, was it delicious” – that’s boomerasking. Sometimes it’s a thinly veiled way of bragging or complaining, but sometimes I think people are genuinely interested to hear from their partner, but then the partner’s answer reminds them so much of their own life that they can’t help but start sharing their perspective. In our research, we have found that this makes your partner feel like you weren’t interested in their perspective, so it seems very insincere.
Sharing your own perspective is important. It’s okay at some point to bring the conversation back to yourself. But don’t do it so soon that it makes your partner feel like you didn’t hear their answer or care about it. Research by Alison Wood Brooks includes a recent study on “boomerasking”, a pitfall you should avoid to make conversations flow (Janelle Bruno). What are the benefits of levity? When we think of conversations that haven’t gone well, we often think of moments of hostility, anger or disagreement, but a quiet killer of conversation is boredom. Levity is the antidote. These small moments of sparkle or fizz can pull us back in and make us feel engaged with each other again. Our research has shown that we give status and respect to people who make us feel good, so much so that in a group of people, a person who can land even one appropriate joke is more likely to be voted as the leader. And the joke doesn’t even need to be very funny! It’s the fact that they were confident enough to try it and competent enough to read the room. Do you have any practical steps that people can apply to generate levity, even if they’re not a natural comedian? Levity is not just about being funny. In fact, aiming to be a comedian is not the right goal. When we watch stand-up on Netflix, comedians have rehearsed those jokes and honed them and practised them for a long time, and they’re delivering them in a monologue to an audience. It’s a completely different task from a live conversation. In real dialogue, what everybody is looking for is to feel engaged, and that doesn’t require particularly funny jokes or elaborate stories. When you see opportunities to make it fun or lighten the mood, that’s what you need to grab. It can come through a change to a new, fresh topic, or calling back to things that you talked about earlier in the conversation or earlier in your relationship.
These callbacks – which sometimes do refer to something funny – are such a nice way of showing that you’ve listened and remembered. A levity move could also involve giving sincere compliments to other people. When you think nice things, when you admire someone, make sure you say it out loud. This brings us to the last element of TALK: kindness. Why do we so often fail to be as kind as we would like? Wobbles in kindness often come back to our egocentrism. Research shows that we underestimate how much other people’s perspectives differ from our own, and we forget that we have the tools to ask other people directly in conversation for their perspective. Being a kinder conversationalist is about trying to focus on your partner’s perspective and then figuring out what they need and helping them to get it. Finally, what is your number one tip for readers to have a better conversation the next time they speak to someone? Every conversation is surprisingly tricky and complex. When things don’t go perfectly, give yourself and others more grace. There will be trips and stumbles, and then a little grace can go very, very far.
    WWW.NEWSCIENTIST.COM
    Four science-based rules that will make your conversations flow
  • Everything new at Summer Game Fest 2025: Marvel Tōkon, Resident Evil Requiem and more

    It's early June, which means it's time for a ton of video game events! Rising from the ashes of E3, Geoff Keighley's Summer Game Fest is now the premium gaming event of the year, just inching ahead of… Geoff Keighley's Game Awards in December. Unlike the show it replaced, Summer Game Fest is an egalitarian affair, spotlighting games from AAA developers and small indies across a diverse set of livestreams. SGF 2025 includes 15 individual events running from June 3-9 — you can find the full Summer Game Fest 2025 schedule here — and we're smack dab in the middle of that programming right now.
    We're covering SGF 2025 with a small team on the ground in LA and a far larger group of writers tuning in remotely to the various livestreams. Expect game previews, interviews and reactions to arrive over the coming days, and a boatload of new trailers and release date announcements in between.
    Through it all, we're collating the biggest announcements right here, with links out to more in-depth coverage where we have it, in chronological order.
    Tuesday, June 3
    State of Unreal: The Witcher IV and Fortnite AI
    Epic hitched its wagon to SGF this year, aligning its annual developer Unreal Fest conference, which last took place in the fall of 2024, with the consumer event. The conference was held in Orlando, Florida, from June 2-5, with well over a hundred developer sessions focused on Unreal Engine. The highlight was State of Unreal, which was the first event on the official Summer Game Fest schedule. Amid a bunch of very cool tech demos and announcements, we got some meaningful updates on Epic's own Fortnite and CD PROJEKT RED's upcoming The Witcher IV.

    The Witcher IV was first unveiled at The Game Awards last year, and we've heard very little about it since. At State of Unreal, we got a tech demo for Unreal Engine 5.6, played in real time on a base PS5. The roughly 10-minute slot featured a mix of gameplay and cinematics, and showed off a detailed, bustling world. Perhaps the technical highlight was Nanite Foliage, an extension of UE5's Nanite system for geometry that renders foliage without the level of detail pop-in that is perhaps the most widespread graphical aberration still plaguing games today. On the game side, we saw a town filled with hundreds of NPCs going about their business. The town itself wasn't quite on the scale of The Witcher III's Novigrad City, but nonetheless felt alive in a way beyond anything the last game achieved.
    It's fair to say that Fortnite's moment in the spotlight was… less impressive. Hot on the heels of smooshing a profane Darth Vader AI into the game, Epic announced that creators will be able to roll their own AI NPCs into the game later this year.
    Wednesday, June 4
    PlayStation State of Play: Marvel Tōkon, Silent Hill f and the return of Lumines
    Another company getting a headstart on proceedings was Sony, who threw its third State of Play of the year onto the Summer Game Fest schedule a couple days ahead of the opening night event. It was a packed stream by Sony's standards, with over 20 games and even a surprise hardware announcement.

    The most time was given to Marvel Tōkon: Fighting Souls, a new PlayStation Studios tag fighter that fuses Marvel Superheroes with anime visuals. It's also 4 versus 4, which is wild. It's being developed by Arc System Works, the team perhaps best known for the Guilty Gear series. It's coming to PS5 and PC in 2026. Not-so-coincidentally, Sony also announced Project Defiant, a wireless fight stick that'll support PS5 and PC and arrive in… 2026.
    Elsewhere, we got a parade of release dates, with concrete dates for Sword of the Sea, Baby Steps and Silent Hill f. We also got confirmation of that Final Fantasy Tactics remaster, and an all-new... let's call it aspirational "2026" date for Pragmata, which, if you're keeping score, was advertised alongside the launch of the PS5. Great going, Capcom!

    Rounding out the show was a bunch of smaller announcements. We heard about a new Nioh game, Nioh 3, coming in 2026; Suda51's new weirdness Romeo is a Dead Man; and Lumines Arise, a long-awaited return to the Lumines series from the developer behind Tetris Effect.
    Thursday, June 5
    Diddly squat
    There were absolutely no Summer Game Fest events scheduled on Thursday. We assume that's out of respect for antipodean trees, as June 5 was Arbor Day in New Zealand.
    Friday, June 6
    Summer Game Fest Live: Resident Evil Requiem, Stranger Than Heaven and sequels abound
    It's fair to say that previous Summer Game Fest opening night streams have been… whelming at best. This year's showing was certainly an improvement, not least because there were exponentially fewer mobile game and MMO ads littering the presentation. Yes, folks tracking Gabe Newell's yacht were disappointed that Half-Life 3 didn't show up, and the Silksong crowd remains sad, alone and unloved, but there were nonetheless some huge announcements.

    Perhaps the biggest of all was the "ninth" Resident Evil game. Resident Evil Requiem is said to be a tonal shift compared to the last game, Resident Evil Village. Here's hoping it reinvigorates the series in the same way Resident Evil VII did following the disappointing 6.
    We also heard more from Sega studio Ryu Ga Gotoku about Project Century, which seems to be a 1943 take on the Yakuza series. It's now called Stranger Than Heaven, and there's a jazzy new trailer for your consideration.

    Outside of those big swings, there were sequels to a bunch of mid-sized games, like Atomic Heart, Code Vein and Mortal Shell, and a spiritual sequel of sorts: Scott Pilgrim EX, a beat-em-up that takes the baton from the 2010 Ubisoft brawler Scott Pilgrim vs. the World: The Game.
    There were countless other announcements at the show, including:

    Troy Baker is the big cheese in Mouse: P.I. for Hire
    Here's a silly puppet boxing game you never knew you needed
    Killer Inn turns Werewolf into a multiplayer action game
    Out of Words is a cozy stop-motion co-op adventure from Epic Games
    Lego Voyagers is a co-op puzzle game from the studio behind Builder's Journey
    Mina the Hollower, from the makers of Shovel Knight, arrives on Halloween
    Wu-Tang Clan's new game blends anime with Afro-surrealism

    Day of the Devs: Relooted, Snap & Grab, Blighted and Escape Academy II
    As always, the kickoff show was followed by a Day of the Devs stream, which focused on smaller projects and indie games. You can watch the full stream here.
    Escape Academy has been firmly on our best couch co-op games list for some time, and now it's got a sequel on the way. Escape Academy 2: Back 2 School takes the same basic co-op escape room fun and expands on it, moving away from a level-select map screen and towards a fully 3D school campus for players to explore. So long as the puzzles themselves are as fun as the original, it seems like a winner. 

    Semblance studio Nyamakop is back with a new jam called Relooted, a heist game with a unique twist. As in the real world, museums in the West are full of items plundered from African nations under colonialism. Unlike the real world, in Relooted the colonial powers have signed a treaty to return these items to their places of origin, but things aren't going to plan, as many artifacts are finding their way into private collections. It's your job to steal them back. The British Museum is quaking in its boots.

    Here are some of the other games that caught our eye:

    Snap & Grab is No Goblin's campy, photography-based heist game
    Please, Watch the Artwork is a puzzle game with eerie paintings and a sad clown
    Bask in the grotesque pixel-art beauty of Neverway
    Pocket Boss turns corporate data manipulation into a puzzle game
    Tire Boy is a wacky open-world adventure game you can tread all over

    The rest: Ball x Pit, Hitman and 007 First Light

    After Day of the Devs came Devolver. Its Summer Game Fest show was a little more muted than usual, focusing on a single game: Ball x Pit. It's the next game from Kenny Sun, an indie developer who previously made the sleeper hit Mr. Sun's Hatbox. Ball x Pit is being made by a team of more than half a dozen devs, in contrast to Sun's mostly solo prior works. It looks like an interesting mashup of Breakout and base-building mechanics, and there's a demo on Steam available right now.

    Then came IOI, the makers of Hitman, who put together a classic E3-style cringefest, full of awkward pauses, ill-paced demos and repetitive trailers. Honestly, as someone who's been watching game company presentations for two decades or so, it was a nice moment of nostalgia. 
    Away from the marvel of a presenter trying to cope with everything going wrong, the show did have some actual content, with an extended demo of the new James Bond-themed Hitman mission, an announcement that Hitman is coming to iOS and tabletop, and a presentation on MindsEye, a game from former GTA producer Leslie Benzies that IOI is publishing.
    Saturday-Sunday: Xbox and much, much more
    Now you're all caught up. We're expecting a lot of news this weekend, mostly from Xbox on Sunday. We'll be updating this article through the weekend and beyond, but you can find the latest announcements from Summer Game Fest 2025 on our front page. This article originally appeared on Engadget.
    WWW.ENGADGET.COM
    Everything new at Summer Game Fest 2025: Marvel Tōkon, Resident Evil Requiem and more
    It's early June, which means it's time for a ton of video game events! Rising from the ashes of E3, Geoff Keighley's Summer Game Fest is now the premium gaming event of the year, just inching ahead of… Geoff Keighley's Game Awards in December. Unlike the show it replaced, Summer Game Fest is an egalitarian affair, spotlighting games from AAA developers and small indies across a diverse set of livestreams. SGF 2025 includes 15 individual events running from June 3-9 — you can find the full Summer Game Fest 2025 schedule here — and we're smack dab in the middle of that programming right now. We're covering SGF 2025 with a small team on the ground in LA and a far larger group of writers tuning in remotely to the various livestreams. Expect game previews, interviews and reactions to arrive over the coming days (the show's in-person component runs from Saturday-Monday), and a boatload of new trailers and release date announcements in between. Through it all, we're collating the biggest announcements right here, with links out to more in-depth coverage where we have it, in chronological order.

    Tuesday, June 3 State of Unreal: The Witcher IV and Fortnite AI

    Epic hitched its wagon to SGF this year, aligning its annual developer Unreal Fest conference, which last took place in the fall of 2024, with the consumer event. The conference was held in Orlando, Florida, from June 2-5, with well over a hundred developer sessions focused on Unreal Engine. The highlight was State of Unreal, which was the first event on the official Summer Game Fest schedule. Amid a bunch of very cool tech demos and announcements, we got some meaningful updates on Epic's own Fortnite and CD PROJEKT RED's upcoming The Witcher IV. The Witcher IV was first unveiled at The Game Awards last year, and we've heard very little about it since. At State of Unreal, we got a tech demo for Unreal Engine 5.6, played in real time on a base PS5.
    The roughly 10-minute slot featured a mix of gameplay and cinematics, and showed off a detailed, bustling world. Perhaps the technical highlight was Nanite Foliage, an extension of UE5's Nanite system for geometry that renders foliage without the level-of-detail pop-in that is perhaps the most widespread graphical aberration still plaguing games today. On the game side, we saw a town filled with hundreds of NPCs going about their business. The town itself wasn't quite on the scale of The Witcher III's Novigrad City, but it nonetheless felt alive in a way beyond anything the last game achieved.

    It's fair to say that Fortnite's moment in the spotlight was… less impressive. Hot on the heels of smooshing a profane Darth Vader AI into the game, Epic announced that creators will be able to roll their own AI NPCs into the game later this year.

    Wednesday, June 4 PlayStation State of Play: Marvel Tōkon, Silent Hill f and the return of Lumines

    Another company getting a head start on proceedings was Sony, which threw its third State of Play of the year onto the Summer Game Fest schedule a couple of days ahead of the opening night event. It was a packed stream by Sony's standards, with over 20 games and even a surprise hardware announcement. The most time was given to Marvel Tōkon: Fighting Souls, a new PlayStation Studios tag fighter that fuses Marvel superheroes with anime visuals. It's also 4 versus 4, which is wild. It's being developed by Arc System Works, the team perhaps best known for the Guilty Gear series, and it's coming to PS5 and PC in 2026. Not-so-coincidentally, Sony also announced Project Defiant, a wireless fight stick that'll support PS5 and PC and arrive in… 2026. Elsewhere, we got a parade of release dates, with concrete dates for Sword of the Sea (August 19), Baby Steps (September 8) and Silent Hill f (September 25). We also got confirmation of that Final Fantasy Tactics remaster (coming September 30), and an all-new...
    let's call it aspirational "2026" date for Pragmata, which, if you're keeping score, was advertised alongside the launch of the PS5. Great going, Capcom! Rounding out the show was a bunch of smaller announcements. We heard about a new Nioh game, Nioh 3, coming in 2026; Suda51's new weirdness Romeo is a Dead Man; and Lumines Arise, a long-awaited return to the Lumines series from the developer behind Tetris Effect.

    Thursday, June 5 Diddly squat

    There were absolutely no Summer Game Fest events scheduled on Thursday. We assume that's out of respect for antipodean trees, as June 5 was Arbor Day in New Zealand. (It's probably because everyone was playing Nintendo Switch 2.)

    Friday, June 6 Summer Game Fest Live: Resident Evil Requiem, Stranger Than Heaven and sequels abound

    It's fair to say that previous Summer Game Fest opening night streams have been… whelming at best. This year's showing was certainly an improvement, not least because there were far fewer mobile game and MMO ads littering the presentation. Yes, folks tracking Gabe Newell's yacht were disappointed that Half-Life 3 didn't show up, and the Silksong crowd remains sad, alone and unloved, but there were nonetheless some huge announcements. Perhaps the biggest of all was the "ninth" (Zero and Code Veronica erasure is real) Resident Evil game. Resident Evil Requiem is said to be a tonal shift compared to the last game, Resident Evil Village. Here's hoping it reinvigorates the series in the same way Resident Evil VII did following the disappointing 6. We also heard more from Sega studio Ryu Ga Gotoku about Project Century, which seems to be a 1943 take on the Yakuza series. It's now called Stranger Than Heaven, and there's a (literally) jazzy new trailer for your consideration.
    Outside of those big swings, there were sequels to a bunch of mid-sized games, like Atomic Heart, Code Vein and Mortal Shell, and a spiritual sequel of sorts: Scott Pilgrim EX, a beat-em-up that takes the baton from the 2010 Ubisoft brawler Scott Pilgrim vs. the World: The Game. There were countless other announcements at the show, including:

    Troy Baker is the big cheese in Mouse: P.I. for Hire
    Here's a silly puppet boxing game you never knew you needed
    Killer Inn turns Werewolf into a multiplayer action game
    Out of Words is a cozy stop-motion co-op adventure from Epic Games
    Lego Voyagers is a co-op puzzle game from the studio behind Builder's Journey
    Mina the Hollower, from the makers of Shovel Knight, arrives on Halloween
    Wu-Tang Clan's new game blends anime with Afro-surrealism

    Day of the Devs: Snap & Grab, Blighted and Escape Academy II

    As always, the kickoff show was followed by a Day of the Devs stream, which focused on smaller projects and indie games. You can watch the full stream here. Escape Academy has been firmly on our best couch co-op games list for some time, and now it's got a sequel on the way. Escape Academy 2: Back 2 School takes the same basic co-op escape room fun and expands on it, moving away from a level-select map screen and towards a fully 3D school campus for players to explore. So long as the puzzles themselves are as fun as the original, it seems like a winner.

    Semblance studio Nyamakop is back with a new jam called Relooted, a heist game with a unique twist. As in the real world, museums in the West are full of items plundered from African nations under colonialism. Unlike the real world, in Relooted the colonial powers have signed a treaty to return these items to their places of origin, but things aren't going to plan, as many artifacts are finding their way into private collections. It's your job to steal them back. The British Museum is quaking in its boots.
    Here are some of the other games that caught our eye:

    Snap & Grab is No Goblin's campy, photography-based heist game
    Please, Watch the Artwork is a puzzle game with eerie paintings and a sad clown
    Bask in the grotesque pixel-art beauty of Neverway
    Pocket Boss turns corporate data manipulation into a puzzle game
    Tire Boy is a wacky open-world adventure game you can tread all over

    The rest: Ball x Pit, Hitman and 007 First Light

    After Day of the Devs came Devolver. Its Summer Game Fest show was a little more muted than usual, focusing on a single game: Ball x Pit. It's the next game from Kenny Sun, an indie developer who previously made the sleeper hit Mr. Sun's Hatbox. Ball x Pit is being made by a team of more than half a dozen devs, in contrast to Sun's mostly solo prior works. It looks like an interesting mashup of Breakout and base-building mechanics, and there's a demo on Steam available right now. Then came IOI, the makers of Hitman, who put together a classic E3-style cringefest, full of awkward pauses, ill-paced demos and repetitive trailers. Honestly, as someone who's been watching game company presentations for two decades or so, it was a nice moment of nostalgia.

    Away from the marvel of a presenter trying to cope with everything going wrong, the show did have some actual content, with an extended demo of the new James Bond-themed Hitman mission, an announcement that Hitman is coming to iOS and tabletop, and a presentation on MindsEye, a game from former GTA producer Leslie Benzies that IOI is publishing.

    Saturday-Sunday: Xbox and much, much more

    Now you're all caught up. We're expecting a lot of news this weekend, mostly from Xbox on Sunday.
We'll be updating this article through the weekend and beyond, but you can find the latest announcements from Summer Game Fest 2025 on our front page. This article originally appeared on Engadget at https://www.engadget.com/gaming/everything-new-at-summer-game-fest-2025-marvel-tokon-resident-evil-requiem-and-more-185425995.html?src=rss
  • Jagex announces layoffs as it pauses Old School RuneScape's community server project

    A Jagex employee has claimed that the "majority" of those affected were from "non-game dev and non-player facing areas."
    HITMARKER.NET
  • Smashing Animations Part 4: Optimising SVGs

    SVG animations take me back to the Hanna-Barbera cartoons I watched as a kid. Shows like Wacky Races, The Perils of Penelope Pitstop, and, of course, Yogi Bear. They inspired me to lovingly recreate some classic Toon Titles using CSS, SVG, and SMIL animations.
    But getting animations to load quickly and work smoothly needs more than nostalgia. It takes clean design, lean code, and a process that makes complex SVGs easier to animate. Here’s how I do it.

    Start Clean And Design With Optimisation In Mind
    Keeping things simple is key to making SVGs that are optimised and ready to animate. Tools like Adobe Illustrator convert bitmap images to vectors, but the output often contains too many extraneous groups, layers, and masks. Instead, I start cleaning in Sketch, work from a reference image, and use the Pen tool to create paths.
    Tip: Affinity Designer and Sketch are alternatives to Adobe Illustrator and Figma. Both are independent and based in Europe. Sketch has been my default design app since Adobe killed Fireworks.

    Beginning With Outlines
    For these Toon Titles illustrations, I first use the Pen tool to draw black outlines with as few anchor points as possible. The more points a shape has, the bigger a file becomes, so simplifying paths and reducing the number of points makes an SVG much smaller, often with no discernible visual difference.

    Bearing in mind that parts of this Yogi illustration will ultimately be animated, I keep outlines for this Bewitched Bear’s body, head, collar, and tie separate so that I can move them independently. The head might nod, the tie could flap, and, like in those classic cartoons, Yogi’s collar will hide the joins between them.

    Drawing Simple Background Shapes
    With the outlines in place, I use the Pen tool again to draw new shapes, which fill the areas with colour. These colours sit behind the outlines, so they don’t need to match them exactly. The fewer anchor points, the smaller the file size.

    Sadly, neither Affinity Designer nor Sketch has tools that can simplify paths, but if you have it, using Adobe Illustrator can shave a few extra kilobytes off these background shapes.

    Optimising The Code
    It’s not just metadata that makes SVG bulkier. The way you export from your design app also affects file size.

    Exporting just those simple background shapes from Adobe Illustrator includes unnecessary groups, masks, and bloated path data by default. Sketch’s code is barely any better, and there’s plenty of room for improvement, even in its SVGO Compressor code. I rely on Jake Archibald’s SVGOMG, which uses SVGO v3 and consistently delivers the best optimised SVGs.
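    To make the cleanup concrete, here's a made-up illustration of the kind of cruft these exports often contain (the ids and nesting are invented for this example, not actual exporter output):

```svg
<!-- Typical export bloat: a redundant wrapper group and generated id soup -->
<g id="Layer_1">
  <g id="Group_12">
    <path id="Vector_7" fill="#eae3da" d="M10 10h80v60H10z"/>
  </g>
</g>

<!-- After optimisation, the same rectangle needs just one element -->
<path fill="#eae3da" d="M10 10h80v60H10z"/>
```

    Running every export through an optimiser collapses exactly this kind of nesting automatically.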

    Layering SVG Elements
    My process for preparing SVGs for animation goes well beyond drawing vectors and optimising paths — it also includes how I structure the code itself. When every visual element is crammed into a single SVG file, even optimised code can be a nightmare to navigate. Locating a specific path or group often feels like searching for a needle in a haystack.

    That’s why I develop my SVGs in layers, exporting and optimising one set of elements at a time — always in the order they’ll appear in the final file. This lets me build the master SVG gradually by pasting in each cleaned-up section. For example, I start with backgrounds like this gradient and title graphic.

    Instead of facing a wall of SVG code, I can now easily identify the background gradient’s path and its associated linearGradient, and see the group containing the title graphic. I take this opportunity to add a comment to the code, which will make editing and adding animations to it easier in the future:
    <svg ...>
    <defs>
    <!-- ... -->
    </defs>
    <path fill="url(#grad)" d="…"/>
    <!-- TITLE GRAPHIC -->
    <g>
    <path … />
    <!-- ... -->
    </g>
    </svg>

    Next, I add the blurred trail from Yogi’s airborne broom. This includes defining a Gaussian Blur filter and placing its path between the background and title layers:
    <svg ...>
    <defs>
    <linearGradient id="grad" …>…</linearGradient>
    <filter id="trail" …>…</filter>
    </defs>
    <!-- GRADIENT -->
    <!-- TRAIL -->
    <path filter="url(#trail)" …/>
    <!-- TITLE GRAPHIC -->
    </svg>

    Then come the magical stars, added in the same sequential fashion:
    <svg ...>
    <!-- GRADIENT -->
    <!-- TRAIL -->
    <!-- STARS -->
    <!-- TITLE GRAPHIC -->
    </svg>

    To keep everything organised and animation-ready, I create an empty group that will hold all the parts of Yogi:
    <g id="yogi">...</g>

    Then I build Yogi from the ground up — starting with background props, like his broom:
    <g id="broom">...</g>

    Followed by grouped elements for his body, head, collar, and tie:
    <g id="yogi">
    <g id="broom">…</g>
    <g id="body">…</g>
    <g id="head">…</g>
    <g id="collar">…</g>
    <g id="tie">…</g>
    </g>

    Since I export each layer from the same-sized artboard, I don’t need to worry about alignment or positioning issues later on — they’ll all slot into place automatically. I keep my code clean, readable, and ordered logically by layering elements this way. It also makes animating smoother, as each component is easier to identify.
    Reusing Elements With <use>
    When duplicate shapes get reused repeatedly, SVG files can get bulky fast. My recreation of the “Bewitched Bear” title card contains 80 stars in three sizes. Combining all those shapes into one optimised path would bring the file size down to 3KB. But I want to animate individual stars, which would almost double that to 5KB:
    <g id="stars">
    <path class="star-small" fill="#eae3da" d="..."/>
    <path class="star-medium" fill="#eae3da" d="..."/>
    <path class="star-large" fill="#eae3da" d="..."/>
    <!-- ... -->
    </g>

    Moving the stars’ fill attribute values to their parent group reduces the overall weight a little:
    <g id="stars" fill="#eae3da">
    <path class="star-small" d="…"/>
    <path class="star-medium" d="…"/>
    <path class="star-large" d="…"/>
    <!-- ... -->
    </g>

    But a more efficient and manageable option is to define each star size as a reusable template:

    <defs>
    <path id="star-large" fill="#eae3da" fill-rule="evenodd" d="…"/>
    <path id="star-medium" fill="#eae3da" fill-rule="evenodd" d="…"/>
    <path id="star-small" fill="#eae3da" fill-rule="evenodd" d="…"/>
    </defs>

    With this setup, changing a star’s design only means updating its template once, and every instance updates automatically. Then, I reference each one using <use> and position them with x and y attributes:
    <g id="stars">
    <!-- Large stars -->
    <use href="#star-large" x="1575" y="495"/>
    <!-- ... -->
    <!-- Medium stars -->
    <use href="#star-medium" x="1453" y="696"/>
    <!-- ... -->
    <!-- Small stars -->
    <use href="#star-small" x="1287" y="741"/>
    <!-- ... -->
    </g>

    This approach makes the SVG easier to manage, lighter to load, and faster to iterate on, especially when working with dozens of repeating elements. Best of all, it keeps the markup clean without compromising on flexibility or performance.
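    To see the whole pattern in one place, here's a minimal, self-contained sketch (the path data, viewBox and coordinates are placeholder values for illustration, not the ones from my title card):

```svg
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 200 100">
  <defs>
    <!-- One template per star size; edit it once and every instance follows -->
    <path id="star-small" fill="#eae3da" d="M5 0L6 4L10 5L6 6L5 10L4 6L0 5L4 4Z"/>
  </defs>
  <g id="stars">
    <use href="#star-small" x="20" y="10"/>
    <use href="#star-small" x="80" y="40"/>
    <use href="#star-small" x="150" y="20"/>
  </g>
</svg>
```

    Each <use> is a live reference to the template, so the browser only parses the star geometry once.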
    Adding Animations
    The stars trailing behind Yogi’s stolen broom bring so much personality to the animation. I wanted them to sparkle in a seemingly random pattern against the dark blue background, so I started by defining a keyframe animation that cycles through different opacity levels:
    @keyframes sparkle {
    0%, 100% { opacity: .1; }
    50% { opacity: 1; }
    }

    Next, I applied this looping animation to every use element inside my stars group:
    #stars use {
    animation: sparkle 10s ease-in-out infinite;
    }

    The secret to creating a convincing twinkle lies in variation. I staggered animation delays and durations across the stars using nth-child selectors, starting with the quickest and most frequent sparkle effects:
    /* Fast, frequent */
    #stars use:nth-child(…):nth-child(…) {
    animation-delay: .1s;
    animation-duration: 2s;
    }

    From there, I layered in additional timings to mix things up. Some stars sparkle slowly and dramatically, others more randomly, with a variety of rhythms and pauses:
    /* Medium */
    #stars use:nth-child(…):nth-child(…) { ... }

    /* Slow, dramatic */
    #stars use:nth-child(…):nth-child(…) { ... }

    /* Random */
    #stars use:nth-child(…) { ... }

    /* Alternating */
    #stars use:nth-child(…) { ... }

    /* Scattered */
    #stars use:nth-child(…) { ... }

    By thoughtfully structuring the SVG and reusing elements, I can build complex-looking animations without bloated code, making even a simple effect like changing opacity sparkle.
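    To make that concrete, an illustrative set of rules might look like this (the selector arguments, delays and durations here are placeholder values, not the exact ones from my demo):

```css
/* Fast, frequent twinkle on every third star */
#stars use:nth-child(3n) {
  animation-delay: .1s;
  animation-duration: 2s;
}

/* Slower, more dramatic pulses elsewhere */
#stars use:nth-child(3n + 1) {
  animation-delay: .8s;
  animation-duration: 6s;
}

/* An occasional long beat to break the rhythm */
#stars use:nth-child(7n) {
  animation-delay: 1.4s;
  animation-duration: 4s;
}
```

    Because every rule reuses the same sparkle keyframes, only the timing varies, and that variation is what sells the randomness.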

    Then, for added realism, I make Yogi’s head wobble:

    @keyframes headWobble {
    0% { transform: rotate(…) translateY(…); }
    100% { transform: rotate(…) translateY(…); }
    }

    #head {
    animation: headWobble 0.8s cubic-bezier(…) infinite alternate;
    }

    His tie waves:

    @keyframes tieWave {
    0%, 100% { transform: rotateZ(…) rotateY(…) scaleX(…); }
    33% { transform: rotateZ(…) rotateY(…) scaleX(…); }
    66% { transform: rotateZ(…) rotateY(…) scaleX(…); }
    }

    #tie {
    transform-style: preserve-3d;
    animation: tieWave 10s cubic-bezier(…) infinite;
    }

    His broom swings:

    @keyframes broomSwing {
    0%, 20% { transform: rotate(…); }
    30% { transform: rotate(…); }
    50%, 70% { transform: rotate(…); }
    80% { transform: rotate(…); }
    100% { transform: rotate(…); }
    }

    #broom {
    animation: broomSwing 4s cubic-bezier(…) infinite;
    }

    And, finally, Yogi himself gently rotates as he flies on his magical broom:

    @keyframes yogiWobble {
    0% { transform: rotate(…) translateY(…) scale(…); }
    30% { transform: rotate(…) translateY(…); }
    100% { transform: rotate(…) translateY(…) scale(…); }
    }

    #yogi {
    animation: yogiWobble 3.5s cubic-bezier(…) infinite alternate;
    }

    All these subtle movements bring Yogi to life. By developing structured SVGs, I can create animations that feel full of character without writing a single line of JavaScript.
    Try this yourself:
    See the Pen Bewitched Bear CSS/SVG animation by Andy Clarke.
    Conclusion
    Whether you’re recreating a classic title card or animating icons for an interface, the principles are the same:

    Start clean,
    Optimise early, and
    Structure everything with animation in mind.

    SVGs offer incredible creative freedom, but only if kept lean and manageable. When you plan your process like a production cel — layer by layer, element by element — you’ll spend less time untangling code and more time bringing your work to life.
    SMASHINGMAGAZINE.COM
    Smashing Animations Part 4: Optimising SVGs
    SVG animations take me back to the Hanna-Barbera cartoons I watched as a kid. Shows like Wacky Races, The Perils of Penelope Pitstop, and, of course, Yogi Bear. They inspired me to lovingly recreate some classic Toon Titles using CSS, SVG, and SMIL animations. But getting animations to load quickly and work smoothly needs more than nostalgia. It takes clean design, lean code, and a process that makes complex SVGs easier to animate. Here’s how I do it.

    Start Clean And Design With Optimisation In Mind
    Keeping things simple is key to making SVGs that are optimised and ready to animate. Tools like Adobe Illustrator convert bitmap images to vectors, but the output often contains too many extraneous groups, layers, and masks. Instead, I start cleaning in Sketch, work from a reference image, and use the Pen tool to create paths.
    Tip: Affinity Designer (UK) and Sketch (Netherlands) are alternatives to Adobe Illustrator and Figma. Both are independent and based in Europe. Sketch has been my default design app since Adobe killed Fireworks.

    Beginning With Outlines
    For these Toon Titles illustrations, I first use the Pen tool to draw black outlines with as few anchor points as possible. The more points a shape has, the bigger a file becomes, so simplifying paths and reducing the number of points makes an SVG much smaller, often with no discernible visual difference. Bearing in mind that parts of this Yogi illustration will ultimately be animated, I keep outlines for this Bewitched Bear’s body, head, collar, and tie separate so that I can move them independently. The head might nod, the tie could flap, and, like in those classic cartoons, Yogi’s collar will hide the joins between them.

    Drawing Simple Background Shapes
    With the outlines in place, I use the Pen tool again to draw new shapes, which fill the areas with colour. These colours sit behind the outlines, so they don’t need to match them exactly. The fewer anchor points, the smaller the file size.
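Point reduction can also be scripted. The standard way to simplify a polyline within a visual tolerance is the Ramer-Douglas-Peucker algorithm; below is a minimal Python sketch of it (my illustration of the general technique, not part of the article's design-tool workflow):

```python
import math

def _perp_dist(pt, a, b):
    # Perpendicular distance from pt to the line through a and b.
    (x, y), (x1, y1), (x2, y2) = pt, a, b
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def simplify(points, tolerance):
    # Ramer-Douglas-Peucker: if every interior point lies within
    # `tolerance` of the chord between the endpoints, keep only the
    # endpoints; otherwise split at the farthest point and recurse.
    if len(points) < 3:
        return list(points)
    dists = [_perp_dist(p, points[0], points[-1]) for p in points[1:-1]]
    imax = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[imax - 1] <= tolerance:
        return [points[0], points[-1]]
    left = simplify(points[:imax + 1], tolerance)
    right = simplify(points[imax:], tolerance)
    return left[:-1] + right  # drop the duplicated split point
```

Here `simplify([(0, 0), (1, 0.05), (2, 0), (3, 2), (4, 0)], 0.1)` drops the near-collinear `(1, 0.05)` while keeping the sharp corner at `(3, 2)`, which is exactly the trade-off described above: fewer anchor points with no discernible visual difference.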
Sadly, neither Affinity Designer nor Sketch has tools that can simplify paths, but if you have it, using Adobe Illustrator can shave a few extra kilobytes off these background shapes.

    Optimising The Code
    It’s not just metadata that makes SVGs bulkier. The way you export from your design app also affects file size. Exporting just those simple background shapes from Adobe Illustrator includes unnecessary groups, masks, and bloated path data by default. Sketch’s code is barely any better, and there’s plenty of room for improvement, even in its SVGO Compressor code. I rely on Jake Archibald’s SVGOMG, which uses SVGO v3 and consistently delivers the best optimised SVGs.

    Layering SVG Elements
    My process for preparing SVGs for animation goes well beyond drawing vectors and optimising paths — it also includes how I structure the code itself. When every visual element is crammed into a single SVG file, even optimised code can be a nightmare to navigate. Locating a specific path or group often feels like searching for a needle in a haystack. That’s why I develop my SVGs in layers, exporting and optimising one set of elements at a time — always in the order they’ll appear in the final file. This lets me build the master SVG gradually by pasting in each cleaned-up section. For example, I start with backgrounds like this gradient and title graphic. Instead of facing a wall of SVG code, I can now easily identify the background gradient’s path and its associated linearGradient, and see the group containing the title graphic. I take this opportunity to add a comment to the code, which will make editing and adding animations to it easier in the future:

    <svg ...>
      <defs>
        <!-- ... -->
      </defs>
      <path fill="url(#grad)" d="…"/>
      <!-- TITLE GRAPHIC -->
      <g>
        <path … />
        <!-- ... -->
      </g>
    </svg>

    Next, I add the blurred trail from Yogi’s airborne broom.
This includes defining a Gaussian Blur filter and placing its path between the background and title layers:

    <svg ...>
      <defs>
        <linearGradient id="grad" …>…</linearGradient>
        <filter id="trail" …>…</filter>
      </defs>
      <!-- GRADIENT -->
      <!-- TRAIL -->
      <path filter="url(#trail)" …/>
      <!-- TITLE GRAPHIC -->
    </svg>

    Then come the magical stars, added in the same sequential fashion:

    <svg ...>
      <!-- GRADIENT -->
      <!-- TRAIL -->
      <!-- STARS -->
      <!-- TITLE GRAPHIC -->
    </svg>

    To keep everything organised and animation-ready, I create an empty group that will hold all the parts of Yogi:

    <g id="yogi">...</g>

    Then I build Yogi from the ground up — starting with background props, like his broom:

    <g id="broom">...</g>

    Followed by grouped elements for his body, head, collar, and tie:

    <g id="yogi">
      <g id="broom">…</g>
      <g id="body">…</g>
      <g id="head">…</g>
      <g id="collar">…</g>
      <g id="tie">…</g>
    </g>

    Since I export each layer from the same-sized artboard, I don’t need to worry about alignment or positioning issues later on — they’ll all slot into place automatically. I keep my code clean, readable, and ordered logically by layering elements this way. It also makes animating smoother, as each component is easier to identify.

    Reusing Elements With <use>
    When duplicate shapes get reused repeatedly, SVG files can get bulky fast. My recreation of the “Bewitched Bear” title card contains 80 stars in three sizes. Combining all those shapes into one optimised path would bring the file size down to 3KB. But I want to animate individual stars, which would almost double that to 5KB:

    <g id="stars">
      <path class="star-small" fill="#eae3da" d="..."/>
      <path class="star-medium" fill="#eae3da" d="..."/>
      <path class="star-large" fill="#eae3da" d="..."/>
      <!-- ... -->
    </g>

    Moving the stars’ fill attribute values to their parent group reduces the overall weight a little:

    <g id="stars" fill="#eae3da">
      <path class="star-small" d="…"/>
      <path class="star-medium" d="…"/>
      <path class="star-large" d="…"/>
      <!-- ...
    -->
    </g>

    But a more efficient and manageable option is to define each star size as a reusable template:

    <defs>
      <path id="star-large" fill="#eae3da" fill-rule="evenodd" d="…"/>
      <path id="star-medium" fill="#eae3da" fill-rule="evenodd" d="…"/>
      <path id="star-small" fill="#eae3da" fill-rule="evenodd" d="…"/>
    </defs>

    With this setup, changing a star’s design only means updating its template once, and every instance updates automatically. Then, I reference each one using <use> and position them with x and y attributes:

    <g id="stars">
      <!-- Large stars -->
      <use href="#star-large" x="1575" y="495"/>
      <!-- ... -->
      <!-- Medium stars -->
      <use href="#star-medium" x="1453" y="696"/>
      <!-- ... -->
      <!-- Small stars -->
      <use href="#star-small" x="1287" y="741"/>
      <!-- ... -->
    </g>

    This approach makes the SVG easier to manage, lighter to load, and faster to iterate on, especially when working with dozens of repeating elements. Best of all, it keeps the markup clean without compromising on flexibility or performance.

    Adding Animations
    The stars trailing behind Yogi’s stolen broom bring so much personality to the animation. I wanted them to sparkle in a seemingly random pattern against the dark blue background, so I started by defining a keyframe animation that cycles through different opacity levels:

    @keyframes sparkle {
      0%, 100% { opacity: .1; }
      50% { opacity: 1; }
    }

    Next, I applied this looping animation to every use element inside my stars group:

    #stars use {
      animation: sparkle 10s ease-in-out infinite;
    }

    The secret to creating a convincing twinkle lies in variation. I staggered animation delays and durations across the stars using nth-child selectors, starting with the quickest and most frequent sparkle effects:

    /* Fast, frequent */
    #stars use:nth-child(n + 1):nth-child(-n + 10) {
      animation-delay: .1s;
      animation-duration: 2s;
    }

    From there, I layered in additional timings to mix things up.
Some stars sparkle slowly and dramatically, others more randomly, with a variety of rhythms and pauses:

    /* Medium */
    #stars use:nth-child(n + 11):nth-child(-n + 20) { ... }

    /* Slow, dramatic */
    #stars use:nth-child(n + 21):nth-child(-n + 30) { ... }

    /* Random */
    #stars use:nth-child(3n + 2) { ... }

    /* Alternating */
    #stars use:nth-child(4n + 1) { ... }

    /* Scattered */
    #stars use:nth-child(n + 31) { ... }

    By thoughtfully structuring the SVG and reusing elements, I can build complex-looking animations without bloated code, making even a simple effect like changing opacity sparkle. Then, for added realism, I make Yogi’s head wobble:

    @keyframes headWobble {
      0% { transform: rotate(-0.8deg) translateY(-0.5px); }
      100% { transform: rotate(0.9deg) translateY(0.3px); }
    }

    #head {
      animation: headWobble 0.8s cubic-bezier(0.5, 0.15, 0.5, 0.85) infinite alternate;
    }

    His tie waves:

    @keyframes tieWave {
      0%, 100% { transform: rotateZ(-4deg) rotateY(15deg) scaleX(0.96); }
      33% { transform: rotateZ(5deg) rotateY(-10deg) scaleX(1.05); }
      66% { transform: rotateZ(-2deg) rotateY(5deg) scaleX(0.98); }
    }

    #tie {
      transform-style: preserve-3d;
      animation: tieWave 10s cubic-bezier(0.68, -0.55, 0.27, 1.55) infinite;
    }

    His broom swings:

    @keyframes broomSwing {
      0%, 20% { transform: rotate(-5deg); }
      30% { transform: rotate(-4deg); }
      50%, 70% { transform: rotate(5deg); }
      80% { transform: rotate(4deg); }
      100% { transform: rotate(-5deg); }
    }

    #broom {
      animation: broomSwing 4s cubic-bezier(0.5, 0.05, 0.5, 0.95) infinite;
    }

    And, finally, Yogi himself gently rotates as he flies on his magical broom:

    @keyframes yogiWobble {
      0% { transform: rotate(-2.8deg) translateY(-0.8px) scale(0.998); }
      30% { transform: rotate(1.5deg) translateY(0.3px); }
      100% { transform: rotate(3.2deg) translateY(1.2px) scale(1.002); }
    }

    #yogi {
      animation: yogiWobble 3.5s cubic-bezier(.37, .14, .3, .86) infinite alternate;
    }

    All these subtle movements bring Yogi to life.
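Those `nth-child(an+b)` bands can be sanity-checked by computing which 1-based indices each formula actually selects. A quick sketch (the helper function is mine; 80 matches the star count mentioned earlier):

```python
def nth_child_matches(a, b, count):
    # Indices selected by :nth-child(an+b) among `count` children,
    # evaluated for n = 0, 1, 2, ... with 1-based indices, as CSS does.
    return {a * n + b for n in range(count + 1)} & set(range(1, count + 1))

STARS = 80
# "Fast, frequent": :nth-child(n + 1):nth-child(-n + 10) intersects two bands.
fast = sorted(nth_child_matches(1, 1, STARS) & nth_child_matches(-1, 10, STARS))
# "Random": :nth-child(3n + 2) picks every third star, starting at the second.
random_band = sorted(nth_child_matches(3, 2, STARS))
```

This confirms the "fast, frequent" rule covers exactly stars 1-10, while `3n + 2` scatters 27 stars (2, 5, 8, ...) across the whole field, which is why layering the formulas produces a convincingly irregular twinkle.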
By developing structured SVGs, I can create animations that feel full of character without writing a single line of JavaScript. Try this yourself: See the Pen Bewitched Bear CSS/SVG animation [forked] by Andy Clarke.

    Conclusion
    Whether you’re recreating a classic title card or animating icons for an interface, the principles are the same: Start clean, Optimise early, and Structure everything with animation in mind. SVGs offer incredible creative freedom, but only if kept lean and manageable. When you plan your process like a production cell — layer by layer, element by element — you’ll spend less time untangling code and more time bringing your work to life.
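One practical footnote on the `<use>` technique: with 80 star instances, the repetitive markup is worth generating rather than hand-writing. A hypothetical Python sketch (the function name is mine; coordinates are illustrative):

```python
def stamp_stars(positions):
    # Build <use> instances for star templates already defined in <defs>.
    # `positions` maps a template id (e.g. "star-large") to (x, y) pairs.
    lines = ['<g id="stars">']
    for template_id, coords in positions.items():
        for x, y in coords:
            lines.append(f'  <use href="#{template_id}" x="{x}" y="{y}"/>')
    lines.append('</g>')
    return '\n'.join(lines)

markup = stamp_stars({"star-large": [(1575, 495)], "star-small": [(1287, 741)]})
```

The generated group can then be pasted into the master SVG in the same layered, sequential fashion described above.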
  • ElevenLabs debuts Conversational AI 2.0 voice assistants that understand when to pause, speak, and take turns talking


    AI is advancing at a rapid clip for businesses, and that’s especially true of speech and voice AI models.
    Case in point: Today, ElevenLabs, the well-funded voice and AI sound effects startup founded by former Palantir engineers, debuted Conversational AI 2.0, a significant upgrade to its platform for building advanced voice agents for enterprise use cases, such as customer support, call centers, and outbound sales and marketing.
    This update introduces a host of new features designed to create more natural, intelligent, and secure interactions, making it well-suited for enterprise-level applications.
    The launch comes just four months after the debut of the original platform, reflecting ElevenLabs’ commitment to rapid development, and a day after rival voice AI startup Hume launched its own new, turn-based voice AI model, EVI 3.
    It also comes after new open source AI voice models hit the scene, prompting some AI influencers to declare ElevenLabs dead. It seems those declarations were, naturally, premature.
    According to Jozef Marko from ElevenLabs’ engineering team, Conversational AI 2.0 is substantially better than its predecessor, setting a new standard for voice-driven experiences.
    Enhancing naturalistic speech
    A key highlight of Conversational AI 2.0 is its state-of-the-art turn-taking model.
    This technology is designed to handle the nuances of human conversation, eliminating awkward pauses or interruptions that can occur in traditional voice systems.
    By analyzing conversational cues like hesitations and filler words in real-time, the agent can understand when to speak and when to listen.
    This feature is particularly relevant for applications such as customer service, where agents must balance quick responses with the natural rhythms of a conversation.
    Multilingual support
    Conversational AI 2.0 also introduces integrated language detection, enabling seamless multilingual discussions without the need for manual configuration.
    This capability ensures that the agent can recognize the language spoken by the user and respond accordingly within the same interaction.
    The feature caters to global enterprises seeking consistent service for diverse customer bases, removing language barriers and fostering more inclusive experiences.
    Enterprise-grade
    One of the more powerful additions is the built-in Retrieval-Augmented Generation (RAG) system. This feature allows the AI to access external knowledge bases and retrieve relevant information instantly, while maintaining minimal latency and strong privacy protections.
    For example, in healthcare settings, this means a medical assistant agent can pull up treatment guidelines directly from an institution’s database without delay. In customer support, agents can access up-to-date product details from internal documentation to assist users more effectively.
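For readers new to the term, retrieval-augmented generation simply means fetching the most relevant documents from a knowledge base and handing them to the model alongside the user's query. A toy sketch of the retrieval step (this is the general pattern only, not ElevenLabs' implementation; the scoring is a naive word overlap):

```python
def retrieve(query, knowledge_base, k=1):
    # Rank documents by naive word overlap with the query (toy RAG retrieval).
    q_words = set(query.lower().split())
    scored = sorted(
        knowledge_base,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

kb = [
    "Treatment guidelines: administer dose every 6 hours.",
    "Shipping policy: orders ship within 2 business days.",
]
context = retrieve("what are the treatment guidelines", kb)
# The retrieved context would then be prepended to the agent's prompt.
```

Production systems replace the word-overlap score with vector embeddings and an index tuned for low latency, but the shape of the pipeline (retrieve, then generate with the retrieved context) is the same.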
    Multimodality and alternate personas
    In addition to these core features, ElevenLabs’ new platform supports multimodality, meaning agents can communicate via voice, text, or a combination of both. This flexibility reduces the engineering burden on developers, as agents only need to be defined once to operate across different communication channels.
    Further enhancing agent expressiveness, Conversational AI 2.0 allows multi-character mode, enabling a single agent to switch between different personas. This capability could be valuable in scenarios such as creative content development, training simulations, or customer engagement campaigns.
    Batch outbound calling
    For enterprises looking to automate large-scale outreach, the platform now supports batch calls.
    Organizations can initiate multiple outbound calls simultaneously using Conversational AI agents, an approach well-suited for surveys, alerts, and personalized messages.
    This feature aims to increase both reach and operational efficiency, offering a more scalable alternative to manual outbound efforts.
    Enterprise-grade standards and pricing plans
    Beyond the features that enhance communication and engagement, Conversational AI 2.0 places a strong emphasis on trust and compliance. The platform is fully HIPAA-compliant, a critical requirement for healthcare applications that demand strict privacy and data protection. It also supports optional EU data residency, aligning with data sovereignty requirements in Europe.
    ElevenLabs reinforces these compliance-focused features with enterprise-grade security and reliability. Designed for high availability and integration with third-party systems, Conversational AI 2.0 is positioned as a secure and dependable choice for businesses operating in sensitive or regulated environments.
    As far as pricing is concerned, here are the available subscription plans that include Conversational AI currently listed on ElevenLabs’ website:

    Free: $0/month, includes 15 minutes, 4 concurrency limit, requires attribution and no commercial licensing.
    Starter: $5/month, includes 50 minutes, 6 concurrency limit.
    Creator: $11/month (discounted from $22), includes 250 minutes, 6 concurrency limit, ~$0.12 per additional minute.
    Pro: $99/month, includes 1,100 minutes, 10 concurrency limit, ~$0.11 per additional minute.
    Scale: $330/month, includes 3,600 minutes, 20 concurrency limit, ~$0.10 per additional minute.
    Business: $1,320/month, includes 13,750 minutes, 30 concurrency limit, ~$0.096 per additional minute.

    A new chapter in realistic, naturalistic AI voice interactions
    As stated in the company’s video introducing the new release, “The potential of conversational AI has never been greater. The time to build is now.”
    With Conversational AI 2.0, ElevenLabs aims to provide the tools and infrastructure for enterprises to create truly intelligent, context-aware voice agents that elevate the standard of digital interactions.
    For those interested in learning more, ElevenLabs encourages developers and organizations to explore its documentation, visit the developer portal, or reach out to the sales team to see how Conversational AI 2.0 can enhance their customer experiences.

    VENTUREBEAT.COM
    ElevenLabs debuts Conversational AI 2.0 voice assistants that understand when to pause, speak, and take turns talking
    Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More AI is advancing at a rapid clip for businesses, and that’s especially true of speech and voice AI models. Case in point: Today, ElevenLabs, the well-funded voice and AI sound effects startup founded by former Palantir engineers, debuted Conversational AI 2.0, a significant upgrade to its platform for building advanced voice agents for enterprise use cases, such as customer support, call centers, and outbound sales and marketing. This update introduces a host of new features designed to create more natural, intelligent, and secure interactions, making it well-suited for enterprise-level applications. The launch comes just four months after the debut of the original platform, reflecting ElevenLabs’ commitment to rapid development, and a day after rival voice AI startup Hume launched its own new, turn-based voice AI model, EVI 3. It also comes after new open source AI voice models hit the scene, prompting some AI influencers to declare ElevenLabs dead. It seems those declarations were, naturally, premature. According to Jozef Marko from ElevenLabs’ engineering team, Conversational AI 2.0 is substantially better than its predecessor, setting a new standard for voice-driven experiences. Enhancing naturalistic speech A key highlight of Conversational AI 2.0 is its state-of-the-art turn-taking model. This technology is designed to handle the nuances of human conversation, eliminating awkward pauses or interruptions that can occur in traditional voice systems. By analyzing conversational cues like hesitations and filler words in real-time, the agent can understand when to speak and when to listen. This feature is particularly relevant for applications such as customer service, where agents must balance quick responses with the natural rhythms of a conversation. 
Multilingual support Conversational AI 2.0 also introduces integrated language detection, enabling seamless multilingual discussions without the need for manual configuration. This capability ensures that the agent can recognize the language spoken by the user and respond accordingly within the same interaction. The feature caters to global enterprises seeking consistent service for diverse customer bases, removing language barriers and fostering more inclusive experiences. Enterprise-grade One of the more powerful additions is the built-in Retrieval-Augmented Generation (RAG) system. This feature allows the AI to access external knowledge bases and retrieve relevant information instantly, while maintaining minimal latency and strong privacy protections. For example, in healthcare settings, this means a medical assistant agent can pull up treatment guidelines directly from an institution’s database without delay. In customer support, agents can access up-to-date product details from internal documentation to assist users more effectively. Multimodality and alternate personas In addition to these core features, ElevenLabs’ new platform supports multimodality, meaning agents can communicate via voice, text, or a combination of both. This flexibility reduces the engineering burden on developers, as agents only need to be defined once to operate across different communication channels. Further enhancing agent expressiveness, Conversational AI 2.0 allows multi-character mode, enabling a single agent to switch between different personas. This capability could be valuable in scenarios such as creative content development, training simulations, or customer engagement campaigns. Batch outbound calling For enterprises looking to automate large-scale outreach, the platform now supports batch calls.\ Organizations can initiate multiple outbound calls simultaneously using Conversational AI agents, an approach well-suited for surveys, alerts, and personalized messages. 
This feature aims to increase both reach and operational efficiency, offering a more scalable alternative to manual outbound efforts. Enterprise-grade standards and pricing plans Beyond the features that enhance communication and engagement, Conversational AI 2.0 places a strong emphasis on trust and compliance. The platform is fully HIPAA-compliant, a critical requirement for healthcare applications that demand strict privacy and data protection. It also supports optional EU data residency, aligning with data sovereignty requirements in Europe. ElevenLabs reinforces these compliance-focused features with enterprise-grade security and reliability. Designed for high availability and integration with third-party systems, Conversational AI 2.0 is positioned as a secure and dependable choice for businesses operating in sensitive or regulated environments. As far as pricing is concerned, here are the available subscription plans that include Conversational AI currently listed on ElevenLabs’ website: Free: $0/month, includes 15 minutes, 4 concurrency limit, requires attribution and no commercial licensing. Starter: $5/month, includes 50 minutes, 6 concurrency limit. Creator: $11/month (discounted from $22), includes 250 minutes, 6 concurrency limit, ~$0.12 per additional minute. Pro: $99/month, includes 1,100 minutes, 10 concurrency limit, ~$0.11 per additional minute. Scale: $330/month, includes 3,600 minutes, 20 concurrency limit, ~$0.10 per additional minute. Business: $1,320/month, includes 13,750 minutes, 30 concurrency limit, ~$0.096 per additional minute. A new chapter in realistic, naturalistic AI voice interactions As stated in the company’s video introducing the new release, “The potential of conversational AI has never been greater. 
The time to build is now." With Conversational AI 2.0, ElevenLabs aims to provide the tools and infrastructure for enterprises to create truly intelligent, context-aware voice agents that elevate the standard of digital interactions. For those interested in learning more, ElevenLabs encourages developers and organizations to explore its documentation, visit the developer portal, or reach out to the sales team to see how Conversational AI 2.0 can enhance their customer experiences.
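As a rough way to compare the plans above at a given usage level, a small helper can estimate the monthly bill as base fee plus per-minute overage. The rates are taken from the listed figures; actual billing terms may differ.

```python
def monthly_cost(base_fee: float, included_minutes: int,
                 overage_rate: float, minutes_used: int) -> float:
    """Estimate a month's bill: base fee plus overage on minutes past the allowance."""
    extra = max(0, minutes_used - included_minutes)
    return base_fee + extra * overage_rate

# Creator plan figures from the article: $11/month, 250 minutes, ~$0.12/min after.
print(monthly_cost(11, 250, 0.12, minutes_used=300))   # 11 + 50 * 0.12 = 17.0
# Pro plan under its included allowance stays at the base fee.
print(monthly_cost(99, 1100, 0.11, minutes_used=800))  # 99
```

Running the same usage figure across every plan makes the crossover points visible, e.g. where Creator's overage charges start to exceed Pro's base fee.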
  • From artificial to authentic

    Developing creative intuition, leaning into courage, and resisting editing away our unique voice.

    I pasted an Oscar Wilde quote into Notion the other day. As soon as I did, the AI symbol popped up with the "improve writing" suggestion.

    A quote by Oscar Wilde

    I didn't click it, but it made me think… If Oscar Wilde were alive today, would he too have been lured to change his words with AI? Would he have trusted AI more than his creative intuition? AI would most likely have edited Wilde's voice by removing "unnecessary" words and simplifying sentences. But would it have been an improvement? If his work is no longer in his voice, how can we say it's better?

    Our authentic voice is our work

    As creatives, our authentic voice is our work, whether we're writers, singers, designers, painters, or sculptors. I've realized lately that I wish for us all to become less concerned with being perfect and more concerned with developing our unique voice and following our own intuition. When we read poetry, we learn that a sentence might not be perfectly correct but it speaks directly to our hearts. It breaks grammatical rules, but it's also able to break us open in ways we could only imagine.

    With our computers constantly prompting us to change and "improve" our writing, thinking, and making, we have to ensure we don't lose our unique expression. We must make sure that we don't lose touch with our creative intuition, that we continue to lean into courage, and that we don't edit away what makes our work distinctly ours.

    DesignShift: From artificial to authentic

    1. Keep developing your own voice

    When I use AI for my writing, I often find myself questioning if the AI's version is really better than my own. I'm frequently confused about why it changed something, and even when I ask about the rationale, I find the explanation isn't that convincing. Some would tell me that I'm just not prompting AI well enough to get the best result, but I keep asking myself what this tool is in service of.

    However, I've noticed how our tools encourage perfection, and doubt can start to creep in when AI suggests one thing and our intuition tells us something different. This happens to me on days when I show up to work with self-doubt, days when I'm deep in uncertainty about my own abilities. On those days, I trust AI more, and the prompt to change my words makes my swaying confidence even rockier.

    On days like these, I remind myself of poetry. Through poetry, we learn that a sentence might not be perfectly correct but it speaks directly to our hearts. It breaks grammatical rules, but it's also able to break us open in ways we could only imagine. One such powerful voice is Maya Angelou, whose words "just do right" have stayed with me. In her wisdom, she says:

    "You know what's right. Just do right. You don't really have to ask anybody. The truth is, right may not be expedient, it may not be profitable, but it will satisfy… your soul."

    Image from https://bookstr.com/article/10-writing-quotes-from-maya-angelou-to-inspire-you/

    These words move with rhythm, but they also remind us that we DO know what's right. No one knows our voice better than we do. And that is what people want to hear. We don't always have to ask someone else or ChatGPT for a better way to say something. Trusting our own voice makes all the difference. The same way that a design that breaks the rules sometimes becomes more impactful, I remind myself that embracing my unique voice will take me further than a perfectly crafted bullet-style post powered by a robot.

    2. The courage to be seen

    The other day, I read a quote that said "creativity is the courage to be seen." While writing this post, this quote kept surfacing in my mind. As creatives, it takes courage to show up as our unique selves. It takes courage to show both the good and the bad. It takes courage to be all that we are. The reward for showing up vulnerably and authentically is connection. How we connect to topics. To someone's story. To each other. When someone speaks from their heart, unedited and unfiltered, it helps us feel something. Connection happens when someone truly sees us for who we are and embraces all of it. That is true connection.

    There's a difference between the desire to be seen and the courage to be seen. The desire is often rooted in external validation: wanting to be liked and wanted. Much of our online world is crafted this way. We edit (with or without AI) in order to be liked and followed. We make sure that our voice matches our brand, and we craft one-minute elevator pitches to ensure people understand exactly who we are and what we have to offer. However, the courage that helps us connect to others lies beyond the poses and the polish. The courage to be seen is about showing up as our full selves.

    3. Connection happens in the cracks

    Connections and feelings are found in the cracks. They are discovered between the lines, in the awkward pauses and the unpolished thoughts. They exist in unedited, real expressions rather than perfectly written, bullet-pointed lists generated by a robot. As Joshua Schrei said on the Emerald podcast:

    "Art dies when culture decides that there is a certain way you have to say certain things. Then you don't have art. You have a press release."

    Poetry, art, and the human experience itself thrive in their willingness to not make complete sense. For example, the raw, uninhibited expressions of artists like Jean-Michel Basquiat show us that perfection isn't necessary for profound impact. When we share our authentic selves, we invite others to do the same. We often think that the world expects and craves perfection. We're taught rules… but the human experience is flawed. The cracks make us able to connect with others.

    Creativity is about connection, and connections are formed in the cracks. When someone shows their weakness or vulnerability, we get permission to show ours. At the heart of it all are feelings. Creative work is about feelings, and even though ChatGPT can act empathetic, it's not the same as real feelings. Because real connection is built through brokenness. It's in the cracks that connections are formed.

    In times of robots

    If Oscar Wilde lived today, would AI have given him prompts? Would AI suggest "improvements" to the works of literary and artistic icons? Would Midjourney have offered to enhance Jean-Michel Basquiat's expressive style? Would these creative icons have been lured to edit their unique expression to appeal to the masses at the creative direction of a robot? My intuition tells me that they would have resisted the prompts and leaned into their uniqueness even more, and that is what I hope for all creatives today.

    With our computers constantly prompting us to change and "improve" our own writing, thinking, and making, we have to ensure we don't lose our unique expression. We must make sure that we don't lose touch with our creative intuition and that we don't edit away the uniqueness and the cracks that breed connections.

    In times of robots, I hope we can lean into our humanness even more. In times of robots, I hope we will remind ourselves and each other that our unique voices matter. In times of robots, I hope you will connect through your cracks without editing your uniqueness.

    Links and resources:
    - Maya Angelou: Just do right (video)
    - Trickster Jumps Sides: Disruption and the Anatomy of Culture (podcast)
    - DesignShifts: a better future for and through design (website)
    - The Power of Poetry | Shayna Castano | TEDxLSSC (TED Talk)
    - Burning Questions: James Victore is an irreverent prophet for the creative industries (article)

    From artificial to authentic was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • Mission: Impossible Box Office Deja Vu: Tom Cruise Has Second Good Opening Against Lilo & Stitch 

    We’re not sure if he chose to accept it intentionally or not, but Tom Cruise has cleared his mission in providing movie theaters with a healthy opening weekend against Disney’s bizarre, Elvis-loving alien for the second time in 23 years. Yep, more than two decades after Cruise shared the same opening frame with the animated Lilo & Stitch in 2002—when the hand-drawn Gen-Z classic went head to head with Cruise and Steven Spielberg’s neo noir sci-fi, Minority Report—the movie star has danced with the little space dude again via Mission: Impossible – The Final Reckoning opening opposite the Lilo & Stitch remake.
    And this time, the pecking order is reversed.

    Twenty-three years ago, it was considered almost ho-hum when Minority Report topped out above Lilo & Stitch and both films managed to gross north of $35 million. This was otherwise business as usual in a healthy summer movie season where the real anomaly was that the first Spider-Man had become the first movie to cross $100 million in a weekend a month earlier. At the time, Minority Report did slightly better with $35.7 million versus Lilo's $35.2 million. But in the year of our streaming lord 2025, it's a big win for movie theaters that both Final Reckoning and ESPECIALLY Disney's mostly live-action remake have generated the biggest Memorial Day weekend ever in the U.S., albeit now with Lilo on top via its estimated $180 million opening across four days. For the record, this also snags another benchmark from Cruise by taking the biggest Memorial Day opening record from Top Gun: Maverick ($161 million in 2022). Furthermore, Lilo earned a jaw-dropping $342 million worldwide.
    Meanwhile, Mission: Impossible – The Final Reckoning is projected to have opened at $77 million across its first four days, and $63 million over the first three days. Some will likely speculate about how this can make up for the film's much-gossiped-about budget, with Puck News estimating the eighth Mission film cost a gargantuan sum, but taken in the perspective of the whole franchise, this is a very good start for The Final Reckoning, which suffered both COVID filming pauses and delays, and later had to suspend production because of the 2023 labor strikes.

    For context, the best opening the M:I series had previously seen came when Mission: Impossible – Fallout debuted during a conventional three-day weekend in 2018. That movie is also one of the finest action films ever produced and received an "A" CinemaScore. In retrospect, when even a masterpiece of blockbuster cinema like that could not push the series' openings dramatically higher, a definite ceiling on the franchise's earning potential seemed to have slowly materialized in recent years. Consider that before Fallout, the best opening in the series belonged to Mission: Impossible II back in 2000, a clean quarter-century ago.
    In other words, the series' most popular days are long behind it. Nonetheless, not accounting for inflation, The Final Reckoning has enjoyed the largest opening weekend in the series' history, even when you discount the holiday Monday that buoys its four-day total. In one sense, this proves that the goodwill Cruise and Ethan Hunt can still generate with their most loyal audience remains sky high. In another, it is also confirmation that regaining control of IMAX screens is crucial in the 2020s for a blockbuster with a loyal but relatively contained audience.
    After all, this is a big gain for the franchise over Dead Reckoning, which, despite earning a higher CinemaScore grade from polled audiences than Final Reckoning, opened lower two years prior, likely in part because audiences were saving their ticket-buying money for Barbenheimer the following weekend, when Christopher Nolan's Oppenheimer commandeered all the IMAX screens from Mission.
    At the end of the day, The Final Reckoning was able to grow business and audience interest over Dead Reckoning and set a franchise record in spite of opening in the same weekend as Disney’s lovable little alien.
    Whether it is enough to justify the rumored price tag is a horse of a different color. However, Cruise has positioned himself as such a champion of movie theater owners and the box office in a post-COVID world that he can certainly take a victory lap for helping deliver a historic win for the industry this Memorial Day. And frankly, given how we remain skeptical that The Final Reckoning
    #mission #impossible #box #office #deja
    Mission: Impossible Box Office Deja Vu: Tom Cruise Has Second Good Opening Against Lilo & Stitch 
    WWW.DENOFGEEK.COM
    Mission: Impossible Box Office Deja Vu: Tom Cruise Has Second Good Opening Against Lilo & Stitch 
    We’re not sure if he chose to accept it intentionally or not, but Tom Cruise has cleared his mission in providing movie theaters with a healthy opening weekend against Disney’s bizarre, Elvis-loving alien for the second time in 23 years. Yep, more than two decades after Cruise shared the same opening frame with the animated Lilo & Stitch in 2002—when the hand-drawn classic went head to head with Cruise and Steven Spielberg’s neo-noir sci-fi, Minority Report—the movie star has danced with the little space dude again via Mission: Impossible – The Final Reckoning opening opposite the Lilo & Stitch remake.

    And this time, the pecking order is reversed. Twenty-three years ago, it was considered almost ho-hum when Minority Report topped out above Lilo & Stitch and both films managed to gross north of $35 million. This was otherwise business as usual in a healthy summer movie season where the real anomaly was that the first Spider-Man had become the first movie to cross $100 million in a weekend a month earlier. At the time, Minority Report did slightly better with $35.7 million versus Lilo’s $35.2 million.

    But in the year of our streaming lord 2025, it’s a big win for movie theaters that both Final Reckoning and ESPECIALLY Disney’s mostly live-action remake have generated the biggest Memorial Day weekend ever in the U.S., albeit now with Lilo on top via its estimated $180 million opening across four days. For the record, this also snags another benchmark from Cruise by taking the biggest Memorial Day opening record from Top Gun: Maverick ($161 million in 2022). Furthermore, Lilo earned a jaw-dropping $342 million worldwide. Meanwhile, Mission: Impossible – The Final Reckoning is projected to have opened at $77 million across its first four days, and $63 million over the first three days.
Some will likely speculate whether this can make up for the much-gossiped-about budget of the film—with Puck News estimating the eighth Mission film cost a gargantuan $400 million—but taken in perspective of the whole franchise, this is a very good start for The Final Reckoning, which suffered both COVID pauses and delays during filming, and then later had to suspend production because of the 2023 labor strikes.

For context, the previous best opening the M:I series ever saw was when Mission: Impossible – Fallout debuted to $61 million during a conventional three-day weekend in 2018. That movie is also one of the finest action films ever produced and received an “A” CinemaScore. In retrospect, when a masterpiece of blockbuster cinema like that could not clear $70 million, a definite ceiling on the franchise’s earning potential had materialized. Consider that before Fallout, the best opening in the series belonged to Mission: Impossible II back in 2000, a clean quarter-century ago, when it made $58 million (or about $108 million in 2025 dollars). In other words, the series’ most popular days are long behind it.

Nonetheless, when not adjusting for inflation, The Final Reckoning has enjoyed the largest opening weekend in the series’ history—even when you discount the holiday Monday that buoys its opening weekend to $77 million. In one sense, this proves that the goodwill Cruise and Ethan Hunt can still generate with their most loyal audience remains sky-high (consider that, according to Deadline, Final Reckoning’s biggest demo was audience members over the age of 55!). In another, it is also confirmation that regaining control of IMAX screens is crucial in the 2020s for a blockbuster with a loyal but relatively contained audience.

After all, this is a big gain for the franchise over Dead Reckoning, which, despite earning a higher CinemaScore grade from audiences polled than Final Reckoning (an “A” vs. an “A-”), opened below $55 million two years earlier, likely in part because audiences were saving their ticket-buying money for Barbenheimer the following weekend, which included Christopher Nolan’s Oppenheimer commandeering all the IMAX screens from Mission.

At the end of the day, The Final Reckoning was able to grow business and audience interest over Dead Reckoning and set a franchise record in spite of opening the same weekend as Disney’s lovable little alien. Whether it is enough to justify the rumored $400 million price tag is a horse of a different color. However, Cruise has positioned himself as such a champion of movie theater owners and the box office in a post-COVID world that he can certainly take a victory lap for helping deliver a historic win for the industry this Memorial Day. And frankly, given how we remain skeptical that The Final Reckoning