• OpenAI’s next big bet won’t be a wearable: report

    In Brief

    Posted:
    9:38 PM PDT · May 21, 2025

    Image Credits: Eugene Gologursky/The New York Times / Getty Images


    OpenAI pushed generative AI into the public consciousness. Now, it could be developing a very different kind of AI device.
    According to a WSJ report, OpenAI CEO Sam Altman told employees Wednesday that the company’s next major product won’t be a wearable. Instead, it will be a compact, screenless device, fully aware of its user’s surroundings and small enough to sit on a desk or fit in a pocket. Altman described it as both a “third core device” alongside a MacBook Pro and iPhone, and an “AI companion” integrated into daily life.
    The preview followed OpenAI’s announcement that it will acquire io, a startup founded just last year by former Apple designer Jony Ive, in a $6.5 billion equity deal. Ive will take on a key creative and design role at OpenAI.
    Altman reportedly told employees the acquisition could eventually add $1 trillion in market value to the company as it creates a new category of devices unlike the handhelds, wearables, or glasses that other outfits have rolled out.
    Altman also reportedly emphasized to staff that secrecy will be critical to prevent competitors from copying the product before launch. A recording of his remarks leaked to the Journal raises questions about how much he can trust his own team and how much more he’ll be willing to disclose.

    techcrunch.com
  • I thought my favorite browser blocked trackers but this free privacy tool proved me wrong

    Cover Your Tracks opened my eyes... and made me switch browsers ASAP.
    www.zdnet.com
  • Snooze Button Pressed Over 55% Of Time After Sleep, Alarm, Study Says

    You could say that people are hitting the snooze button at a rather alarming frequency. Over half (55.6%) of the sleep sessions recorded in a study published in the journal Scientific Reports ended with a press of the snooze button. In fact, when people pressed the snooze button, they tended to do it again and again, hitting it an average of 2.4 times per sleep session for an average of 10.8 minutes of extra snooze. So if you find yourself regularly using the snooze button like so many of the study participants, should you just let such behavior rest? Or would this be a you-snooze-you-lose situation?

    Snooze Button Study Used Data From SleepCycle App
    First, here’s a heads up (as opposed to a heads down on the pillow) about the study that produced these results. The study was an analysis of data from 21,222 people in different parts of the world using a smartphone app named SleepCycle. Most (43.6%) of the participants were in the United States, followed by 12.7% from the United Kingdom, 9.9% from Japan, 6.5% from Australia and 6.2% from Germany. The app can function as an alarm clock, allowing the user to choose either a traditional snooze, where hitting a snooze button turns off the alarm for a specified duration before it goes off again, or what’s called a “smart snooze,” where the alarm will sound again depending on where someone is in his or her sleep cycle. A team from the Brigham and Women’s Hospital (Rebecca Robbins, Matthew D. Weaver, Stuart F. Quan and Charles A. Czeisler) and Sleep Cycle (Daniel Sääf and Michael Gradisar) conducted the study.

    Of note, the researchers tossed out any sleep sessions that were less than four hours. That’s probably because sleeping for less than four hours is more of a nap than a full I’m-going-to-get-in-my-jammies-and-see-you-in-the-morning sleep session. This left 3,017,276 recorded sleep sessions from July 1, 2022, through December 31, 2022, to be analyzed for the study.

    Snooze Button Use More Common On Weekdays And During Colder, Less Daylit Months
    Snooze button behavior did vary by day of the week. Not surprisingly, it was more common to hit the snooze button Monday through Friday than it was on weekends. Any guesses as to why this was the case? It wouldn’t happen to be a word that rhymes with twerk, would it? Although the study didn’t track specifically why people hit the snooze button, it’s likely that work had something to do with this trend.

    Snooze button behavior also varied somewhat by month of the year. In the Northern Hemisphere, December had on average the highest amount of snooze use, with the snooze button being pushed an average of 2.62 times for 11.83 minutes of snooze per sleep session. By contrast, September had the lowest snooze alarm activity, with averages of 2.40 times and 10.58 minutes.
    Guess what happened in the Southern Hemisphere? Yep, this was flipped around with July being the snooziest month with an average of 2.35 snooze alarm presses and 10.2 minutes of snooze per sleep session and November being the least snoozy month at 2.29 and 10.12 minutes.
    So it looks like the months that are traditionally the coldest, with the shortest durations of daylight, had the greatest snooze button activity. This probably isn’t super surprising either, since getting out of bed when it’s cold and dark may not be as easy as when it’s warm and sunny outside.

    Sweden Had Highest Snooze Button Use, Japan The Lowest
    There wasn’t a huge amount of variation by country, although Sweden came out on top in terms of snooze alarm use (an average of 2.7 times) and snooze sleep (11.7 minutes). Those in Japan used snooze alarms the least (2.2 times) with the least snooze sleep (9.2 minutes). Australians also used the snooze alarm 2.2 times on average. The United States came in third in both categories at 2.5 times and 11.3 minutes. Naturally, a country’s averages shouldn’t necessarily apply to everyone in that country. In other words, should you encounter someone from Sweden, it’s not appropriate to say, “I bet you hit the snooze button more often.”
    Women Used The Snooze Button More Often Than Men
    Then there was sex, meaning the sex of the participants. Women on average hit the snooze button more often (2.5 times per sleep session) than men (2.3 times). In the process, women spent more time on the snooze (11.5 minutes versus 10.2 minutes).
    So, what might this say about women and men? Again, population averages don’t necessarily reflect what’s happening with each individual. Plus, such a population cohort study doesn’t tell you what’s happening at the individual level. Does this mean that more women are getting less restful sleep than men? Does this mean that more women are dreading the day, whether due to having more work or more unpleasant circumstances than men? It’s difficult to say from this study alone.
    Hitting The Snooze Button Doesn’t Provide Restful Sleep
    One thing’s for sure: that extra bit of shut-eye after the alarm goes off won’t be the same as getting that amount added to your sleep in an uninterrupted manner. I have written previously in Forbes about the importance of regularly getting enough sleep and the potential health consequences of not doing so. A good night’s sleep doesn’t just mean a certain total number of hours and minutes, no matter how they add up. Instead, it means cycling sequentially through all of the following stages of sleep, as described by Eric Suni for the Sleep Foundation:

    Stage 1 (N1): This is when you first fall asleep. It is the lightest stage of sleep and averages one to seven minutes in duration.
    Stage 2 (N2): Here your body relaxes more, heart rate falls, breathing becomes less frequent and body temperature drops. This stage tends to last 10 to 25 minutes.
    Stage 3 (N3 or deep sleep): This is also known as delta sleep or slow-wave sleep and is when sleep gets deep enough to be more restorative. It typically lasts 20 to 40 minutes.
    Stage 4 (REM sleep): Here REM stands for rapid eye movement and not the musical group that sang “Everybody Hurts.” This is where you tend to dream, with a fair amount of brain activity while your body becomes temporarily paralyzed. This stage tends to last 10 to 60 minutes.

    Now, you may cycle through these stages multiple times during a lengthy sleep session. But you have to go through the stages in the above order. Usually, you won’t hit the pillow and suddenly be in REM sleep, for example. The same applies when you are falling back asleep.
    Therefore, hitting the snooze button will likely get you to no more than Stage 1 sleep, if that. This wouldn’t bring you anywhere near restorative sleep. In essence, snooze time is lose time. You are losing time being either half or lightly asleep.
    Therefore, it’s better to wake up and get up after that first alarm goes off. Otherwise, you are only delaying the inevitable. Ideally, you wouldn’t even need the alarm and would be waking up naturally, excited to welcome the new day. But that’s another story.
    Hitting The Snooze Button Regularly Suggests That You Need To Adjust Your Sleeping Habits
    If you find yourself relying on that snooze button regularly, chances are you aren’t getting enough sleep. Therefore, it’s better to either get to sleep earlier on a regular basis or set your alarm for a later time for when you really are going to get up and stay awake. While the snooze button may seem like a nice sleep preserver, it really isn’t. You may not know what you really lose when you snooze.
    www.forbes.com
  • The Alienware x16 R2 gaming laptop with RTX 4090 is $900 off

    You have to be prepared to spend a significant amount of cash if you want a powerful gaming laptop, but you should also be on the lookout for any opportunities at savings. Take a look at Alienware deals at Dell, which has tempting offers like this one: the Alienware x16 R2 with the Nvidia GeForce RTX 4090 graphics card with a $900 discount. From its original price of $3,600, it’s down to $2,700, which is still pretty expensive, but an excellent price for a device of its caliber. You need to hurry though, as it may be back to its regular price as soon as tomorrow!

    Why you should buy the Alienware x16 R2 gaming laptop
    The Nvidia GeForce RTX 4090 graphics card that’s found in this configuration of the Alienware x16 R2 is an extremely powerful GPU. When you combine it with the Intel Core Ultra 9 185H processor and 32GB of RAM, which our guide on how much RAM you need says is the sweet spot for high-end gamers, you’ll enjoy an unparalleled gaming experience when playing the best PC games — and that’s even if you select the most demanding settings.
    The Alienware x16 R2 is equipped with a 16-inch screen with Full HD+ resolution and a 480Hz refresh rate, which will allow it to do justice to modern graphics. You’ll be able to install several titles on the gaming laptop as it comes with a 2TB SSD, and with Windows 11 Home out of the box, you can start building your video game library as soon as you turn on the Alienware x16 R2 for the first time.
    Gamers who want an upgrade should check out gaming laptop deals, as there are some excellent bargains on top-of-the-line models. Here’s one from Dell — the Alienware x16 R2 with the Nvidia GeForce RTX 4090 graphics card for $2,700, for savings of $900 on its sticker price of $3,600. We don’t expect the discount to stick around for much longer though, so if you want to take advantage of this offer, there’s only one thing to do: add the Alienware x16 R2 gaming laptop to your cart and finish the checkout process immediately.
    www.digitaltrends.com
    You have to be prepared to spend a significant amount of cash if you want a powerful gaming laptop, but you should also be on the lookout for any opportunities at savings. Take a look at Alienware deals at Dell, which has tempting offers like this one: the Alienware x16 R2 with the Nvidia GeForce RTX 4090 graphics card with a $900 discount. From its original price of $3,600, it’s down to $2,700, which is still pretty expensive, but an excellent price for a device of its caliber. You need to hurry though, as it may be back to its regular price as soon as tomorrow! Why you should buy the Alienware x16 R2 gaming laptop The Nvidia GeForce RTX 4090 graphics card that’s found in this configuration of the Alienware x16 R2 is an extremely powerful GPU. When you combine it with the Intel Core Ultra 9 185H processor and 32GB of RAM, which our guide on how much RAM you need says is the sweet spot for high-end gamers, you’ll enjoy an unparalleled gaming experience when playing the best PC games — and that’s even if you select the most demanding settings. The Alienware x16 R2 is equipped with a 16-inch screen with Full HD+ resolution and a 480Hz refresh rate, which will allow it to give justice to modern graphics. You’ll be able to install several titles on the gaming laptop as it comes with a 2TB SSD, and with Windows 11 Home out of the box, you can start building your video game library as soon as you turn on the Alienware x16 R2 for the first time. Gamers who want an upgrade should check out gaming laptop deals, as there are some excellent bargains on top-of-the-line models. Here’s one from Dell — the Alienware x16 R2 with the Nvidia GeForce RTX 4090 graphics card for $2,700, for savings of $900 on its sticker price of $3,600. We don’t expect the discount to stick around for much longer though, so if you want to take advantage of this offer, there’s only one thing to do: add the Alienware x16 R2 gaming laptop to your cart and finish the checkout process immediately.
    0 Commentaires ·0 Parts ·0 Aperçu
  • What Sam Altman Told OpenAI About the Secret Device He’s Making With Jony Ive

    The idea is a “chance to do the biggest thing we’ve ever done as a company here,” Altman told OpenAI employees Wednesday.
    www.wsj.com
  • Jane Goodall, 91, on being objectified early in her career: 'If my legs were getting me the money, thank you legs'

    Jane Goodall says she was objectified by male scientists when she first appeared on the cover of National Geographic.

    Robin L Marshall/Getty Images

    2025-05-22T04:30:39Z


    Jane Goodall, 91, says she was objectified by her male peers early in her career.
    "Back then, all I wanted was to get back to the chimps. So if my legs were getting me the money, thank you legs," she said.
    While her experience happened years ago, gender inequality persists in the workplace.

    Jane Goodall, 91, may be one of the world's leading primatologists now, but there was a time when she wasn't taken seriously.

    During an appearance on Tuesday's "Call Her Daddy" podcast, Goodall reflected on the challenges she faced in her decadeslong career. Goodall told podcast host Alex Cooper that her love for animals started when she read "Tarzan of the Apes" as a child.

    "Anyway, I knew there wasn't a Tarzan. But that's when my dream began," Goodall said. "I will grow up, go to Africa, live with wild animals, and write books — no thought of being a scientist."

    Most people around her thought her dream was unrealistic, except her mother, she said.

    "And everybody said, 'That's ridiculous. I mean, you don't have money. Africa's far away and you're just a girl,'" Goodall said.

    Years later, Goodall appeared on the cover of National Geographic. She recalled being objectified by others in the scientific community who said that her looks, not her research, earned her the spotlight.

    "Well, some of the jealous male scientists would say, well, you know, she's just got this notoriety and she's getting money from Geographic, and they want her on the cover, and they wouldn't put her on the cover if she didn't have nice legs," Goodall said.

    If someone had said that today, they'd be sued, she added.

    "Back then, all I wanted was to get back to the chimps. So if my legs were getting me the money, thank you legs. And if you look at those covers, they were jolly nice legs," Goodall said.

    The English conservationist acknowledged that things are different now.

    "I did it by accepting that, in a way, they were right. So, thank you for giving me this advantage. It was good to give me that money," Goodall said. "I know that for me it was a long time ago. It was a different era. It wouldn't work today."

    While Goodall's experience may have unfolded years ago, gender inequality persists in the workplace. Sexism at work comes in many forms, including wage disparities, stereotypes, and harassment. Several female celebrities have also spoken up about the discrimination they faced in Hollywood. In an interview with Porter magazine in November 2023, Anne Hathaway said she was told her career would "fall off a cliff" after she turned 35. In January 2024, Sofia Vergara told the LA Times that her acting jobs were limited because of her "stupid accent." Kathy Bates told Variety in September that she could have a long acting career only because she "wasn't a beauty queen."

    A representative for Goodall did not immediately respond to a request for comment sent by Business Insider outside regular hours.

    www.businessinsider.com
  • Live Updates From Google I/O 2025

    © Gizmodo I wish I was making this stuff up, but chaos seems to follow me at all tech events. After waiting an hour to try out Google's hyped-up Android XR smart glasses for five minutes, I was actually given a three-minute demo, in which I had 90 seconds to use Gemini in an extremely controlled environment. And if you watch the video in my hands-on write-up below, you'll see that I spent even less time with it, because Gemini fumbled a few times in the beginning. Oof. I really hope there's another chance to try them again, because it was just too rushed. I think it might be the most rushed product demo I've ever had in my life, and I've been covering new gadgets for the past 15 years. —Raymond Wong

    Google, a company valued in the trillions, seemingly brought one pair of Android XR smart glasses for press to demo… and one pair of Samsung's Project Moohan mixed reality headset running the same augmented reality platform. I'm told the wait is 1 hour to try either device for 5 minutes. Of course, I'm going to try out the smart glasses. But if I want to demo Moohan, I need to get back in line and wait all over again. This is madness! —Raymond Wong

    May 20: Keynote Fin

    © Raymond Wong / Gizmodo Talk about a loooooong keynote. Total duration: 1 hour and 55 minutes, and then Sundar Pichai walked off stage. What do you make of all the AI announcements? Let's hang in the comments! I'm headed over to a demo area to try out a pair of Android XR smart glasses. I can't lie: even though the video stream from the live demo lagged for a good portion, I'm hyped! It really feels like Google is finally delivering on Google Glass over a decade later. Shoulda had Google co-founder Sergey Brin jump out of a helicopter and land on stage again, though. —Raymond Wong

    Pieces of Project Astra, Google's computer vision-based UI, are winding up in various products, it seems, and not all of them are geared toward smart glasses specifically. One of the most exciting updates to Astra is "computer control," which lets you do a lot more on your devices with computer vision alone. For instance, you could just point your phone at an object (a bike, say) and then ask Astra to search for the bike, find some brakes for it, and then even pull up a YouTube tutorial on how to fix it, all without typing anything into your phone. —James Pero

    Shopping bots aren't just for scalpers anymore. Google is putting the power of automated consumerism in your hands with its new AI shopping tool. There are some pretty wild ideas here, too, including a virtual shopping avatar that's supposed to represent your own body; the idea is you can make it try on clothes to see how they fit. How all that works in practice is TBD, but if you're ready for a full AI shopping experience, you've finally got it. For the whole story, check out our story from Gizmodo's Senior Editor, Consumer Tech, Raymond Wong. —James Pero

    I got what I wanted. Google showed off what its Android XR tech can bring to smart glasses. In a live demo, Google showcased how a pair of unspecified smart glasses did a few of the things that I've been waiting to do, including projecting live navigation and remembering objects in your environment: basically the stuff it pitched with Project Astra last year, but in a glasses form factor. There's still a lot that needs to happen, both hardware- and software-wise, before you can walk around wearing glasses that actually do all those things, but it was exciting to see that Google is making progress in that direction. It's worth noting that not all of the demos went off smoothly (there was a lot of stutter in the live translation demo), but I guess props to them for giving it a go. When we'll actually get to walk around wearing functional smart glasses with some kind of optical passthrough or virtual display is anyone's guess, but the race is certainly heating up. —James Pero

    Google's SynthID has been around for nearly three years, but it's been largely kept out of the public eye. The system embeds in AI-generated images, video, or audio an invisible watermark that can be detected with Google DeepMind's proprietary tool. At I/O, Google said it was working with both Nvidia and GetReal to introduce the same watermarking technique to those companies' AI image generators. Users may be able to detect these watermarks themselves, even if only part of the media was modified with AI. Early testers are getting access to it "today," but hopefully more people can access it at a later date from labs.google/synthid. — Kyle Barr

    This keynote has been going on for 1.5 hours now. Do I run to the restroom now or wait? But how much longer until it ends??? Can we petition Sundar Pichai to make these keynotes shorter, or at least have an intermission? Update: I ran for it right near the end, before the Android XR news hit. I almost made it… —Raymond Wong

    © Raymond Wong / Gizmodo Google's new video generator, Veo, is getting a big upgrade that includes sound generation, and it's not just dialogue. Veo 3 can also generate sound effects and music. In a demo, Google showed off an animated forest scene that includes all three: dialogue, sound effects, and video. The length of clips, I assume, will be short at first, but the results look pretty sophisticated if the demo is to be believed. —James Pero

    If you pay for a Google One subscription, you'll start to see Gemini in your Google Chrome browser later this week. It will appear as the sparkle icon at the top of your browser app. You can use it to bring up a prompt box and ask a question about the current page you're browsing, such as if you want to consolidate a number of user reviews for a local campsite. — Kyle Barr

    © Google / GIF by Gizmodo Google's high-tech video conferencing tech, now called Beam, looks impressive. You can make eye contact! It feels like the person on the screen is right in front of you! It's glasses-free 3D! Come back down to Earth, buddy: it's not coming out as a consumer product. Commercial first, with partners like HP. Time to apply for a new job? —Raymond Wong

    Google doesn't want Search to be tied to your browser or apps anymore. Search Live is akin to the video and audio comprehension capabilities of Gemini Live, but with the added benefit of getting quick answers based on sites from around the web. Google showed how Search Live could comprehend queries about an at-home science experiment and bring in answers from sites like Quora or YouTube. — Kyle Barr

    Google is getting deep into augmented reality with Android XR, its operating system built specifically for AR glasses and VR headsets. Google showed us how users may be able to see a holographic live Google Maps view directly on their glasses or set up calendar events, all without needing to touch a single screen. This uses Gemini AI to comprehend your voice prompts and follow through on your instructions. Google doesn't have its own device to share at I/O, but it's planning to work with companies like Xreal and Samsung to craft new devices across both AR and VR. — Kyle Barr

    I know how much you all love subscriptions! Google does too, apparently, and is now offering a monthly AI bundle that groups some of its most advanced AI services. Subscribing to Google AI Ultra will get you:

    - Gemini and its full capabilities
    - Flow, a new, more advanced AI filmmaking tool based on Veo
    - Whisk, which allows text-to-image creation
    - NotebookLM, an AI note-taking app
    - Gemini in Gmail and Docs
    - Gemini in Chrome
    - Project Mariner, an agentic research AI
    - 30TB of storage

    I'm not sure who needs all of this, but maybe there are more AI superusers than I thought. —James Pero

    Google CEO Sundar Pichai was keen to claim that users are big, big fans of AI Overviews in Google Search results. If there wasn't already enough AI on your search bar, Google will now stick an entire "AI Mode" tab on your search bar next to the Google Lens button. This encompasses the Gemini 2.5 model. It opens up an entirely new UI for searching via a prompt with a chatbot. After you input your rambling search query, it will bring up an assortment of short-form textual answers, links, and even a Google Maps widget, depending on what you were looking for. AI Mode should be available starting today. Google said AI Mode pulls together information from the web alongside its other data, like weather or academic research through Google Scholar. It should also eventually encompass your "personal context," which will be available later this summer. Eventually, Google will add more AI Mode capabilities directly to AI Overviews. — Kyle Barr

    May 20: News Embargo Has Lifted!

    © Xreal Get your butt over to Gizmodo.com's home page, because the Google I/O news embargo just lifted. We've got a bunch of stories, including this one about Google partnering up with Xreal for a new pair of "optical see-through" smart glasses called Project Aura. The smart glasses run Android XR and are powered by a Qualcomm chip. You can see three cameras. Wireless, these are not; you'll need to tether to a phone or other device. Update: Little scoop: I've confirmed that Project Aura has a 70-degree field of view, which is way wider than the One Pro's 57-degree FOV. —Raymond Wong

    © Raymond Wong / Gizmodo Google DeepMind's CEO showed off the updated version of Project Astra running on a phone and drove home how its "personal, proactive, and powerful" AI features are the groundwork for a "universal assistant" that truly understands and works on your behalf. If you think Gemini is a fad, it's time to get familiar with it, because it's not going anywhere. —Raymond Wong

    May 20: Gemini 2.5 Pro Is Here

    © Gizmodo Google says Gemini 2.5 Pro is its "most advanced model yet," and it comes with "enhanced reasoning," better coding ability, and the power to create interactive simulations. You can try it now via Google AI Studio. —James Pero

    There are two major types of transformer AI used today: the LLM, AKA the large language model, and the diffusion model, which is mostly used for image generation. The Gemini Diffusion model blurs the line between the two. Google said its new research model can iterate on a solution quickly and correct itself while generating an answer. For math or coding prompts, Gemini Diffusion can potentially output an entire response much faster than a typical chatbot. Unlike a traditional LLM, which may take a few seconds to answer a question, Gemini Diffusion can create a response to a complex math equation in the blink of an eye, and still share the steps it took to reach its conclusion. — Kyle Barr

    © Gizmodo New Gemini 2.5 Flash and Gemini Pro models are incoming, and, naturally, Google says both are faster and more sophisticated across the board. One of the improvements for Gemini 2.5 Flash is even more inflection when speaking. Unfortunately for my ears, Google demoed the new Flash speaking in a whisper that sent chills down my spine. —James Pero

    Is anybody keeping track of how many times Google execs have said "Gemini" and "AI" so far? Oops, I think I'm already drunk, and we're only 20 minutes in. —Raymond Wong

    © Raymond Wong / Gizmodo Google's Project Astra is supposed to be getting much better at avoiding hallucinations, AKA when the AI makes stuff up. Project Astra's vision and audio comprehension capabilities are supposed to be far better at knowing when you're trying to trick it. In a video, Google showed how its Gemini Live AI wouldn't buy your bullshit if you tell it that a garbage truck is a convertible, a lamp pole is a skyscraper, or your shadow is some stalker. This should hopefully mean the AI doesn't confidently lie to you, as well. Google CEO Sundar Pichai said, "Gemini is really good at telling you when you're wrong." These enhanced features should be rolling out today for the Gemini app on iOS and Android. — Kyle Barr

    May 20: Release the Agents

    Like pretty much every other AI player, Google is pursuing agentic AI in a big way. I'd prepare for a lot more talk about how Gemini can take tasks off your hands as the keynote progresses. —James Pero

    © Gizmodo Google has finally moved Project Starline, its futuristic video-calling machine, into a commercial project called Google Beam. According to Pichai, Google Beam can take a 2D image and transform it into a 3D one, and will also incorporate live translation. —James Pero

    © Gizmodo Google's CEO, Sundar Pichai, says Google is shipping at a relentless pace, and to be honest, I tend to agree. There are tons of Gemini models out there already, even though it's only been out for two years. Probably my favorite milestone, though, is that Gemini has now completed Pokémon Blue, earning all 8 badges, according to Pichai. —James Pero

    May 20: Let's Do This

    Buckle up, kiddos, it's I/O time. Methinks there will be a lot to get to, so you may want to grab a snack now. —James Pero

    Counting down until the keynote… only a few more minutes to go. The DJ just said AI is changing music and how it's made. But don't forget that we're all here… in person. Will we all be wearing Android XR smart glasses next year? Mixed reality headsets? —Raymond Wong

    © Raymond Wong / Gizmodo Fun fact: I haven't attended Google I/O in person since before Covid-19. The Wi-Fi is definitely stronger and more stable now. It's so great to be back and covering for Gizmodo. Dream job, unlocked! —Raymond Wong

    © Raymond Wong / Gizmodo Mini breakfast burritos… bagels… but these bagels can't compare to real made-in-New-York-City bagels with that authentic NY water. —Raymond Wong

    © Raymond Wong / Gizmodo I've arrived at the Shoreline Amphitheatre in Mountain View, Calif., where the Google I/O keynote is taking place in 40 minutes. Seats are filling up. But first, I must go check out the breakfast situation, because my tummy is growling… —Raymond Wong

    May 20: Should We Do a Giveaway?

    © Raymond Wong / Gizmodo Google I/O attendees get a special tote bag, a metal water bottle, a cap, and a cute sheet of stickers. I always end up donating this stuff to Goodwill during the holidays. A guy living in NYC with two cats only has so much room for tote bags and water bottles… Would be cool to do a giveaway. Leave a comment to let us know if you'd be into that, and I can pester top brass to make it happen. —Raymond Wong

    May 20: Got My Press Badge!

    In 13 hours, Google will blitz everyone with Gemini AI, Gemini AI, and tons more Gemini AI. Who's ready for… Gemini AI? —Raymond Wong

    May 19: Google Glass: The Redux

    © Google / Screenshot by Gizmodo Google is very obviously inching toward the release of some kind of smart glasses product for the first time since Google Glass, and if I were a betting man, I'd say this one will have a much warmer reception than its forebear. I'm not saying Google can snatch the crown from Meta and its Ray-Ban smart glasses right out of the gate, but if it plays its cards right, it could capitalize on the integration with its other hardware in a big way. Meta may finally have a real competitor on its hands. ICYMI: Here's Google's President of the Android Ecosystem, Sameer Samat, teasing some kind of smart glasses device in a recorded demo last week. —James Pero

    Hi folks, I'm James Pero, Gizmodo's new Senior Writer. There's a lot we have to get to with Google I/O, so I'll keep this introduction short. I like long walks on the beach, the wind in my nonexistent hair, and I'm really, really looking forward to bringing you even more of the spicy, insightful, and entertaining coverage on consumer tech that Gizmodo is known for. I'm starting my tenure here out hot with Google I/O, so make sure you check back here throughout the week to get those sweet, sweet blogs and commentary from me and Gizmodo's Senior Consumer Tech Editor Raymond Wong. —James Pero

    © Raymond Wong / Gizmodo Hey everyone! Raymond Wong, senior editor in charge of Gizmodo's consumer tech team, here! I've landed in San Francisco, and I'll be making my way over to Mountain View, California, later today to pick up my press badge and scope out the scene for tomorrow's Google I/O keynote, which kicks off at 1 p.m. ET / 10 a.m. PT. Google I/O is a developer conference, but that doesn't mean its news is only for engineers. While there will be a lot of nerdy stuff that will have developers hollering, what Google announces (expect updates on Gemini AI, Android, and Android XR, to name a few headliners) will shape consumer products for the rest of this year and the years to come. I/O is a glimpse at Google's technology roadmap as AI weaves itself into the way we compute at our desks and on the go. This is going to be a fun live blog! —Raymond Wong
If there wasn’t already enough AI on your search bar, Google will now stick an entire “AI Mode” tab on your search bar next to the Google Lens button. This encompasses the Gemini 2.5 model. This opens up an entirely new UI for searching via a prompt with a chatbot. After you input your rambling search query, it will bring up an assortment of short-form textual answers, links, and even a Google Maps widget depending on what you were looking for. AI Mode should be available starting today. Google said AI Mode pulls together information from the web alongside its other data like weather or academic research through Google Scholar. It should also eventually encompass your “personal context,” which will be available later this summer. Eventually, Google will add more AI Mode capabilities directly to AI Overviews. — Kyle Barr May 20News Embargo Has Lifted! © Xreal Get your butt over to Gizmodo.com’s home page because the Google I/O news embargo just lifted. We’ve got a bunch of stories, including this one about Google partnering up with Xreal for a new pair of “optical see-through”smart glasses called Project Aura. The smart glasses run Android XR and are powered by a Qualcomm chip. You can see three cameras. Wireless, these are not—you’ll need to tether to a phone or other device. Update: Little scoop: I’ve confirmed that Project Aura has a 70-degree field of view, which is way wider than the One Pro’s FOV, which is 57 degrees. —Raymond Wong © Raymond Wong / Gizmodo Google’s DeepMind CEO showed off the updated version of Project Astra running on a phone and drove home how its “personal, proactive, and powerful” AI features are the groundwork for a “universal assistant” that truly understands and works on your behalf. If you think Gemini is a fad, it’s time to get familiar with it because it’s not going anywhere. 
—Raymond Wong May 20Gemini 2.5 Pro Is Here © Gizmodo Google says Gemini 2.5 Pro is its “most advanced model yet,” and comes with “enhanced reasoning,” better coding ability, and can even create interactive simulations. You can try it now via Google AI Studio. —James Pero There are two major types of transformer AI used today. One is the LLM, AKA large language models, and diffusion models—which are mostly used for image generation. The Gemini Diffusion model blurs the lines of these types of models. Google said its new research model can iterate on a solution quickly and correct itself while generating an answer. For math or coding prompts, Gemini Diffusion can potentially output an entire response much faster than a typical Chatbot. Unlike a traditional LLM model, which may take a few seconds to answer a question, Gemini Diffusion can create a response to a complex math equation in the blink of an eye, and still share the steps it took to reach its conclusion. — Kyle Barr © Gizmodo New Gemini 2.5 Flash and Gemini Pro models are incoming and, naturally, Google says both are faster and more sophisticated across the board. One of the improvements for Gemini 2.5 Flash is even more inflection when speaking. Unfortunately for my ears, Google demoed the new Flash speaking in a whisper that sent chills down my spine. —James Pero Is anybody keeping track of how many times Google execs have said “Gemini” and “AI” so far? Oops, I think I’m already drunk, and we’re only 20 minutes in. —Raymond Wong © Raymond Wong / Gizmodo Google’s Project Astra is supposed to be getting much better at avoiding hallucinations, AKA when the AI makes stuff up. Project Astra’s vision and audio comprehension capabilities are supposed to be far better at knowing when you’re trying to trick it. In a video, Google showed how its Gemini Live AI wouldn’t buy your bullshit if you tell it that a garbage truck is a convertible, a lamp pole is a skyscraper, or your shadow is some stalker. 
This should hopefully mean the AI doesn’t confidently lie to you, as well. Google CEO Sundar Pichai said “Gemini is really good at telling you when you’re wrong.” These enhanced features should be rolling out today for Gemini app on iOS and Android. — Kyle Barr May 20Release the Agents Like pretty much every other AI player, Google is pursuing agentic AI in a big way. I’d prepare for a lot more talk about how Gemini can take tasks off your hands as the keynote progresses. —James Pero © Gizmodo Google has finally moved Project Starline—its futuristic video-calling machine—into a commercial project called Google Beam. According to Pichai, Google Beam can take a 2D image and transform it into a 3D one, and will also incorporate live translate. —James Pero © Gizmodo Google’s CEO, Sundar Pichai, says Google is shipping at a relentless pace, and to be honest, I tend to agree. There are tons of Gemini models out there already, even though it’s only been out for two years. Probably my favorite milestone, though, is that it has now completed Pokémon Blue, earning all 8 badges according to Pichai. —James Pero May 20Let’s Do This Buckle up, kiddos, it’s I/O time. Methinks there will be a lot to get to, so you may want to grab a snack now. —James Pero Counting down until the keynote… only a few more minutes to go. The DJ just said AI is changing music and how it’s made. But don’t forget that we’re all here… in person. Will we all be wearing Android XR smart glasses next year? Mixed reality headsets? —Raymond Wong © Raymond Wong / Gizmodo Fun fact: I haven’t attended Google I/O in person since before Covid-19. The Wi-Fi is definitely stronger and more stable now. It’s so great to be back and covering for Gizmodo. Dream job, unlocked! 
—Raymond Wong © Raymond Wong / Gizmodo Mini breakfast burritos… bagels… but these bagels can’t compare to real Made In New York City bagels with that authentic NY water 😏 —Raymond Wong © Raymond Wong / Gizmodo © Raymond Wong / Gizmodo © Raymond Wong / Gizmodo © Raymond Wong / Gizmodo I’ve arrived at the Shoreline Amphitheatre in Mountain View, Calif., where the Google I/O keynote is taking place in 40 minutes. Seats are filling up. But first, must go check out the breakfast situation because my tummy is growling… —Raymond Wong May 20Should We Do a Giveaway? © Raymond Wong / Gizmodo Google I/O attendees get a special tote bag, a metal water bottle, a cap, and a cute sheet of stickers. I always end up donating this stuff to Goodwill during the holidays. A guy living in NYC with two cats only has so much room for tote bags and water bottles… Would be cool to do giveaway. Leave a comment to let us know if you’d be into that and I can pester top brass to make it happen 🤪 —Raymond Wong May 20Got My Press Badge! In 13 hours, Google will blitz everyone with Gemini AI, Gemini AI, and tons more Gemini AI. Who’s ready for… Gemini AI? —Raymond Wong May 19Google Glass: The Redux © Google / Screenshot by Gizmodo Google is very obviously inching toward the release of some kind of smart glasses product for the first time sinceGoogle Glass, and if I were a betting man, I’d say this one will have a much warmer reception than its forebearer. I’m not saying Google can snatch the crown from Meta and its Ray-Ban smart glasses right out of the gate, but if it plays its cards right, it could capitalize on the integration with its other hardwarein a big way. Meta may finally have a real competitor on its hands. ICYMI: Here’s Google’s President of the Android Ecosystem, Sameer Samat, teasing some kind of smart glasses device in a recorded demo last week. —James Pero Hi folks, I’m James Pero, Gizmodo’s new Senior Writer. 
There’s a lot we have to get to with Google I/O, so I’ll keep this introduction short. I like long walks on the beach, the wind in my nonexistent hair, and I’m really, really, looking forward to bringing you even more of the spicy, insightful, and entertaining coverage on consumer tech that Gizmodo is known for. I’m starting my tenure here out hot with Google I/O, so make sure you check back here throughout the week to get those sweet, sweet blogs and commentary from me and Gizmodo’s Senior Consumer Tech Editor Raymond Wong. —James Pero © Raymond Wong / Gizmodo Hey everyone! Raymond Wong, senior editor in charge of Gizmodo’s consumer tech team, here! Landed in San Francisco, and I’ll be making my way over to Mountain View, California, later today to pick up my press badge and scope out the scene for tomorrow’s Google I/O keynote, which kicks off at 1 p.m. ET / 10 a.m. PT. Google I/O is a developer conference, but that doesn’t mean it’s news only for engineers. While there will be a lot of nerdy stuff that will have developers hollering, what Google announces—expect updates on Gemini AI, Android, and Android XR, to name a few headliners—will shape consumer productsfor the rest of this year and also the years to come. I/O is a glimpse at Google’s technology roadmap as AI weaves itself into the way we compute at our desks and on the go. This is going to be a fun live blog! —Raymond Wong #live #updates #google
    Live Updates From Google I/O 2025 🔴
    gizmodo.com
    © Gizmodo I wish I were making this stuff up, but chaos seems to follow me at all tech events. After waiting an hour to try out Google’s hyped-up Android XR smart glasses for five minutes, I was given a three-minute demo, during which I had about 90 seconds to use Gemini in an extremely controlled environment. And if you watch the video in my hands-on write-up below, you’ll see that I spent even less time with it because Gemini fumbled a few times in the beginning. Oof. I really hope there’s another chance to try them because it was just too rushed. I think it might be the most rushed product demo I’ve ever had in my life, and I’ve been covering new gadgets for the past 15 years. —Raymond Wong Google, a company valued at $2 trillion, seemingly brought one pair of Android XR smart glasses for press to demo… and one pair of Samsung’s Project Moohan mixed reality headset running the same augmented reality platform. I’m told the wait is 1 hour to try either device for 5 minutes. Of course, I’m going to try out the smart glasses. But if I want to demo Moohan, I need to get back in line and wait all over again. This is madness! —Raymond Wong May 20: Keynote Fin © Raymond Wong / Gizmodo Talk about a loooooong keynote. Total duration: 1 hour and 55 minutes, and then Sundar Pichai walked off stage. What do you make of all the AI announcements? Let’s hang in the comments! I’m headed over to a demo area to try out a pair of Android XR smart glasses. I can’t lie, even though the video stream from the live demo lagged for a good portion, I’m hyped! It really feels like Google is finally delivering on Google Glass over a decade later. Shoulda had Google co-founder Sergey Brin jump out of a helicopter and land on stage again, though. —Raymond Wong Pieces of Project Astra, Google’s computer vision-based UI, are winding up in various products, it seems, and not all of them are geared toward smart glasses specifically. 
One of the most exciting updates to Astra is “computer control,” which lets you do a lot more on your device with computer vision alone. For instance, you could just point your phone at an object (say, a bike) and then ask Astra to search for the bike, find some brakes for it, and then even pull up a YouTube tutorial on how to fix it—all without typing anything into your phone. —James Pero Shopping bots aren’t just for scalpers anymore. Google is putting the power of automated consumerism in your hands with its new AI shopping tool. There are some pretty wild ideas here, too, including a virtual shopping avatar that’s supposed to represent your own body—the idea is you can make it try on clothes to see how they fit. How all that works in practice is TBD, but if you’re ready for a full AI shopping experience, you’ve finally got it. For the whole story, check out the write-up from Gizmodo’s Senior Editor, Consumer Tech, Raymond Wong. —James Pero I got what I wanted. Google showed off what its Android XR tech can bring to smart glasses. In a live demo, Google showcased how a pair of unspecified smart glasses did a few of the things that I’ve been waiting to do, including projecting live navigation and remembering objects in your environment—basically the stuff that it pitched with Project Astra last year, but in a glasses form factor. There’s still a lot that needs to happen, both hardware- and software-wise, before you can walk around wearing glasses that actually do all those things, but it was exciting to see that Google is making progress in that direction. It’s worth noting that not all of the demos went off smoothly—there was lots of stutter in the live translation demo—but I guess props to them for giving it a go. When we’ll actually get to walk around wearing functional smart glasses with some kind of optical passthrough or virtual display is anyone’s guess, but the race is certainly heating up. 
—James Pero Google’s SynthID has been around for nearly three years, but it’s been largely kept out of the public eye. The system marks AI-generated images, video, or audio with an invisible watermark, imperceptible to people but detectable with Google DeepMind’s proprietary tool. At I/O, Google said it was working with both Nvidia and GetReal to introduce the same watermarking technique with those companies’ AI image generators. Users may be able to detect these watermarks themselves, even if only part of the media was modified with AI. Early testers are getting access to it “today,” but hopefully more people can access it at a later date from labs.google/synthid. — Kyle Barr This keynote has been going on for 1.5 hours now. Do I run to the restroom now or wait? But how much longer until it ends??? Can we petition Sundar Pichai to make these keynotes shorter or at least have an intermission? Update: I ran for it right near the end before Android XR news hit. I almost made it… —Raymond Wong © Raymond Wong / Gizmodo Google’s new video generator, Veo, is getting a big upgrade that includes sound generation, and it’s not just dialogue. Veo 3 can also generate sound effects and music. In a demo, Google showed off an animated forest scene that includes all three—dialogue, sound effects, and music. The length of clips, I assume, will be short at first, but the results look pretty sophisticated if the demo is to be believed. —James Pero If you pay for a Google One subscription, you’ll start to see Gemini in your Google Chrome browser (and—judging by this developer conference—everywhere else) later this week. This will appear as the sparkle icon at the top of your browser app. You can use this to bring up a prompt box to ask a question about the current page you’re browsing, such as asking it to consolidate a number of user reviews for a local campsite. — Kyle Barr © Google / GIF by Gizmodo Google’s high-tech video conferencing tech, now called Beam, looks impressive. 
You can make eye contact! It feels like the person on the screen is right in front of you! It’s glasses-free 3D! Come back down to Earth, buddy—it’s not coming out as a consumer product. Commercial first with partners like HP. Time to apply for a new job? —Raymond Wong Google doesn’t want Search to be tied to your browser or apps anymore. Search Live is akin to the video and audio comprehension capabilities of Gemini Live, but with the added benefit of getting quick answers based on sites from around the web. Google showed how Search Live could comprehend queries about an at-home science experiment and bring in answers from sites like Quora or YouTube. — Kyle Barr Google is getting deep into augmented reality with Android XR—its operating system built specifically for AR glasses and VR headsets. Google showed us how users may be able to see a holographic live Google Maps view directly on their glasses or set up calendar events, all without needing to touch a single screen. This uses Gemini AI to comprehend your voice prompts and follow through on your instructions. Google doesn’t have its own device to share at I/O, but it’s planning to work with companies like XReal and Samsung to craft new devices across both AR and VR. — Kyle Barr I know how much you all love subscriptions! Google does too, apparently, and is now offering a $250 per month AI bundle that groups some of its most advanced AI services. Subscribing to Google AI Ultra will get you: Gemini and its full capabilities; Flow, a new, more advanced AI filmmaking tool based on Veo; Whisk, which allows text-to-image creation; NotebookLM, an AI note-taking app; Gemini in Gmail and Docs; Gemini in Chrome; Project Mariner, an agentic research AI; and 30TB of storage. I’m not sure who needs all of this, but maybe there are more AI superusers than I thought. —James Pero Google CEO Sundar Pichai was keen to claim that users are big, big fans of AI overviews in Google Search results. 
If there wasn’t already enough AI in your search bar, Google will now stick an entire “AI Mode” tab next to the Google Lens button. It runs on the Gemini 2.5 model. This opens up an entirely new UI for searching via a prompt with a chatbot. After you input your rambling search query, it will bring up an assortment of short-form textual answers, links, and even a Google Maps widget depending on what you were looking for. AI Mode should be available starting today. Google said AI Mode pulls together information from the web alongside its other data like weather or academic research through Google Scholar. It should also eventually encompass your “personal context,” which will be available later this summer. Eventually, Google will add more AI Mode capabilities directly to AI Overviews. — Kyle Barr May 20: News Embargo Has Lifted! © Xreal Get your butt over to Gizmodo.com’s home page because the Google I/O news embargo just lifted. We’ve got a bunch of stories, including this one about Google partnering up with Xreal for a new pair of “optical see-through” (OST) smart glasses called Project Aura. The smart glasses run Android XR and are powered by a Qualcomm chip. You can see three cameras. Wireless, these are not—you’ll need to tether to a phone or other device. Update: Little scoop: I’ve confirmed that Project Aura has a 70-degree field of view, which is way wider than the Xreal One Pro’s 57-degree FOV. —Raymond Wong © Raymond Wong / Gizmodo Google’s DeepMind CEO showed off the updated version of Project Astra running on a phone and drove home how its “personal, proactive, and powerful” AI features are the groundwork for a “universal assistant” that truly understands and works on your behalf. If you think Gemini is a fad, it’s time to get familiar with it because it’s not going anywhere. 
—Raymond Wong May 20: Gemini 2.5 Pro Is Here © Gizmodo Google says Gemini 2.5 Pro is its “most advanced model yet”: it comes with “enhanced reasoning” and better coding ability, and can even create interactive simulations. You can try it now via Google AI Studio. —James Pero There are two major types of transformer AI in use today: large language models (LLMs) and diffusion models, which are mostly used for image generation. The Gemini Diffusion model blurs the line between the two. Google said its new research model can iterate on a solution quickly and correct itself while generating an answer. For math or coding prompts, Gemini Diffusion can potentially output an entire response much faster than a typical chatbot. Unlike a traditional LLM, which may take a few seconds to answer a question, Gemini Diffusion can create a response to a complex math equation in the blink of an eye, and still share the steps it took to reach its conclusion. — Kyle Barr © Gizmodo New Gemini 2.5 Flash and Gemini Pro models are incoming and, naturally, Google says both are faster and more sophisticated across the board. One of the improvements for Gemini 2.5 Flash is even more inflection when speaking. Unfortunately for my ears, Google demoed the new Flash speaking in a whisper that sent chills down my spine. —James Pero Is anybody keeping track of how many times Google execs have said “Gemini” and “AI” so far? Oops, I think I’m already drunk, and we’re only 20 minutes in. —Raymond Wong © Raymond Wong / Gizmodo Google’s Project Astra is supposed to be getting much better at avoiding hallucinations, AKA when the AI makes stuff up. Project Astra’s vision and audio comprehension capabilities are supposed to be far better at knowing when you’re trying to trick it. In a video, Google showed how its Gemini Live AI wouldn’t buy your bullshit if you tell it that a garbage truck is a convertible, a lamp pole is a skyscraper, or your shadow is some stalker. 
This should also mean the AI doesn’t confidently lie to you. Google CEO Sundar Pichai said, “Gemini is really good at telling you when you’re wrong.” These enhanced features should be rolling out today for the Gemini app on iOS and Android. — Kyle Barr May 20: Release the Agents Like pretty much every other AI player, Google is pursuing agentic AI in a big way. I’d prepare for a lot more talk about how Gemini can take tasks off your hands as the keynote progresses. —James Pero © Gizmodo Google has finally moved Project Starline—its futuristic video-calling machine—into a commercial project called Google Beam. According to Pichai, Google Beam can take a 2D image and transform it into a 3D one, and will also incorporate live translate. —James Pero © Gizmodo Google’s CEO, Sundar Pichai, says Google is shipping at a relentless pace, and to be honest, I tend to agree. There are tons of Gemini models out there already, even though it’s only been out for two years. Probably my favorite milestone, though, is that Gemini has now completed Pokémon Blue, earning all 8 badges, according to Pichai. —James Pero May 20: Let’s Do This Buckle up, kiddos, it’s I/O time. Methinks there will be a lot to get to, so you may want to grab a snack now. —James Pero Counting down until the keynote… only a few more minutes to go. The DJ just said AI is changing music and how it’s made. But don’t forget that we’re all here… in person. Will we all be wearing Android XR smart glasses next year? Mixed reality headsets? —Raymond Wong © Raymond Wong / Gizmodo Fun fact: I haven’t attended Google I/O in person since before Covid-19. The Wi-Fi is definitely stronger and more stable now. It’s so great to be back and covering for Gizmodo. Dream job, unlocked! 
—Raymond Wong © Raymond Wong / Gizmodo Mini breakfast burritos… bagels… but these bagels can’t compare to real Made In New York City bagels with that authentic NY water 😏 —Raymond Wong © Raymond Wong / Gizmodo I’ve arrived at the Shoreline Amphitheatre in Mountain View, Calif., where the Google I/O keynote is taking place in 40 minutes. Seats are filling up. But first, must go check out the breakfast situation because my tummy is growling… —Raymond Wong May 20: Should We Do a Giveaway? © Raymond Wong / Gizmodo Google I/O attendees get a special tote bag, a metal water bottle, a cap, and a cute sheet of stickers. I always end up donating this stuff to Goodwill during the holidays. A guy living in NYC with two cats only has so much room for tote bags and water bottles… Would be cool to do a giveaway. Leave a comment to let us know if you’d be into that and I can pester top brass to make it happen 🤪 —Raymond Wong May 20: Got My Press Badge! In 13 hours, Google will blitz everyone with Gemini AI, Gemini AI, and tons more Gemini AI. Who’s ready for… Gemini AI? —Raymond Wong May 19: Google Glass: The Redux © Google / Screenshot by Gizmodo Google is very obviously inching toward the release of some kind of smart glasses product for the first time since (gulp) Google Glass, and if I were a betting man, I’d say this one will have a much warmer reception than its forebear. I’m not saying Google can snatch the crown from Meta and its Ray-Ban smart glasses right out of the gate, but if it plays its cards right, it could capitalize on the integration with its other hardware (hello, Pixel devices) in a big way. Meta may finally have a real competitor on its hands. ICYMI: Here’s Google’s President of the Android Ecosystem, Sameer Samat, teasing some kind of smart glasses device in a recorded demo last week. —James Pero Hi folks, I’m James Pero, Gizmodo’s new Senior Writer. 
There’s a lot we have to get to with Google I/O, so I’ll keep this introduction short. I like long walks on the beach, the wind in my nonexistent hair, and I’m really, really looking forward to bringing you even more of the spicy, insightful, and entertaining coverage on consumer tech that Gizmodo is known for. I’m starting my tenure here out hot with Google I/O, so make sure you check back here throughout the week to get those sweet, sweet blogs and commentary from me and Gizmodo’s Senior Consumer Tech Editor Raymond Wong. —James Pero © Raymond Wong / Gizmodo Hey everyone! Raymond Wong, senior editor in charge of Gizmodo’s consumer tech team, here! Landed in San Francisco (the sunrise was *chef’s kiss*), and I’ll be making my way over to Mountain View, California, later today to pick up my press badge and scope out the scene for tomorrow’s Google I/O keynote, which kicks off at 1 p.m. ET / 10 a.m. PT. Google I/O is a developer conference, but that doesn’t mean it’s news only for engineers. While there will be a lot of nerdy stuff that will have developers hollering, what Google announces—expect updates on Gemini AI, Android, and Android XR, to name a few headliners—will shape consumer products (hardware, software, and services) for the rest of this year and the years to come. I/O is a glimpse at Google’s technology roadmap as AI weaves itself into the way we compute at our desks and on the go. This is going to be a fun live blog! —Raymond Wong
  • Fushi Auberge / Tezuka Architects

    © Kida Katsushida, FOTOTECA
    Hotels • Akiruno, Japan

    Architects: Tezuka Architects
    Area: 349 m²
    Year: 2024
    Photographs: Kida Katsushida, FOTOTECA
    Lead Architects: Takaharu Tezuka, Yui Tezuka, Keiji Yabe
    "Fushi" is an auberge limited to one group per day, nestled deep in the Akigawa Valley on the western edge of Tokyo. The building overlooks the confluence of the Akigawa and Bonbori Rivers. To the west is Mount Joyama, a natural mountain fortress carved by clear streams, giving the site a landscape that could not have been shaped by human hands. The auberge was founded by a family that has run Kaiseki-Ryori (traditional multi-course meal) restaurants for over half a century.

    The building is primarily a timber structure, but the eave is held in place by a steel frame designed with skilled modern engineering. It is a natural form that draws on the wisdom of living in the climate and culture of the Akigawa Valley.

    A heroic roof is sensibly integrated into the site. It rises 1.5 meters from the floor, set to the eye level of the average Japanese person, and the eave spans 33 meters horizontally without pillars. It invites one to look toward the remarkably clear stream, with one-of-a-kind stone arrangements that only a million years of river erosion could produce. The scenery of the Akigawa Valley unfolds before one's eyes like a scroll painting.

    "Outdoor Living" is found at the heart of the building: an expansive outdoor deck covered by deep, low eaves, with the scroll of the Akigawa Valley landscape in front, the bamboo grove courtyard behind, and a gentle breeze flowing through the building, fully embodying the essence of the site. Hence the auberge is named "Fushi" (Figure of Wind), borrowed from Noh master Zeami Motokiyo's text "Fushikaden" (The Transmission of the Flower of Acting Style). The place can also serve as a Noh stage, with no wall or pillar to define its boundaries. It changes its "setting" with the seasons; rain and snow, too, become part of the atmosphere.

    The walls are made of over 20 thousand cedar wood strips, each 19 mm wide, aligned in parallel with a one-millimeter gap between them. The exterior walls run vertically, in the direction of the rainfall; the interior walls run horizontally, in the direction of the breeze. Not a single nail is visible on the surface.

    There are two original fusuma paintings on the two sides of a sliding screen, Dawn and Evening. Dawn is the faint glow of the moment before the sun rises, while Evening is tinted with the foreboding hues of the moment before night falls.

    Recent discussions of architecture have shifted from thinking of architecture as an "object" to an "experience." Yet the history of architecture is almost as long as the history of mankind, and it is impossible to explain the qualities of architecture through experience and theory alone. This project reveals the intangible elements of architecture, realising a balance between experiential satisfaction and physical integrity. The experience of architectural beauty and sublimity permeates the cuisine, ceramics, flowers, Noh, and gardens as a whole.
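    As a rough sense of the cladding's scale, a back-of-the-envelope calculation (an illustrative sketch, assuming the strips are laid edge to edge at a uniform pitch of one strip width plus one gap, and taking "over 20 thousand" as exactly 20,000):

    ```python
    # Estimate of the total linear run of the cedar-strip cladding described above.
    # Assumptions (not stated in the source): uniform pitch = strip width + gap,
    # and exactly 20,000 strips.
    STRIP_WIDTH_MM = 19   # stated strip width
    GAP_MM = 1            # stated gap between strips
    NUM_STRIPS = 20_000   # "over 20 thousand" strips

    pitch_m = (STRIP_WIDTH_MM + GAP_MM) / 1000   # 0.02 m per strip
    total_run_m = NUM_STRIPS * pitch_m           # linear run covered, in metres
    print(total_run_m)  # 400.0
    ```

    In other words, laid side by side the strips would cover roughly 400 meters of wall length, which gives a sense of the craftsmanship involved in fixing them with no visible nails.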

    Project location: Akiruno, Tokyo, Japan (location to be used only as a reference; it may indicate the city or country but not the exact address)
    Office: Tezuka Architects
    Material: Wood
    Published on May 22, 2025. Cite: "Fushi Auberge / Tezuka Architects" 22 May 2025. ArchDaily. <https://www.archdaily.com/1030248/fushi-tezuka-architects> ISSN 0719-8884
    Fushi Auberge / Tezuka Architects
    www.archdaily.com
  • Mastering GPU Particle Effects in Unreal Engine 5 #shorts

    Discover how to enhance your particle effects in Unreal Engine 5 by optimizing GPU settings. In this clip, we showcase techniques to reduce flickering and improve visual quality with alpha adjustments. Perfect for game developers looking to elevate their VFX skills! #UnrealEngine #GameDev #VFX #Niagara #UE5
    www.youtube.com
  • Unexpected clustering pattern in dwarf galaxies challenges formation models

    Nature, Published online: 21 May 2025; doi:10.1038/s41586-025-08965-5. Unexpected large-scale clustering of isolated, diffuse, and blue dwarf galaxies, comparable to that seen for massive galaxy groups, challenges current models of cosmology and galaxy evolution.
    www.nature.com
CGShares https://cgshares.com