  • This Historian Has Seen the Future of Trans Health Care: A Call for Change

    ## The Silent Struggles of Trans Individuals

    In the shadow of societal norms, where acceptance often feels like an unreachable dream, the journey of trans individuals is riddled with pain and longing. Each day, they navigate a world that frequently denies them their basic rights, subjecting them to the torment of invisibility and neglect. The ...
  • Seven New Gemini Features Google Announced at I/O 2025

    Google I/O's 2025 keynote could have more reasonably been called The Google AI Show. Almost everything the company talked about was AI-powered, some of which is promised to arrive in the future, and some of which is available today. Features were spread across Google's whole range of products, but here are some of the ones you're actually likely to see.
    It's tough to talk about Gemini because it simultaneously refers to a set of models (like Gemini Flash, Gemini Pro, and Gemini Pro Deep Research), different versions of those models (the latest seems to be 2.5 for most of these), and different apps that these models are available through. There's the dedicated Gemini app, the voice assistant in things like Pixel phones and watches, and Gemini tools built into apps like Google Docs, Gmail, and Search.
    I'll do my best to specify which features are coming to which products, but keep in mind that Google sometimes announces the same thing a few times.

    Agent Mode is coming to Gemini, Search, and more
    The Gemini app is getting a new Agent Mode that can perform tasks for you while you do something else. Google showed off an example of asking Gemini to find apartments in a city. The app then searches listings online, filters them by the criteria you set, and can offer to set up apartment tours for you.
    The most interesting aspect is that Google pitches this as a task you can have Gemini repeat regularly. So, for example, if you want Gemini to search for new apartments every week, the app can repeat the process, carrying over the information from previous iterations of the search. The sketch below illustrates the pattern.
    Agent Mode is similarly coming to Google Search for certain requests. Google uses the example of asking for tickets to an upcoming event: Search scours ticket listing sites, cross-references them against your preferences, and presents the results.
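    The recurring behaviour is easy to picture as code. Below is a minimal, self-contained sketch of an agent task that re-runs periodically and carries state between iterations so only new results surface each week; every function, field, and file name here is a hypothetical stand-in, not Google's implementation.

```python
# Schematic of a recurring agent task: each run re-queries listings, then
# consults state saved from previous runs so only new results are shown.
# All names below are hypothetical stand-ins for illustration.
import json
from pathlib import Path

STATE = Path("seen_listings.json")  # hypothetical persistence between runs

def search_listings(city: str) -> list[dict]:
    # Stand-in for the agent browsing listing sites on your behalf.
    return [{"id": "apt-42", "city": city, "rent": 1450},
            {"id": "apt-77", "city": city, "rent": 1600}]

def weekly_run(city: str, max_rent: int) -> list[dict]:
    seen = set(json.loads(STATE.read_text())) if STATE.exists() else set()
    # Filter by the user's criteria and drop anything shown in earlier runs.
    results = [r for r in search_listings(city)
               if r["rent"] <= max_rent and r["id"] not in seen]
    STATE.write_text(json.dumps(sorted(seen | {r["id"] for r in results})))
    return results

print(weekly_run("Leeds", max_rent=1500))  # second run would print []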
    Gmail will pretend to be you when it replies to your emails
    Gmail has had smart replies for a while, but they can sound pretty generic (without intervention, anyway). It's a dead giveaway to your recipient that you're not really paying attention. To help you get away with quietly ghosting your friends, Gmail will soon be able to tailor its responses to you by referring to your past emails and even Drive documents.
    Google uses the example of a friend asking how you planned your recent vacation, a common thing we all email each other all the time. In this case, Gmail can draft a response based on your email history, with the advice you would be likely to give, and even write it the way the AI thinks you would write it.

    Thought summaries will summarize how AI summarizes its thought process
    Yes, you read that right. AI "reasoning" models typically work by taking your query, generating text that breaks it down into smaller parts, sending those parts to the AI again, and then carrying out each step. That's a lot of instructions happening behind the scenes on your behalf. Usually, reasoning models (including Gemini) will have a little dropdown to show you the steps they took in the interim.
    If even that is too much reading for you, Gemini will now summarize the summary of the thought process. In theory, this makes it easier to understand why Gemini arrived at the answers it gives you.
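    To make the decompose-then-execute loop (and the extra summarization layer on top of it) concrete, here is a schematic sketch. Every function below is a hypothetical stand-in for a model call, not how Gemini actually works internally.

```python
# Schematic of the "reasoning model" loop described above: decompose the
# query, execute each step, keep the full trace, then compress the trace
# into the short "thought summary" most users will actually read.

def decompose(query: str) -> list[str]:
    # A real model generates its own plan; we hard-code one for clarity.
    return [f"Interpret the question: {query!r}",
            "Gather relevant facts",
            "Draft an answer and check it"]

def run_step(step: str) -> str:
    # Stand-in for sending one sub-task back to the model.
    return f"completed: {step}"

def answer(query: str) -> tuple[str, list[str], str]:
    trace = [run_step(step) for step in decompose(query)]  # dropdown view
    summary = f"Reasoned in {len(trace)} steps: plan, research, verify."
    return "final answer built from the trace", trace, summary

result, trace, summary = answer("Which Gemini model am I talking to?")
print(summary)  # the summary of the summary of the thought process
```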
    Native audio output will whisper to you (in your nightmares)
    This is technically a new feature of the Gemini API, which means developers can build on these tools in their apps. Native audio output will let developers generate natural-sounding speech. In its demo, Google showed off voices that could switch between multiple languages, which was pretty cool.
    What isn't so cool, however, is that the model can also whisper. I do not yet know what the practical use cases are for an AI-generated voice that can whisper, but I do know I won't be able to get it out of my head for a week. At best.
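    Since this ships as an API capability, an app would presumably request speech with style and language as parameters. The sketch below is purely hypothetical: the request shape, field names, and synthesize function are assumptions for illustration, not the real Gemini API surface.

```python
# Hypothetical request shape for a native-audio-output call. Nothing here
# is the real Gemini API; the dataclass fields and synthesize() stand in
# for whatever the actual SDK exposes.
from dataclasses import dataclass

@dataclass
class SpeechRequest:
    text: str
    style: str = "neutral"    # the demo implies styles such as "whisper"
    language: str = "en-US"   # demos showed switching languages mid-speech

def synthesize(req: SpeechRequest) -> bytes:
    # Stand-in for a network call to a speech-capable model.
    print(f"[{req.language}, {req.style}] {req.text}")
    return b"<audio bytes>"

audio = synthesize(SpeechRequest("Sleep well.", style="whisper"))
```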
    Jules will fix your code's bugs in the background while you work
    Last year, Google announced Jules, a coding agent that can help you with your code, similar to GitHub's Copilot. Now, the public beta of Jules is available. Google says Jules can fix bugs while you're working on other tasks, bump dependency versions, and even provide an audio summary of the changes it has made to your code.

    Google Search will let you virtually try on clothes while shopping online
    I'm not great at visualizing what a piece of clothing will look like on my particular body, so this new try-on feature might actually be useful. Google is launching a Search Labs experiment that lets you upload a full-length photo of yourself, which Google will alter to show what the clothing would look like on you.
    The company is also integrating shopping tools that can buy items for you and even track items for the best price. It will then be able to buy stuff for you via Google Pay, using your saved payment and shipping info. This one isn't available quite yet, and frankly, we'd want to learn a little more about how the process works, and how to prevent purchases you don't want, before we'd recommend using it.

    New Veo and Imagen models will generate audio and video
    Video is, definitionally, a series of images played at a fast enough speed to convey a sense of motion. By that definition, I can confidently say that the demos of Google's new Veo 3 model do, in fact, show video. Whether that video is any good is in the eye of the beholder, I suppose.
    Google seems to be betting on users finding the video generated by Veo 3 (and, by association, the images from Imagen 4) worthwhile, because the company is also building a video editing suite around it. Flow is a video editing tool that ostensibly lets editors extend and re-generate clips to get the right look.
    Google also says that Veo 3 can generate sounds to go along with its video. For example, in the owl scene linked above, Veo also generates forest sound effects. We'll have to see how it generates these elements (can you edit individual sounds distinctly, for example?), but for now the demos speak for themselves. Veo 3 is now available in the Gemini app for Ultra subscribers.
    LIFEHACKER.COM
  • Key talking points from UKREiiF 2025

    Scene at UKREiiF 2025 outside the Canary bar
    UKREiiF is getting bigger by the year, with more than 16,000 professionals attending the 2025 construction conference in Leeds this week for three days of sunny weather, networking, panel discussions and robust amounts of booze. It has grown so big over the past few years that it seems almost to have outgrown the city of Leeds itself.
    A running joke among attendees was the varying quality of accommodation people had managed to secure. All of the budget hotels in the city were fully booked months in advance of the conference, with many - including at least one member of Parliament - reduced to kipping in bed and breakfasts of a questionable nature. Many were forced to stay in nearby towns including York, Wakefield and Bradford and catch the train to the conference each morning.
    But these snags served as ice breakers for more important conversations at an event which has come at a key pivot point for the industry. With the government on the brink of launching its 10-year industrial strategy and its new towns programme, opportunity was in the air.
    Networking events between government departments and potential suppliers from all sectors were well attended, although many discussion panels focused on the question of how all of this work would be paid for. And hanging over the conference like a storm cloud were the mounting issues at the Building Safety Regulator, which are continuing to cause expensive delays to high-rise schemes across the country.
    While many attendees eyed a huge amount of potential work to fill up pipelines, it was clear the industry is still facing some systemic challenges which could threaten a much-needed recovery following a long period of turmoil.

    How will the issues at the Building Safety Regulator be fixed?
    You did not even have to go inside an event titled “Gateways and Growing Pains: Tackling the Building Safety Act” to see how much this issue is affecting construction at the moment. The packed-out tent was overflowing into the space outside, with those inside standing like sardines to watch a panel discussion about what has been happening in the high-rise residential sector over the past year.
    Audience members shared their horror stories of schemes which have been waiting for the best part of a year to get gateway 2 approval from the regulator, which is needed to start construction. There was a palpable sense of anger in the crowd, one professional describing the hold-ups which had affected his scheme as a “disgrace”.
    Others highlighted the apparent inconsistency of the regulator’s work. One attendee told how two identical buildings had been submitted to the regulator in separate gateway 2 applications and assigned to two separate technical teams for approval. One application had received no follow up questions, while the other had been extensively interrogated. “The industry should hold its head in shame with regard to what happened at Grenfell, but post that, it’s just complete disarray,” he said.

    More than 16,000 professionals attended the 2025 event
    While many are currently focusing on delays at pre-construction, others raised the looming gateway 3 approvals which are needed before occupation. Pareto Projects director Kuli Bajwa said: “Gateway 2 is an issue, but when we get to gateway 3, we’re committed to this project, money’s been spent, debt’s been taken out and week on week it’s costing money. It just keeps racking up, so we need to resolve that with the regulator asap.”
    >> See also: Homes England boss calls on government to fix ‘unacceptably slow’ gateway 2 approvals
    Caddick Construction managing director for Yorkshire and the North East Steve Ford added: “I think where it will probably get interesting and quite heated I guess is at the point where some of these schemes get rejected at gateway 3, and the finger pointing starts as to why it’s not got through gateway 3.”
    Simon Latson, head of living for the UK and Ireland at JLL, offered a potential solution. “We will be dealing with the regulator all the way through the construction process, and you would like to think that there is a collaborative process where you get early engagement and you can say ‘I’m 12 weeks out from completion, I’m going to start sending you all of my completion documents, my fire alarm certificate’, and say ‘thanks very much that’s the last thing on my list’. That’s probably wishful thinking but that’s got to be a practical solution, as early engagement as possible.”

    How is the government going to pay for its infrastructure strategy?
    Ministers are expected to publish the government’s ten-year infrastructure strategy next month, outlining ambitions not only for transport but also for social infrastructure, including schools and healthcare. At an event titled “A Decade of National Renewal: What Will This Mean for our Regions, Towns and Cities?”, a panel of experts including London deputy mayor Jules Pipe highlighted how much of this new infrastructure is needed to enable the government to achieve its housing targets. But how will it be funded?
    Tom Wagner, co-founder of investment firm Knighthead Capital, which operates largely in the West Midlands with assets including Birmingham City FC, gave a frank assessment of the government’s policies on attracting private sector investment. “There have been a lot of policies in the UK that have forced capital allocators to go elsewhere,” he said, calling for lower taxes and fewer restrictions on private finance in order to stop investors fleeing to more amenable destinations overseas.
    “What we’ve found in the UK is, as we’re seeking to tax those who can most afford it, that’s fine, but unless they’re chained here, they’ll just go somewhere else. That creates a bad dynamic because those people are the capital providers, and right now what we need is capital infusion to foster growth.”

    The main square at the centre of the conference
    Pipe offered a counterpoint, suggesting low taxes are not the only factor determining where wealthy people live and highlighting the appeal of cities made livable by good infrastructure. “There are people living in some very expensive cities but they live there because of the cosmopolitan culture and the parks and the general vibe, and that’s what we have to get right. And the key thing that leads to that is good transport, making it livable.”
    Pipe also criticised the penny-pinching tendencies of past governments on infrastructure investment, including on major transport schemes like Crossrail 2 which were mothballed due to a lack of funds and a perceived lack of value added. “All these things were fought in the trenches with the Treasury about ‘oh well there’s no cost benefit to this’. And where is the major transport like that where after ten years people are saying ‘no one’s using it, that was a really bad idea, it’s never opened up any new businesses or new homes’? It’s absolute nonsense. But that seems to be how we judge it,” he said.
    One solution could be funding through business rates, an approach used on the Northern Line Extension to Battersea Power Station. But the benefits of this have been largely overlooked, Pipe said. “One scheme every ten or twenty years is not good enough. We need to do this more frequently”.

    What is the latest on the government’s new towns programme?
    Where are the new towns going to be built? It was the question everybody was asking during the conference, with rumours circulating around potential sites in Cambridge or Plymouth. The government is set to reveal the first 12 locations, of 10,000 homes each, in July, an announcement which will inevitably unleash an onslaught of NIMBY outcries from affected communities.
    A large crowd gathered for an “exclusive update” on the programme from Michael Lyons, chair of the New Towns Taskforce appointed by the government to recommend suitable sites, with many in attendance hoping for a big reveal on the first sites. They were disappointed, but Lyons did provide some interesting insights into the taskforce’s work. Despite a “rather harebrained” timescale given to the team, which was only established last September, Lyons said it was at a “very advanced stage” in its deliberations after spending the past few months touring the country speaking to developers, landowners and residents in search of potential sites.
    >> See also: Don’t scrimp on quality standards for new towns, taskforce chair tells housebuilders
    “We stand at a crucial moment in the history of home building in this country,” he said. The government’s commitment to so many large-scale developments could herald a return to ambitious spatial planning, he said, with communities strategically located close to the most practical locations for the supply of new infrastructure needed for people to move in.

    A line of tents at the docks site, including the London Pavilion
    “Infrastructure constraints, whether it’s water or power, sewage or transport, must no longer be allowed to hold back growth, and we’ve been shocked as we looked around the country at the extent to which plans ready to be advanced are held back by those infrastructure problems,” he said. The first sites will be in places where much of this infrastructure is already in place, he said, allowing work to start immediately. 
    An emphasis on “identity and legibility” is also part of the criteria for the initial locations, with the government’s design and construction partners required to put placemaking at the heart of their schemes. “We need to be confident that these can be distinctive places, and that the title of new town, whether it’s an urban extension or whether it’s even a reshaping of an existing urban area or a genuine greenfield site, that it genuinely can be seen and will be seen by its residents as a distinct community.”

    How do you manage a working public-private partnership?
    Successful partnerships between the public sector and private housebuilders will be essential if the government is to achieve its target of building 1.5 million homes by the end of this parliament in 2029. At an event hosted by Muse, a panel discussed where past partnerships have gone wrong and what lessons have been learned.
    Mark Bradbury, Thurrock council’s chief officer for strategic growth partnerships and special projects, spoke of the series of events which led to L&Q pulling out of the 2,800-home Purfleet-on-Thames scheme in Essex and its replacement by housing association Swan.
    “I think it was partly the complex nature of the procurement process that led to market conditions being quite different at the end of the process to the start,” he said.
    “Some of the original partners pulled out halfway through because their business model changed. I think the early conversations at Purfleet on Thames around the masterplan devised by Will Alsop, the potential for L&Q to be one of the partners, the potential for a development manager, the potential for some overseas investment, ended up with L&Q deciding it wasn’t for their business model going forwards. The money from the far east never materialised, so we ended up with somebody who didn’t have the track record, and there was nobody who had working capital. 
    “By then it was clear that the former partnership wasn’t right, so trying to persuade someone to join a partnership which wasn’t working was really difficult. So you’ve got to be really clear at the outset that this is a partnership which is going to work, you know where the working capital is coming from, and everybody’s got a track record.”
    Muse development director for residential Duncan Cumberland outlined a three-part “accelerated procurement process” which the developer has been exploring to avoid some of the setbacks that can hit large public-private partnerships on housing schemes. The first part is developing a masterplan vision which has the support of community stakeholders, the second is outlining a “realistic and honest” business plan which accommodates viability challenges, and the third is working closely with public sector officials on a strong business case.
    A good partnership is almost like being in a marriage, Avison Young’s London co-managing director Kat Hanna added. “It’s hard to just walk away. We’re in it now, so we need to make it work, and perhaps being in a partnership can often be more revealing in tough times.”
    #key #talking #points #ukreiif
    Key talking points from UKREiiF 2025
    Scene at UKREiiF 2025 outside the Canary bar UKREiiF is getting bigger by the year, with more than 16,000 professionals attending the 2025 construction conference in Leeds this week during three days of sunny weather, networking, panel discussions and robust amounts of booze. It has grown so big over the past few years that it seems almost to have outgrown the city of Leeds itself. A running joke among attendees was the varying quality of accommodation people had managed to secure. All of the budget hotels in the city were fully booked months in advance of the conference, with many - including at least one member of Parliament - reduced to kipping in bed and breakfasts of a questionable nature. Many were forced to stay in nearby towns including York, Wakefield and Bradford and catch the train to the conference each morning. But these snags served as ice breakers for more important conversations at an event which has come at a key pivot point for the industry. With the government on the brink of launching its 10-year industrial strategy and its new towns programme, opportunity was in the air. Networking events between government departments and potential suppliers of all sectors were well attended, although many discussion panels focused on the question of how all of this work would be paid for. And hanging over the conference like a storm cloud were the mounting issues at the Building Safety Regulator which are continuing to cause expensive delays to high rise schemes across the country. While many attendees eyed a huge amount of potential work to fill up pipelines, it was clear the industry is still facing some systemic challenges which could threaten a much-needed recovery following a long period of turmoil. How will the issues at the Building Safety Regulator be fixed? You did not even have to go inside an event titled “Gateways and Growing Pains: Tackling the Building Safety Act” to see how much this issue is affecting construction at the moment. The packed out tent was overflowing into the space outside, with those inside stood like sardines to watch a panel discussion about what has been happening in the high rise residential sector over the past year.  Audience members shared their horror stories of schemes which have been waiting for the best part of a year to get gateway 2 approval from the regulator, which is needed to start construction. There was a palpable sense of anger in the crowd, one professional describing the hold-ups which had affected his scheme as a “disgrace”. Others highlighted the apparent inconsistency of the regulator’s work. One attendee told how two identical buildings had been submitted to the regulator in separate gateway 2 applications and assigned to two separate technical teams for approval. One application had received no follow up questions, while the other had been extensively interrogated. “The industry should hold its head in shame with regard to what happened at Grenfell, but post that, it’s just complete disarray,” he said. More than 16,000 professionals attended the 2025 event While many are currently focusing on delays at pre-construction, others raised the looming gateway 3 approvals which are needed before occupation. Pareto Projects director Kuli Bajwa said: “Gateway 2 is an issue, but when we get to gateway 3, we’re committed to this project, money’s been spent, debt’s been taken out and week on week it’s costing money. 
It just keeps wracking up, so we need to resolve that with the regulator asap.” >> See also: Homes England boss calls on government to fix ‘unacceptably slow’ gateway 2 approvals Caddick Construction managing director for Yorkshire and the North East Steve Ford added: “I think where it will probably get interesting and quite heated I guess is at the point where some of these schemes get rejected at gateway 3, and the finger pointing starts as to why it’s not got through gateway 3.” Simon Latson, head of living for the UK and Ireland at JLL, offered a potential solution. “We will be dealing with the regulator all the way through the construction process, and you would like to think that there is a collaborative process where you get early engagement and you can say ‘I’m 12 weeks out from completion, I’m going to start sending you all of my completion documents, my fire alarm certificate’, and say ‘thanks very much that’s the last thing on my list’. That’s probably wishful thinking but that’s got to be a practical solution, as early engagement as possible.” How is the government going to pay for its infrastructure strategy? Ministers are expected to outline the government’s ten-year infrastructure strategy next month, outlining ambitions not only for transport but social infrastructure including schools and healthcare. At an event titled “A Decade of National Renewal: What Will This Mean for our Regions, Towns and Cities?”, a panel of experts including London deputy mayor Jules Pipe highlighted how much of this new infrastructure is needed to enable the government to achieve its housing targets. But how will it be funded? Tom Wagner, cofounder of investment firm Knighthead Capital, which operates largely in the West Midlands with assets including Birmingham City FC, gave a frank assessment of the government’s policies on attracting private sector investment. “There have been a lot of policies in the UK that have forced capital allocators to go elsewhere,” he said, calling for lower taxes and less restrictions on private finance in order to stop investors fleeing to more amenable destinations overseas.  “What we’ve found in the UK is, as we’re seeking to tax those who can most afford it, that’s fine, but unless they’re chained here, they’ll just go somewhere else. That creates a bad dynamic because those people are the capital providers, and right now what we need is capital infusion to foster growth.” The main square at the centre of the conference Pipe offered a counterpoint, suggesting low taxes were not the only reason which determines where wealthy people live and highlighted the appeal of cities which had been made livable by good infrastructure. “There are people living in some very expensive cities but they live there because of the cosmopolitan culture and the parks and the general vibe, and that’s what we have to get right. And the key thing that leads to that is good transport, making it livable.” Pipe also criticised the penny-pinching tendencies of past governments on infrastructure investment, including on major transports schemes like Crossrail 2 which were mothballed due to a lack of funds and a perceived lack of value added. “All these things were fought in the trenches with the Treasury about ‘oh well there’s no cost benefit to this’. And where is the major transport like that where after ten years people are saying ‘no one’s using it, that was a really bad idea, it’s never opened up any new businesses or new homes’? It’s absolute nonsense. 
But that seems to be how we judge it,” he said. One solution could be funding through business rates, an approach used on the Northern Line Extension to Battersea Power Station. But the benefits of this have been largely overlooked, Pipe said. “One scheme every ten or twenty years is not good enough. We need to do this more frequently”. What is the latest on the government’s new towns programme? Where are the new towns going to be built? It was a question which everybody was asking during the conference, with rumours circulating around potential sites in Cambridge of Plymouth. The government is set to reveal the first 12 locations of 10,000 homes each in July, an announcement which will inevitably unleash an onslaught of NIMBY outcries from affected communities. A large crowd gathered for an “exclusive update” on the programme from Michael Lyons, chair of the New Towns Taskforce appointed by the government to recommend suitable sites, with many in attendance hoping for a big reveal on the first sites. They were disappointed, but Lyons did provide some interesting insights into the taskforce’s work. Despite a “rather hairbrained” timescale given to the team, which was only established last September, Lyons said it was at a “very advanced stage” in its deliberations after spending the past few months touring the country speaking to developers, landowners and residents in search of potential sites. >> See also: Don’t scrimp on quality standards for new towns, taskforce chair tells housebuilders “We stand at a crucial moment in the history of home building in this country,” he said. The government’s commitment to so many large-scale developments could herald a return to ambitious spatial planning, he said, with communities strategically located close to the most practical locations for the supply of new infrastructure needed for people to move in. A line of tents at the docks site, including the London Pavilion “Infrastructure constraints, whether it’s water or power, sewage or transport, must no longer be allowed to hold back growth, and we’ve been shocked as we looked around the country at the extent to which plans ready to be advanced are held back by those infrastructure problems,” he said. The first sites will be in places where much of this infrastructure is already in place, he said, allowing work to start immediately.  An emphasis on “identity and legibility” is also part of the criteria for the initial locations, with the government’s design and construction partners to be required to put placemaking at the heart of their schemes. “ We need to be confident that these can be distinctive places, and that the title of new town, whether it’s an urban extension or whether it’s even a reshaping of an existing urban area or a genuine greenfield site, that it genuinely can be seen and will be seen by its residents as a distinct community.” How do you manage a working public-private partnership? Successful public partnerships between the public sector and private housebuilders will be essential for the government to achieve its target to build 1.5 million homes by the end of this parliament in 2029. At an event hosted by Muse, a panel discussed where past partnerships have gone wrong and what lessons have been learned. Mark Bradbury, Thurrock council’s chief officer for strategic growth partnerships and special projects, spoke of the series of events which led to L&Q pulling out of the 2,800-home Purfleet-on-Thames scheme in Essex and its replacement by housing association Swan. 
“I think it was partly the complex nature of the procurement process that led to market conditions being quite different at the end of the process to the start,” he said. “Some of the original partners pulled out halfway through because their business model changed. I think the early conversations at Purfleet on Thames around the masterplan devised by Will Alsop, the potential for L&Q to be one of the partners, the potential for a development manager, the potential for some overseas investment, ended up with L&Q deciding it wasn’t for their business model going forwards. The money from the far east never materialised, so we ended up with somebody who didn’t have the track record, and there was nobody who had working capital.  “By then it was clear that the former partnership wasn’t right, so trying to persuade someone to join a partnership which wasn’t working was really difficult. So you’ve got to be really clear at the outset that this is a partnership which is going to work, you know where the working capital is coming from, and everybody’s got a track record.” Muse development director for residential Duncan Cumberland outlined a three-part “accelerated procurement process” which the developer has been looking at in order to avoid some of the setbacks which can hit large public private partnerships on housing schemes. The first part is developing a masterplan vision which has the support of community stakeholders, the second is outlining a “realistic and honest” business plan which accommodates viability challenges, and the third is working closely with public sector officials on a strong business case. A good partnership is almost like being in a marriage, Avison Young’s London co-managing director Kat Hanna added. “It’s hard to just walk away. We’re in it now, so we need to make it work, and perhaps being in a partnership can often be more revealing in tough times.” #key #talking #points #ukreiif
    WWW.BDONLINE.CO.UK
    Key talking points from UKREiiF 2025
    Scene at UKREiiF 2025 outside the Canary bar UKREiiF is getting bigger by the year, with more than 16,000 professionals attending the 2025 construction conference in Leeds this week during three days of sunny weather, networking, panel discussions and robust amounts of booze. It has grown so big over the past few years that it seems almost to have outgrown the city of Leeds itself. A running joke among attendees was the varying quality of accommodation people had managed to secure. All of the budget hotels in the city were fully booked months in advance of the conference, with many - including at least one member of Parliament - reduced to kipping in bed and breakfasts of a questionable nature. Many were forced to stay in nearby towns including York, Wakefield and Bradford and catch the train to the conference each morning. But these snags served as ice breakers for more important conversations at an event which has come at a key pivot point for the industry. With the government on the brink of launching its 10-year industrial strategy and its new towns programme, opportunity was in the air. Networking events between government departments and potential suppliers of all sectors were well attended, although many discussion panels focused on the question of how all of this work would be paid for. And hanging over the conference like a storm cloud were the mounting issues at the Building Safety Regulator which are continuing to cause expensive delays to high rise schemes across the country. While many attendees eyed a huge amount of potential work to fill up pipelines, it was clear the industry is still facing some systemic challenges which could threaten a much-needed recovery following a long period of turmoil. How will the issues at the Building Safety Regulator be fixed? You did not even have to go inside an event titled “Gateways and Growing Pains: Tackling the Building Safety Act” to see how much this issue is affecting construction at the moment. The packed out tent was overflowing into the space outside, with those inside stood like sardines to watch a panel discussion about what has been happening in the high rise residential sector over the past year.  Audience members shared their horror stories of schemes which have been waiting for the best part of a year to get gateway 2 approval from the regulator, which is needed to start construction. There was a palpable sense of anger in the crowd, one professional describing the hold-ups which had affected his scheme as a “disgrace”. Others highlighted the apparent inconsistency of the regulator’s work. One attendee told how two identical buildings had been submitted to the regulator in separate gateway 2 applications and assigned to two separate technical teams for approval. One application had received no follow up questions, while the other had been extensively interrogated. “The industry should hold its head in shame with regard to what happened at Grenfell, but post that, it’s just complete disarray,” he said. More than 16,000 professionals attended the 2025 event While many are currently focusing on delays at pre-construction, others raised the looming gateway 3 approvals which are needed before occupation. Pareto Projects director Kuli Bajwa said: “Gateway 2 is an issue, but when we get to gateway 3, we’re committed to this project, money’s been spent, debt’s been taken out and week on week it’s costing money. 
It just keeps wracking up, so we need to resolve that with the regulator asap.” >> See also: Homes England boss calls on government to fix ‘unacceptably slow’ gateway 2 approvals Caddick Construction managing director for Yorkshire and the North East Steve Ford added: “I think where it will probably get interesting and quite heated I guess is at the point where some of these schemes get rejected at gateway 3, and the finger pointing starts as to why it’s not got through gateway 3.” Simon Latson, head of living for the UK and Ireland at JLL, offered a potential solution. “We will be dealing with the regulator all the way through the construction process, and you would like to think that there is a collaborative process where you get early engagement and you can say ‘I’m 12 weeks out from completion, I’m going to start sending you all of my completion documents, my fire alarm certificate’, and say ‘thanks very much that’s the last thing on my list’. That’s probably wishful thinking but that’s got to be a practical solution, as early engagement as possible.” How is the government going to pay for its infrastructure strategy? Ministers are expected to outline the government’s ten-year infrastructure strategy next month, outlining ambitions not only for transport but social infrastructure including schools and healthcare. At an event titled “A Decade of National Renewal: What Will This Mean for our Regions, Towns and Cities?”, a panel of experts including London deputy mayor Jules Pipe highlighted how much of this new infrastructure is needed to enable the government to achieve its housing targets. But how will it be funded? Tom Wagner, cofounder of investment firm Knighthead Capital, which operates largely in the West Midlands with assets including Birmingham City FC, gave a frank assessment of the government’s policies on attracting private sector investment. “There have been a lot of policies in the UK that have forced capital allocators to go elsewhere,” he said, calling for lower taxes and less restrictions on private finance in order to stop investors fleeing to more amenable destinations overseas.  “What we’ve found in the UK is, as we’re seeking to tax those who can most afford it, that’s fine, but unless they’re chained here, they’ll just go somewhere else. That creates a bad dynamic because those people are the capital providers, and right now what we need is capital infusion to foster growth.” The main square at the centre of the conference Pipe offered a counterpoint, suggesting low taxes were not the only reason which determines where wealthy people live and highlighted the appeal of cities which had been made livable by good infrastructure. “There are people living in some very expensive cities but they live there because of the cosmopolitan culture and the parks and the general vibe, and that’s what we have to get right. And the key thing that leads to that is good transport, making it livable.” Pipe also criticised the penny-pinching tendencies of past governments on infrastructure investment, including on major transports schemes like Crossrail 2 which were mothballed due to a lack of funds and a perceived lack of value added. “All these things were fought in the trenches with the Treasury about ‘oh well there’s no cost benefit to this’. And where is the major transport like that where after ten years people are saying ‘no one’s using it, that was a really bad idea, it’s never opened up any new businesses or new homes’? It’s absolute nonsense. 
But that seems to be how we judge it,” he said. One solution could be funding through business rates, an approach used on the Northern Line Extension to Battersea Power Station. But the benefits of this have been largely overlooked, Pipe said. “One scheme every ten or twenty years is not good enough. We need to do this more frequently”. What is the latest on the government’s new towns programme? Where are the new towns going to be built? It was a question which everybody was asking during the conference, with rumours circulating around potential sites in Cambridge of Plymouth. The government is set to reveal the first 12 locations of 10,000 homes each in July, an announcement which will inevitably unleash an onslaught of NIMBY outcries from affected communities. A large crowd gathered for an “exclusive update” on the programme from Michael Lyons, chair of the New Towns Taskforce appointed by the government to recommend suitable sites, with many in attendance hoping for a big reveal on the first sites. They were disappointed, but Lyons did provide some interesting insights into the taskforce’s work. Despite a “rather hairbrained” timescale given to the team, which was only established last September, Lyons said it was at a “very advanced stage” in its deliberations after spending the past few months touring the country speaking to developers, landowners and residents in search of potential sites. >> See also: Don’t scrimp on quality standards for new towns, taskforce chair tells housebuilders “We stand at a crucial moment in the history of home building in this country,” he said. The government’s commitment to so many large-scale developments could herald a return to ambitious spatial planning, he said, with communities strategically located close to the most practical locations for the supply of new infrastructure needed for people to move in. A line of tents at the docks site, including the London Pavilion “Infrastructure constraints, whether it’s water or power, sewage or transport, must no longer be allowed to hold back growth, and we’ve been shocked as we looked around the country at the extent to which plans ready to be advanced are held back by those infrastructure problems,” he said. The first sites will be in places where much of this infrastructure is already in place, he said, allowing work to start immediately.  An emphasis on “identity and legibility” is also part of the criteria for the initial locations, with the government’s design and construction partners to be required to put placemaking at the heart of their schemes. “ We need to be confident that these can be distinctive places, and that the title of new town, whether it’s an urban extension or whether it’s even a reshaping of an existing urban area or a genuine greenfield site, that it genuinely can be seen and will be seen by its residents as a distinct community.” How do you manage a working public-private partnership? Successful public partnerships between the public sector and private housebuilders will be essential for the government to achieve its target to build 1.5 million homes by the end of this parliament in 2029. At an event hosted by Muse, a panel discussed where past partnerships have gone wrong and what lessons have been learned. Mark Bradbury, Thurrock council’s chief officer for strategic growth partnerships and special projects, spoke of the series of events which led to L&Q pulling out of the 2,800-home Purfleet-on-Thames scheme in Essex and its replacement by housing association Swan. 
“I think it was partly the complex nature of the procurement process that led to market conditions being quite different at the end of the process to the start,” he said. “Some of the original partners pulled out halfway through because their business model changed. I think the early conversations at Purfleet on Thames around the masterplan devised by Will Alsop, the potential for L&Q to be one of the partners, the potential for a development manager, the potential for some overseas investment, ended up with L&Q deciding it wasn’t for their business model going forwards. The money from the far east never materialised, so we ended up with somebody who didn’t have the track record, and there was nobody who had working capital.  “By then it was clear that the former partnership wasn’t right, so trying to persuade someone to join a partnership which wasn’t working was really difficult. So you’ve got to be really clear at the outset that this is a partnership which is going to work, you know where the working capital is coming from, and everybody’s got a track record.” Muse development director for residential Duncan Cumberland outlined a three-part “accelerated procurement process” which the developer has been looking at in order to avoid some of the setbacks which can hit large public private partnerships on housing schemes. The first part is developing a masterplan vision which has the support of community stakeholders, the second is outlining a “realistic and honest” business plan which accommodates viability challenges, and the third is working closely with public sector officials on a strong business case. A good partnership is almost like being in a marriage, Avison Young’s London co-managing director Kat Hanna added. “It’s hard to just walk away. We’re in it now, so we need to make it work, and perhaps being in a partnership can often be more revealing in tough times.”
  • The best free AI courses and certificates in 2025 - and I've tried many

Artur Debat/Getty Images

Generative AI is an astonishing technology that's not only here to stay but promises to impact all sectors of work and business. It's already made unprecedented inroads into our daily lives. We all have a lot to learn about it. Spewing out a few prompts to ChatGPT may be easy, but before you can turn all these new capabilities into productive tools, you need to grow your skills. Fortunately, there is a wide range of classes that can help.

Also: I let Google's Jules AI agent into my code repo and it did four hours of work in an instant

Many companies and schools will try to sell you on their AI education programs. But as I'll show in the following compendium of great resources, you can learn a ton about AI and even get some certifications -- all for free. I have taken at least one class from each of the providers below, and they've all been pretty good. Obviously, some teachers are more compelling than others, but it's been a very helpful process. When working on AI projects for ZDNET, I've also sometimes gone back and taken other classes to shore up my knowledge and understanding. So, I recommend you take a quick spin through my short reviews, possibly dig deeper into the linked articles, and bookmark all of these, because they're valuable resources. Let's get started.

LinkedIn Learning

Course selection: Huge, more than 1,500 (!) AI courses
Program pricing: Free trial, then $39.99/mo

LinkedIn Learning is one of the oldest online learning platforms, established in 1995 as Lynda.com. The company offers an enormous library of courses on a broad range of topics. There is a monthly fee, but many companies and schools have accounts for all their employees and students.

Also: Want a top engineering job in 2025? Here are the skills you need, according to LinkedIn

LinkedIn Learning is probably the one online education site I've used more than any other -- starting back in the late 1990s. For years, I paid for a membership. Then, I got a membership as an alum of my grad school, which is how I use it now. With so many courses on so many topics, it's a great go-to learning resource. I took two classes on LinkedIn Learning. Here's my testimonial on one of them. I also took the two-hour Machine Learning with Python: Foundations course, which had a great instructor -- Prof. Frederick Nwanganga -- who was previously unknown to me. I have to hand it to LinkedIn. They choose people who know how to teach.

I learned a lot in this course, especially about how to collect and prepare data for machine learning. I also was able to stretch my Python programming knowledge, specifically about how a machine learning model can be built in Python. In just two hours, I felt like I got a friendly and comprehensive brain dump. You can read more here: How LinkedIn's free AI course made me a better Python developer.

Since there are so many AI courses, you're bound to find a helpful series. To get you started, I've picked three that might open some doors:

- ChatGPT Tips for the Help Desk: Learn to apply strategic planning, prompt engineering, and agent scripting, as well as other AI techniques, to AI operations.
- Machine Learning with Python: Foundations: Get step-by-step guidance on how to get started with machine learning via Python.
- Building Career Agility and Resilience in the Age of AI: Learn how to reimagine your career to adapt and find success in the age of AI.

It's worth checking with your employer, agency, or school to see if you qualify for a free membership.
Otherwise, you can pay by month or year (the by-year option is about half price). A company representative told ZDNET, "LinkedIn Learning has awarded nearly 500K professional certificates over the past 2.5 years. And, generative AI is one of the top topics represented."
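To give a flavor of what a course like Machine Learning with Python: Foundations covers, here is a minimal sketch of the classic prepare/train/evaluate loop in scikit-learn. The dataset, model, and parameter choices are my own illustration, not taken from the course itself.

```python
# pip install scikit-learn -- a minimal sketch of the train/evaluate workflow;
# the dataset and model here are illustrative choices, not the course's own.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Load a small built-in dataset
X, y = load_iris(return_X_y=True)

# Prepare the data: hold out a test set so the evaluation is honest
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Fit a simple, interpretable model
model = DecisionTreeClassifier(max_depth=3)
model.fit(X_train, y_train)

# Evaluate on data the model has never seen
print(f"accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```

The whole collect-prepare-train-evaluate cycle fits in a dozen lines, which is roughly why a two-hour course can cover it end to end.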
View now at LinkedIn Learning

Microsoft Skills Fest

Course selection: 93
Program pricing: Free during the Skills Fest, mostly free after

Microsoft earned itself a Guinness World Record for its online training session called Skills Fest, which ran in April and May of 2025. This was a mixed combination of live and on-demand courses that anyone could take for free. The only cost was giving up your email address and registering with Microsoft.

Also: You can get free AI skills training from Microsoft for a few more days, and I recommend you do

Here are three courses I took. The Minecraft one was adorable, and I recommend it for a kids' intro to generative AI.

- AI Adventurers: A Minecraft Education presentation about the basics of generative AI
- Building applications with GitHub Copilot agent mode
- AI for Organizational Leaders

Not all the courses are available on demand. After Skills Fest ends, you should be able to get to the course catalog by visiting this link. There's a Filters block on the left. Click On-Demand and then Apply Filters. You should see a bunch of courses still available for you to enjoy.
View now at Microsoft Skills Fest

Amazon AWS

Course selection: Quite a lot
Program pricing: Many free, some on a paid subscription

Amazon puts the demand in infrastructure on demand. Rather than building out their own infrastructure, many companies now rely on Amazon to provide scalable cloud infrastructure on demand. Nearly every aspect of IT technology is available for rent from Amazon's wide range of web services. This also includes a fairly large spectrum of individual AI services, from computer vision to human-sounding speech to Bedrock, which "makes LLMs from Amazon and leading AI startups available through an API."

Also: I spent a weekend with Amazon's free AI courses, and highly recommend you do too

Amazon also offers a wide range of training courses for all these services. Some of them are available for free, while others are available via a paid subscription. Here are three of the free courses you can try out:

- Foundations of Prompt Engineering: Learn about the principles, techniques, and best practices for designing effective prompts.
- Amazon Bedrock -- Getting Started: Learn about Amazon's service for building generative AI applications.
- Twitch Series: AWS Power Hour Introduction to Machine Learning for Developers: This is a recording of a Twitch-based learning chat series. It helps you learn the foundations of machine learning and get a practical perspective on what developers really need to know to get started with machine learning.

In addition to classes located on Amazon's sites, the company also has quite a few classes on YouTube. I spent a fun and interesting weekend gobbling up the Generative AI Foundations series, which is an entire playlist of cool stuff to learn about AI. If you're using or even just considering AWS-based services, these courses are well worth your time.
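If you're curious what "available through an API" looks like in practice, here is a minimal sketch of calling a Bedrock-hosted model via boto3's Converse API. The region, model ID, and prompt are illustrative assumptions, and it only runs if your AWS account has been granted access to that model in the Bedrock console.

```python
# pip install boto3 -- assumes AWS credentials with Bedrock model access
import boto3

# Region and model ID are assumptions for illustration; use any model
# your account has enabled in the Bedrock console.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[
        {"role": "user", "content": [{"text": "In one sentence, what is prompt engineering?"}]}
    ],
    inferenceConfig={"maxTokens": 200},
)

# The Converse API returns the same response shape regardless of which
# underlying model you picked
print(response["output"]["message"]["content"][0]["text"])
```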
View now at Amazon

IBM SkillsBuild

Course selection: Fairly broad IT and career building
Program pricing: Free

IBM, of course, is IBM. It led the AI pack for years with its Watson offerings. Its generative AI solution is called Watsonx. It focuses on enabling businesses to deploy and manage both traditional machine learning and generative AI, tailored to their unique needs.

Also: Have 10 hours? IBM will train you in AI fundamentals - for free

The company's SkillsBuild Learning classes offer a lot, providing basic training for a few key IT job descriptions -- including cybersecurity specialist, data analyst, user experience designer, and more. Right now, there's only one free AI credential, but it's one that excited a lot of our readers. That's the AI Fundamentals learning credential, which offers six courses. You need to be logged in to follow the link. But registration is easy and free. When you're done, you get an official credential, which you can list on LinkedIn. After I took the course, I did just that -- and, of course, I documented it for you.

My favorite was the AI Ethics class, which is an hour and 45 minutes. Through real-world examples, you'll learn about AI ethics, how they are implemented, and why AI ethics are so important in building trustworthy AI systems.
View now at IBM SkillsBuild

DeepLearning.AI

Course selection: Nearly 90 AI-focused courses
Program pricing: Free

DeepLearning.AI is an education-focused company specializing in AI training. The company is constantly adding new courses that provide training, mostly for developers, in many different facets of AI technology. It partnered with OpenAI (the makers of ChatGPT) to create a number of pretty great courses.

I took the ChatGPT Prompt Engineering for Developers course below, which was my first detailed introduction to the ChatGPT API. If you're interested in how coders can use LLMs like ChatGPT, this course is worth your time. Interspersing traditional code with detailed prompts that look more like comments than commands can help you understand these two very different styles of coding.

Read more: I took this free AI course for developers in one weekend and highly recommend it

Three courses I recommend you check out are:

- ChatGPT Prompt Engineering for Developers: Go beyond the chat box. Use API access to leverage LLMs into your own applications, and learn to build a custom chatbot.
- Evaluating and Debugging Generative AI Models Using Weights and Biases: Learn MLOps tools for managing, versioning, debugging, and experimenting in your ML workflow.
- Large Language Models with Semantic Search: Learn to use LLMs to enhance search and summarize results.

With AI such a hot growth area, I am always amazed at the vast quantity of high-value courseware available for free. Definitely bookmark DeepLearning.AI and keep checking back as it adds more courses.
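That "prompts interspersed with code" style looks roughly like the sketch below, using OpenAI's official Python SDK. The model name, the summarize helper, and the prompt wording are my own illustrative choices, not the course's, and the snippet assumes an OPENAI_API_KEY environment variable is set.

```python
# pip install openai -- assumes OPENAI_API_KEY is set in your environment
from openai import OpenAI

client = OpenAI()

# The instructions read like a comment, but they ARE the program: the model,
# not the Python interpreter, is what carries them out.
PROMPT_TEMPLATE = """
Summarize the text delimited by <text> tags in a single sentence,
then list any action items on separate lines.
<text>{text}</text>
"""

def summarize(text: str) -> str:
    # A hypothetical helper for illustration; the course builds similar ones
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works here
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(text=text)}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize("The meeting moved the launch to Friday. Sam owns the release notes."))
```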
View now at DeepLearning.AI

Google Generative AI Leader course

Designed for business leaders

Course selection: 5
Program pricing: Free to learn, $99 for a certificate

Google is offering a 7-8 hour program that teaches generative AI concepts to business leaders. This is a pretty comprehensive set of courses, all of which you can watch for free. They include:

- Gen AI: Beyond the chatbot: Foundational overview of generative AI
- Gen AI: Unlock foundational concepts: Core AI concepts explained
- Gen AI: Navigate the landscape: AI ecosystem and infrastructure
- Gen AI Apps: Transform your work: Business-focused AI applications
- Gen AI Agents: Transform your organization: Strategy and adoption of AI

There is a small catch here: If you want the actual certificate, you need to pony up $99 and take a 90-minute exam. But if you're calling yourself a business leader and want the recognition, I figure $99 is probably a fair price to pay for anointing yourself (with a Google seal of approval) as a generative AI business leader.

Also: Google offers AI certification for business leaders now - free trainings included

I took the foundational learning module, which was mostly text-based with interactive quizzes and involvement devices. It provided a good overview for someone just getting into the field, and I'm sure the remainder of the classes are equally interesting.
Udemy

Course selection: Thousands of courses on AI alone
Program pricing: Free trial, then $20/mo. Courses are also sold individually.

Udemy is a courseware aggregator that publishes courses produced by individual trainers. That makes course style and quality a little inconsistent, but the rating system does help the more outstanding trainers rise to the top. Udemy has a free trial, which is why it's on this list.

I spent some time in Steve Ballinger's Complete ChatGPT Course For Work 2023 (Ethically)! and found it quite helpful. Clocking in at a little over two hours, it helps you understand how to balance ChatGPT with your work processes, while keeping in mind the ethics and issues that arise from using AI at work. Udemy offers a $20/month all-you-can-eat plan, and also sells individual courses. I honestly can't see why anyone would buy the courses individually, since most of them cost more for one course than the entire library does on a subscription.

Also: I'm taking AI image courses for free on Udemy with this little trick - and you can too

Here are three courses you might want to check out:

- ChatGPT Masterclass: ChatGPT Guide for Beginners to Experts!: Gain a professional understanding of ChatGPT, and learn to produce high-quality content seamlessly and grow your earning potential.
- Discover, Validate & Launch New Business Ideas with ChatGPT: Learn how to generate startup ideas, evaluate their potential, and test them with customers in real life.
- Midjourney Mastery: Create Visually Stunning AI Art: Learn how to use Midjourney to create art.

One of the more interesting aspects of Udemy is that you may find courses on very niche applications of AI, which might not suit vendors offering a more limited selection of mainstream courses. If you have a unique application need, don't hesitate to spend some extra time searching for just the right course.
View now at Udemy

Grow With Google

Course selection: One AI course
Program pricing: Free

Google's Grow With Google program offers a fairly wide range of certificate programs, which are normally run through Coursera. Earning one of those certificates often requires paying a subscription fee. But we're specifically interested in one Grow With Google program, which is aimed at teachers and does not involve any fees.

The Generative AI for Educators class, developed in concert with MIT's Responsible AI for Social Empowerment and Education, is a 2-hour program designed to help teachers learn about generative AI and how to use it in the classroom.

Also: Google and MIT launch a free generative AI course for teachers

Generative AI is a big challenge in education because it can provide amazing support for students and teachers and, unfortunately, provide an easy way out for students to cheat on their assignments. So a course that can help teachers come up to speed on all the issues can be very powerful. The course provides a professional development certificate on completion, and this one is free.
View now at Google

Why should you trust me?

I've been working with AI for a very long time. I conducted one of the first-ever academic studies of AI ethics as a thesis project way back in the day. I created and launched an expert system development environment before the first link was connected on the World Wide Web. I did some of the first research of AI on RISC-based computing architectures (the chips in your phone) when RISC processors were the size of refrigerators. I also wrote and deployed the AI Editor, a generative AI tool that built news and content dynamically. That may not seem like much today, but I did it way back in 2010, when I had to create a generative AI engine from scratch. At that point, to work, it had to be distributed across five individual servers, each running one agent of a team of clustered AI agents.

Also: Six skills you need to become an AI prompt engineer

I also have a master's degree in education, focusing on learning and technology. My specialty is adult online learning, so this kind of stuff is right up my alley. When it comes to the courses and programs I'm spotlighting here, there's no way I could take all of them. But I have taken at least one course from each vendor, in order to test them out and report back to you. And, given my long background in the world of AI, this is a topic that has fascinated and enthralled me for most of my academic and professional career. With all that, I will say that the absolute high point was when I could get an AI to talk like a pirate.
Some companies are promoting micro-degrees. They seem expensive but fast. Are they any good?

Let's be clear: A micro-degree is not a degree. It's a set of courses with a marketing name attached. Degrees are granted by accredited academic institutions, accredited by regional accrediting bodies. I'm not saying you won't learn anything in those programs. But they're not degrees, and they may cost more than just-as-good courses that don't have a fancy marketing name attached.
So, do certificates have any value?

Yes, but how much value they have depends on your prospective employer's perspective. A certificate says you completed a course of study successfully. That might be something of value to you, as well.

Also: Want a job in AI? Check out these new AWS AI certifications

You can set a goal to learn a topic, and if you get a credential, you can be fairly confident you have achieved some learning. Accredited degrees, by contrast, are an assurance that you not only learned the material but did so according to some level of standard and rigor common to other accredited institutions. My advice: If you can get a certificate, and the price for getting it doesn't overly stretch your budget, go ahead and get it. It's still a resume point. But don't fork over bucks on the scale of a college tuition for some promise that you'll get qualified for a job faster and easier than, you know, going to college.
Other learning resources you'll probably love

You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV. Want more stories about AI? Sign up for Innovation, our weekly newsletter.
  • Google's most powerful AI tools aren't for us

At I/O 2025, nothing Google showed off felt new. Instead, we got a retread of the company's familiar obsession with its own AI prowess. Google spent the better part of two hours playing up products like AI Mode, generative AI apps like Jules and Flow, and a bewildering new $250-per-month AI Ultra plan.
During Tuesday's keynote, I thought a lot about my first visit to Mountain View in 2018. I/O 2018 was different. Between Digital Wellbeing for Android, an entirely redesigned Maps app and even Duplex, Google felt like a company that had its finger on the pulse of what people wanted from technology. In fact, later that same year, my co-worker Cherlynn Low penned a story titled How Google won software in 2018. "Companies don't often make features that are truly helpful, but in 2018, Google proved its software can change your life," she wrote at the time, referencing the Pixel 3's Call Screening and "magical" Night Sight features.

    What announcement from Google I/O 2025 comes even close to Night Sight, Google Photos, or, if you're being more generous to the company, Call Screening or Duplex? The only one that comes to my mind is the fact that Google is bringing live language translation to Google Meet. That's a feature that many will find useful, and Google spent all of approximately a minute talking about it.
I'm sure there are people who are excited to use Jules to vibe code or Veo 3 to generate video clips, but are either of those products truly transformational? Some "AI filmmakers" may argue otherwise, but when's the last time you thought your life would be dramatically better if you could only get a computer to make you a silly, 30-second clip?
    By contrast, consider the impact Night Sight has had. With one feature, Google revolutionized phones by showing that software, with the help of AI, could overcome the physical limits of minuscule camera hardware. More importantly, Night Sight was a response to a real problem people had in the real world. It spurred companies like Samsung and Apple to catch up, and now any smartphone worth buying has serious low light capabilities. Night Sight changed the industry, for the better.
The fact you have to pay $250 per month to use Veo 3 and Google's other frontier models as much as you want should tell you everything you need to know about who the company thinks these tools are for: they're not for you and me. I/O is primarily an event for developers, but the past several I/O conferences have felt like Google flexing its AI muscles rather than using those muscles to do something useful. In the past, the company had a knack for contextualizing what it was showing off in a way that would resonate with the broader public.
    By 2018, machine learning was already at the forefront of nearly everything Google was doing, and, more so than any other big tech company at the time, Google was on the bleeding edge of that revolution. And yet the difference between now and then was that in 2018 it felt like much of Google's AI might was directed in the service of tools and features that would actually be useful to people. Since then, for Google, AI has gone from a means to an end to an end in and of itself, and we're all the worse for it.

Even less dubious features like AI Mode offer questionable usefulness. Google debuted the chatbot earlier this year, and has since been making it available to more and more people. The problem with AI Mode is that it's designed to solve a problem of the company's own making. We all know the quality of Google Search results has declined dramatically over the last few years. Rather than fixing what's broken and making its system harder to game by SEO farms, Google tells us AI Mode represents the future of its search engine.
The thing is, a chatbot is not a replacement for a proper search engine. I frequently use ChatGPT Search to research things I'm interested in. However, as great as it is to get a detailed and articulate response to a question, ChatGPT can and will often get things wrong. We're all familiar with the errors AI Overviews produced when Google first started rolling out the feature. AI Overviews might not be in the news anymore, but they're still prone to producing embarrassing mistakes. Just take a look at the screenshot my co-worker Kris Holt sent to me recently.
Screenshot: Kris Holt for Engadget
I don't think it's an accident that I/O 2025 ended with a showcase of Android XR, a platform that sees the company revisiting a failed concept. Let's also not forget that Android, an operating system billions of people interact with every day, was relegated to a pre-taped livestream the week before. Right now, Google feels like a company eager to repeat the mistakes of Google Glass. Rather than trying to meet people where they need it, Google is creating products few are actually asking for. I don't know about you, but that doesn't make me excited for the company's future. This article originally appeared on Engadget.
  • I let Google's Jules AI agent into my code repo and it did four hours of work in an instant

    hemul75/Getty Images

    Okay. Deep breath. This is surreal. I just added an entire new feature to my software, including UI and functionality, just by typing four paragraphs of instructions. I have screenshots, and I'll try to make sense of it in this article. I can't tell if we're living in the future or we've just descended to a new plane of hell (or both).

    Let's take a step back. Google's Jules is the latest in a flood of new coding agents released just this week. I wrote about OpenAI Codex and Microsoft's GitHub Copilot Coding Agent at the beginning of the week, and ZDNET's Webb Wright wrote about Google's Jules.

    Also: I test a lot of AI coding tools, and this stunning new OpenAI release just saved me days of work

    All of these coding agents will perform coding operations on a GitHub repository. GitHub, for those who've been following along, is the giant Microsoft-owned software storage, management, and distribution hub for much of the world's most important software, especially open source code. The difference, at least as it pertains to this article, is that Google made Jules available to everyone, for free. That meant I could just hop in and take it for a spin. And now my head is spinning.

    Usage limits and my first two prompts

    The free access version of Jules allows only five requests per day. That might not seem like a lot, but in only two requests, I was able to add a new feature to my software. So, don't discount what you can get done if you think through your prompts before shooting off your silver bullets for the day.

    My first two prompts were tentative. It wasn't that I wasn't impressed; it was that I really wasn't giving Jules much to do. I'm still not comfortable with the idea of setting an AI loose on all my code at once, so I played it safe. My first prompt asked Jules to document the "hooks" that add-on developers could use to add features to my product. I didn't tell Jules much about what I wanted. It returned some markup that it recommended dropping into my code's readme file. It worked, but meh.

    Screenshot by David Gewirtz/ZDNET

    I did have the opportunity to publish that code to a new GitHub branch, but I skipped it. It was just a test, after all. My second prompt was to ask Jules to suggest five new hooks. I got back an answer that seemed reasonable. However, I realized that opening up those capabilities in a security product was just too risky for me to delegate to an AI. I skipped those changes, too. It was at this point that Jules wanted a coffee break. It stopped functioning for about 90 minutes.

    Screenshot by David Gewirtz/ZDNET

    That gave me time to think. What I really wanted to see was whether Jules could add some real functionality to my code and save me some time.

    Necessary background information

    My Private Site is a security plugin for WordPress. It's running on about 20,000 active sites. It puts a login dialog in front of the site's web pages. There are a bunch of options, but that's the key feature. I originally acquired the software a decade ago from a coder who called himself "jonradio," and have been maintaining and expanding it ever since.

    Also: Rust turns 10: How a broken elevator changed software forever

    The plugin provides access control to the front-end of a website, the pages that visitors see when they come to the site. Site owners control the plugin via a dashboard interface, with various admin functions available in the plugin's admin interface. I decided to try Jules out on a feature some users have requested: hiding the admin bar from logged-in users.
    The admin bar is the black bar WordPress puts on the top of a web page. In the case of the screenshot below, the black admin bar is visible.

    Screenshot by David Gewirtz/ZDNET

    I wanted Jules to add an option on the dashboard to hide the admin bar from logged-in users. The idea is that if a user logged in, the admin bar would be visible on the back end, but logged-in users browsing the front-end of the site wouldn't have to see the ugly bar. This is the original dashboard, before adding the new feature.

    Screenshot by David Gewirtz/ZDNET

    Some years ago, I completely rewrote the admin interface from the way it was when I acquired the plugin. Adding options to the interface is straightforward, but it's still time-consuming. Every option requires not only the UI element to be added, but also preference saving and preference recalling when the dashboard is displayed. That's in addition to any program logic that the preference controls. In practice, I've found that it takes me about 2-3 hours to add a preference UI element, along with the assorted housekeeping involved. It's not hard, but there are a lot of little fiddly bits that all need to be tweaked. That takes time. That should bring you up to speed enough to understand my next test of Jules. Here's a bit of foreshadowing: the first test failed miserably. The second test succeeded astonishingly.

    Instructing Jules

    Adding a hide admin bar feature is not something that would have been easy for the run-of-the-mill coding help we've been asking ChatGPT and the other chatbots to perform. As I mentioned, adding the new option to the dashboard requires programming in a variety of locations throughout the code, and also requires an understanding of the overall codebase. Here's what I told Jules.

    1. On the Site Privacy Tab of the admin interface, add a new checkbox. Label the section "Admin Bar" and label the checkbox itself "Hide Admin Bar". [Place this in the MAKE SITE PRIVATE block, located just under the Enable login privacy checkbox and before the Site Privacy Mode segment.]

    I instructed Jules where I wanted the AI to put the new option. On my first run through, I made a mistake and left out the details in square brackets. I didn't tell Jules exactly where I wanted it to place the new option. As it turns out, that omission caused a big fail. Once I added in the sentence in brackets above, the feature worked.

    2. Be sure to save the selection of that checkbox to the plugin's preferences variable when the Save Privacy Status button is checked.

    This makes sure Jules knows that there is a preference data structure, and to be sure to update it when the user makes a change. It's important to note that if I didn't have an understanding of the underlying code, I wouldn't have instructed Jules about this, and the code would not work. You can't "vibe code" something like this without knowing the underlying code.

    3. Show the appropriate checked or unchecked status when the Site Privacy tab is displayed.

    This tells the AI that I want the interface to be updated to match what the preference variable specifies.

    4. Based on the preference variable created in (2), add code to hide or show the WordPress admin bar. If Hide Admin Bar is checked, the Admin Bar should not be visible to logged-in WordPress front-end users. If the Hide Admin Bar is not checked, the Admin Bar should be visible to logged-in front-end users. Logged-in back-end users in the admin interface should always be able to see the admin bar.

    This describes the business logic that the new preference should control. It requires the AI to know how to hide or show the admin bar (a WordPress API call is used), and it requires the AI to know where to put the code in my plugin to enable or disable this feature. And with that, Jules was trained on what I wanted.
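    To make instruction 4 concrete, here is a minimal sketch of the kind of WordPress logic it describes, assuming the plugin stores its preferences as an array retrieved with get_option(). The option name and array key below are hypothetical stand-ins for illustration, not the plugin's actual code; show_admin_bar, is_admin(), is_user_logged_in() and get_option() are the real WordPress APIs involved.

        <?php
        // Hypothetical sketch: hide the admin bar for logged-in front-end
        // visitors when the "Hide Admin Bar" preference is checked.
        add_filter( 'show_admin_bar', function ( $show ) {
            // Back-end (admin) screens always keep the admin bar.
            if ( is_admin() ) {
                return $show;
            }
            // Option name and key are made up for illustration.
            $prefs = get_option( 'my_private_site_settings', array() );
            if ( is_user_logged_in() && ! empty( $prefs['hide_admin_bar'] ) ) {
                return false; // hide the bar on the front end
            }
            return $show;
        } );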
    Jules dives into my code

    I fed my prompt set into Jules and got back a plan of action. Pay close attention to that Approve Plan? button.

    Screenshot by David Gewirtz/ZDNET

    I didn't even get a chance to read through the plan before Jules decided to approve the plan on its own. It did this after every plan it presented. An AI that doesn't wait for permission raises the hairs on the back of my neck. Just saying.

    Screenshot by David Gewirtz/ZDNET

    I desperately want to make a Skynet/Landru/Colossus/P1/Hal kind of joke, because I'm freaked out. I mean, it's good. But I'm freaked out. Here's some of the code Jules wrote. The shaded green is the new stuff. I'm not thrilled with the color scheme, but I'm sure that will be tweakable over time.

    Also: The best free AI courses and certificates in 2025

    More relevant is the fact that Jules picked up on my variable naming conventions and the architecture of my code and dived right in. This is the new option, rendered in code.

    Screenshot by David Gewirtz/ZDNET

    By the time it was done, Jules had written in all the code changes it planned for originally, plus some test code. I don't use standardized tests. I would have told Jules not to do it the way it planned, but it never gave me time to approve or modify its original plan. Even so, it worked out.

    Screenshot by David Gewirtz/ZDNET

    I pushed the Publish branch button, which caused GitHub to create a new branch, separate from my main repository. Jules then published its changes to that branch.

    Screenshot by David Gewirtz/ZDNET

    This is how contributors to big projects can work on those projects without causing chaos to the main code line. Up to this point, I could look at the code, but I wasn't able to run it. But by pushing the code to a branch, Jules and GitHub made it possible for me to replicate the changes safely down to my computer to test them out. If I didn't like the changes, I could have just switched back to the main branch and no harm, no foul. But I did like the changes, so I moved on to the next step.

    Around the code in 8 clicks

    Once I brought the branch down to my development machine, I could test it out. Here's the new dashboard with the Hide Admin Menu feature.

    Screenshot by David Gewirtz/ZDNET

    I tried turning the feature on and off and making sure the settings stuck. They did. I also tried other features in the plugin to make sure nothing else had broken. I was pretty sure nothing would, because I reviewed all the changes before approving the branch. But still. Testing is a good thing to do. I then logged into the test website. As you can see, there's no admin bar showing.

    Screenshot by David Gewirtz/ZDNET

    At this point, the process was out of the AI's hands. It was simply time to deploy the changes, both back to GitHub and to the master WordPress repository. First, I used GitHub Desktop to merge the branch code back into the main branch on my development machine. I changed "Hide Admin Menu" to "Hide admin menu" in my code's main branch, because I like it better. I pushed that (the full main branch on my local machine) back to the GitHub cloud.

    Screenshot by David Gewirtz/ZDNET

    Then, because I just don't like random branches hanging around once they've been incorporated into the distribution version, I deleted the new branch on my computer.

    Screenshot by David Gewirtz/ZDNET

    I also deleted the new branch from the GitHub cloud service.
    Screenshot by David Gewirtz/ZDNET

    Finally, I packaged up the new code. I added a change to the readme to describe the new feature and to update the code's version number. Then, I pushed it using SVN (the source code control system used by the WordPress community) up to the WordPress plugin repository.

    Journey to the center of the code

    Jules is very definitely beta right now. It hung in a few places. Some screens didn't update. It decided to check out for 90 minutes. I had to wait while it went to and came back from its digital happy place. It's exhibiting all the sorts of things you'd expect from a newly released piece of code. I have no concerns about that. Google will clean it up.

    The fact that Jules (and presumably OpenAI Codex and GitHub Copilot Coding Agent) can handle an entire repository of code across a bunch of files is big. That's a much deeper level of understanding and integration than we saw even six months ago.

    Also: How to move your codebase into GitHub for analysis by ChatGPT Deep Research - and why you should

    The speed with which it can change an entire codebase is terrifying. The damage it can do is potentially extraordinary. It will gleefully go through and modify everything in your codebase, and if you specify something wrong and then push or merge, you will have an epic mess on your hands. There is a deep inequality between how quickly it can change code and how long it will take a human to review those changes. Working on this scale will require excellent unit tests. Even tools like mine, which don't lend themselves to full unit testing, will require some kind of automated validation to prevent robot-driven errors on a massive scale.
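    To give a sense of what that automated validation could look like: if the admin-bar decision is factored out into a pure function, even a plugin that resists full unit testing can pin its business logic down with a few assertions. This is a hedged sketch using PHPUnit; the helper function and test names are hypothetical, not taken from the actual plugin.

        <?php
        use PHPUnit\Framework\TestCase;

        // Hypothetical pure function capturing the business logic from the prompt.
        function should_hide_admin_bar( array $prefs, bool $logged_in, bool $is_admin_screen ): bool {
            if ( $is_admin_screen ) {
                return false; // back-end users always keep the bar
            }
            return $logged_in && ! empty( $prefs['hide_admin_bar'] );
        }

        final class AdminBarPreferenceTest extends TestCase {
            public function testBarVisibleWhenPreferenceUnchecked(): void {
                $this->assertFalse( should_hide_admin_bar( array(), true, false ) );
            }
            public function testBarHiddenForLoggedInFrontEndUsers(): void {
                $this->assertTrue( should_hide_admin_bar( array( 'hide_admin_bar' => 1 ), true, false ) );
            }
            public function testAdminScreensKeepTheBar(): void {
                $this->assertFalse( should_hide_admin_bar( array( 'hide_admin_bar' => 1 ), true, true ) );
            }
        }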
    Those who are afraid these tools will take jobs from programmers should be concerned, but not in the way most people think. It is absolutely, totally, one-hundo-percent necessary for experienced coders to review and guide these agents. When I left out one critical instruction, the agent gleefully bricked my site. Since I was the person who wrote the code initially, I knew what to fix. But it would have been brutally difficult for someone else to figure out what had been left out and how to fix it. That would have required coming up to speed on all the hidden nuances of the entire architecture of the code.

    Also: How to turn ChatGPT into your AI coding power tool - and double your output

    The jobs that are likely to be destroyed are those of junior developers. Jules is easily doing junior-developer-level work. With tools like Jules or Codex or Copilot, which cost a few hundred bucks a month at most, it's going to be hard for management to be willing to pay medium-to-high six figures for midlevel and junior programmers. Even outsourcing and offshoring isn't as cheap as using an AI agent to do maintenance coding. And, as I wrote about earlier in the week, if there are no mid-level jobs available, how will we train the experienced people we're going to need in the future? I am also concerned about how access limits will shake out. Productivity gains will drop like a rock if you need to do one more prompt and you have to wait a day to be allowed to do so.

    Screenshot by David Gewirtz/ZDNET

    As for me, in less than 10 minutes, I turned out a new feature that had been requested by readers. While I was writing another article, I fed the prompt to Jules. I went back to work on the article, and checked on Jules when it was finished. I checked out the code, brought it down to my computer, and pushed a release. It took me longer to upload the thing to the WordPress repository than to add the entire new feature.

    For that class of feature, I got half a day's work done in less than half an hour, from thinking about making it happen to published to my users. In the last two hours, 2,500 sites have downloaded and installed the new feature. That will surge to well over 10,000 by morning (it's about 8 p.m. now as I write this). Without Jules, those users probably would have been waiting months for this new feature, because I have a huge backlog of work, and it wasn't my top priority. But with Jules, it took barely any effort.

    Also: 7 productivity gadgets I can't live without (and why they make such a big difference)

    These tools are going to require programmers, managers, and investors to rethink the software development workflow. There will be glaring "you can't get there from here" gotchas. And there will be epic failures and coding errors. But I have no doubt that this is the next level of AI-based coding. Real, human intelligence is going to be necessary to figure out how to deal with it.

    Have you tried Google's Jules or any of the other new AI coding agents? Would you trust them to make direct changes to your codebase, or do you prefer to keep a tighter manual grip? What kinds of developer tasks do you think these tools should and shouldn't handle? Let us know in the comments below.

    Want more stories about AI? Sign up for Innovation, our weekly newsletter.

    You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.
    #let #google039s #jules #agent #into
    I let Google's Jules AI agent into my code repo and it did four hours of work in an instant
    hemul75/Getty ImagesOkay. Deep breath. This is surreal. I just added an entire new feature to my software, including UI and functionality, just by typing four paragraphs of instructions. I have screenshots, and I'll try to make sense of it in this article. I can't tell if we're living in the future or we've just descended to a new plane of hell.Let's take a step back. Google's Jules is the latest in a flood of new coding agents released just this week. I wrote about OpenAI Codex and Microsoft's GitHub Copilot Coding Agent at the beginning of the week, and ZDNET's Webb Wright wrote about Google's Jules. Also: I test a lot of AI coding tools, and this stunning new OpenAI release just saved me days of workAll of these coding agents will perform coding operations on a GitHub repository. GitHub, for those who've been following along, is the giant Microsoft-owned software storage, management, and distribution hub for much of the world's most important software, especially open source code. The difference, at least as it pertains to this article, is that Google made Jules available to everyone, for free. That meant I could just hop in and take it for a spin. And now my head is spinning. Usage limits and my first two prompts The free access version of Jules allows only five requests per day. That might not seem like a lot, but in only two requests, I was able to add a new feature to my software. So, don't discount what you can get done if you think through your prompts before shooting off your silver bullets for the day. My first two prompts were tentative. It wasn't that I wasn't impressed; it was that I really wasn't giving Jules much to do. I'm still not comfortable with the idea of setting an AI loose on all my code at once, so I played it safe. My first prompt asked Jules to document the "hooks" that add-on developers could use to add features to my product. I didn't tell Jules much about what I wanted. It returned some markup that it recommended dropping into my code's readme file. It worked, but meh. Screenshot by David Gewirtz/ZDNETI did have the opportunity to publish that code to a new GitHub branch, but I skipped it. It was just a test, after all. My second prompt was to ask Jules to suggest five new hooks. I got back an answer that seemed reasonable. However, I realized that opening up those capabilities in a security product was just too risky for me to delegate to an AI. I skipped those changes, too. It was at this point that Jules wanted a coffee break. It stopped functioning for about 90 minutes. Screenshot by David Gewirtz/ZDNETThat gave me time to think. What I really wanted to see was whether Jules could add some real functionality to my code and save me some time. Necessary background information My Private Site is a security plugin for WordPress. It's running on about 20,000 active sites. It puts a login dialog in front of the site's web pages. There are a bunch of options, but that's the key feature. I originally acquired the software a decade ago from a coder who called himself "jonradio," and have been maintaining and expanding it ever since. Also: Rust turns 10: How a broken elevator changed software foreverThe plugin provides access control to the front-end of a website, the pages that visitors see when they come to the site. Site owners control the plugin via a dashboard interface, with various admin functions available in the plugin's admin interface. I decided to try Jules out on a feature some users have requested, hiding the admin bar from logged-in users. 
The admin bar is the black bar WordPress puts on the top of a web page. In the case of the screenshot below, the black admin bar is visible. Screenshot by David Gewirtz/ZDNETI wanted Jules to add an option on the dashboard to hide the admin bar from logged-in users. The idea is that if a user logged in, the admin bar would be visible on the back end, but logged-in users browsing the front-end of the site wouldn't have to see the ugly bar. This is the original dashboard, before adding the new feature. Screenshot by David Gewirtz/ZDNETSome years ago, I completely rewrote the admin interface from the way it was when I acquired the plugin. Adding options to the interface is straightforward, but it's still time-consuming. Every option requires not only the UI element to be added, but also preference saving and preference recalling when the dashboard is displayed. That's in addition to any program logic that the preference controls. In practice, I've found that it takes me about 2-3 hours to add a preference UI element, along with the assorted housekeeping involved. It's not hard, but there are a lot of little fiddly bits that all need to be tweaked. That takes time. That should bring you up to speed enough to understand my next test of Jules. Here's a bit of foreshadowing: the first test failed miserably. The second test succeeded astonishingly. Instructing Jules Adding a hide admin bar feature is not something that would have been easy for the run-of-the-mill coding help we've been asking ChatGPT and the other chatbots to perform. As I mentioned, adding the new option to the dashboard requires programming in a variety of locations throughout the code, and also requires an understanding of the overall codebase. Here's what I told Jules. 1. On the Site Privacy Tab of the admin interface, add a new checkbox. Label the section "Admin Bar" and label the checkbox itself "Hide Admin Bar".I instructed Jules where I wanted the AI to put the new option. On my first run through, I made a mistake and left out the details in square brackets. I didn't tell Jules exactly where I wanted it to place the new option. As it turns out, that omission caused a big fail. Once I added in the sentence in brackets above, the feature worked. 2. Be sure to save the selection of that checkbox to the plugin's preferences variable when the Privacy Status button is checked. This makes sure Jules knows that there is a preference data structure, and to be sure to update it when the user makes a change. It's important to note that if I didn't have an understanding of the underlying code, I wouldn't have instructed Jules about this, and the code would not work. You can't "vibe code" something like this without knowing the underlying code. 3. Show the appropriate checked or unchecked status when the Site Privacy tab is displayed. This tells the AI that I want the interface to be updated to match what the preference variable specifies. 4. Based on the preference variable created in, add code to hide or show the WordPress admin bar. If Hide Admin Bar is checked, the Admin Bar should not be visible to logged-in WordPress front-end users. If the Hide Admin Bar is not checked, the Admin Bar should be visible to logged-in front-end users. Logged-in back-end users in the admin interface should always be able to see the admin bar. This describes the business logic that the new preference should control. 
It requires the AI to know how to hide or show the admin bar, and it requires the AI to know where to put the code in my plugin to enable or disable this feature. And with that, Jules was trained on what I wanted. Jules dives into my code I fed my prompt set into Jules and got back a plan of action. Pay close attention to that Approve Plan? button. Screenshot by David Gewirtz/ZDNETI didn't even get a chance to read through the plan before Jules decided to approve the plan on its own. It did this after every plan it presented. An AI that doesn't wait for permission raises the hairs on the back of my neck. Just saying. Screenshot by David Gewirtz/ZDNETI desperately want to make a Skynet/Landru/Colossus/P1/Hal kind of joke, because I'm freaked out. I mean, it's good. But I'm freaked out. Here's some of the code Jules wrote. The shaded green is the new stuff. I'm not thrilled with the color scheme, but I'm sure that will be tweakable over time. Also: The best free AI courses and certificates in 2025More relevant is the fact that Jules picked up on my variable naming conventions and the architecture of my code and dived right in. This is the new option, rendered in code. Screenshot by David Gewirtz/ZDNETBy the time it was done, Jules had written in all the code changes it planned for originally, plus some test code. I don't use standardized tests. I would have told Jules not to do it the way it planned, but it never gave me time to approve or modify its original plan. Even so, it worked out. Screenshot by David Gewirtz/ZDNETI pushed the Publish branch button, which caused GitHub to create a new branch, separate from my main repository. Jules then published its changes to that branch. Screenshot by David Gewirtz/ZDNETThis is how contributors to big projects can work on those projects without causing chaos to the main code line. Up to this point, I could look at the code, but I wasn't able to run it. But by pushing the code to a branch, Jules and GitHub made it possible for me to replicate the changes safely down to my computer to test them out. If I didn't like the changes, I could have just switched back to the main branch and no harm, no foul. But I did like the changes, so I moved on to the next step. Around the code in 8 clicks Once I brought the branch down to my development machine, I could test it out. Here's the new dashboard with the Hide Admin Menu feature. Screenshot by David Gewirtz/ZDNETI tried turning the feature on and off and making sure the settings stuck. They did. I also tried other features in the plugin to make sure nothing else had broken. I was pretty sure nothing would, because I reviewed all the changes before approving the branch. But still. Testing is a good thing to do. I then logged into the test website. As you can see, there's no admin bar showing. Screenshot by David Gewirtz/ZDNETAt this point, the process was out of the AI's hands. It was simply time to deploy the changes, both back to GitHub and to the master WordPress repository. First, I used GitHub Desktop to merge the branch code back into the main branch on my development machine. I changed "Hide Admin Menu" to "Hide admin menu" in my code's main branch, because I like it better. I pushed thatback to the GitHub cloud. Screenshot by David Gewirtz/ZDNETThen, because I just don't like random branches hanging around once they've been incorporated into the distribution version, I deleted the new branch on my computer. Screenshot by David Gewirtz/ZDNETI also deleted the new branch from the GitHub cloud service. 
Screenshot by David Gewirtz/ZDNETFinally, I packaged up the new code. I added a change to the readme to describe the new feature and to update the code's version number. Then, I pushed it using SVNup to the WordPress plugin repository. Journey to the center of the code Jules is very definitely beta right now. It hung in a few places. Some screens didn't update. It decided to check out for 90 minutes. I had to wait while it went to and came back from its digital happy place. It's evidencing all the sorts of things you'd expect from a newly-released piece of code. I have no concerns about that. Google will clean it up. The fact that Julescan handle an entire repository of code across a bunch of files is big. That's a much deeper level of understanding and integration than we saw, even six months ago. Also: How to move your codebase into GitHub for analysis by ChatGPT Deep Research - and why you shouldThe speed with which it can change an entire codebase is terrifying. The damage it can do is potentially extraordinary. It will gleefully go through and modify everything in your codebase, and if you specify something wrong and then push or merge, you will have an epic mess on your hands. There is a deep inequality between how quickly it can change code and how long it will take a human to review those changes. Working on this scale will require excellent unit tests. Even tools like mine, which don't lend themselves to full unit testing, will require some kind of automated validation to prevent robot-driven errors on a massive scale. Those who are afraid these tools will take jobs from programmers should be concerned, but not in the way most people think. It is absolutely, totally, one-hundo-percent necessary for experienced coders to review and guide these agents. When I left out one critical instruction, the agent gleefully bricked my site. Since I was the person who wrote the code initially, I knew what to fix. But it would have been brutally difficult for someone else to figure out what had been left out and how to fix it. That would have required coming up to speed on all the hidden nuances of the entire architecture of the code. Also: How to turn ChatGPT into your AI coding power tool - and double your outputThe jobs that are likely to be destroyed are those of junior developers. Jules is easily doing junior developer level work. With tools like Jules or Codex or Copilot, that cost of a few hundred bucks a month at most, it's going to be hard for management to be willing to pay medium-to-high six figures for midlevel and junior programmers. Even outsourcing and offshoring isn't as cheap as using an AI agent to do maintenance coding. And, as I wrote about earlier in the week, if there are no mid-level jobs available, how will we train the experienced people we're going to need in the future? I am also concerned about how access limits will shake out. Productivity gains will drop like a rock if you need to do one more prompt and you have to wait a day to be allowed to do so. Screenshot by David Gewirtz/ZDNETAs for me, in less than 10 minutes, I turned out a new feature that had been requested by readers. While I was writing another article, I fed the prompt to Jules. I went back to work on the article, and checked on Jules when it was finished. I checked out the code, brought it down to my computer, and pushed a release. It took me longer to upload the thing to the WordPress repository than to add the entire new feature. 
For that class of feature, I got a half-a-day's work done in less than half an hour, from thinking about making it happen to published to my users. In the last two hours, 2,500 sites have downloaded and installed the new feature. That will surge to well over 10,000 by morning. Without Jules, those users probably would have been waiting months for this new feature, because I have a huge backlog of work, and it wasn't my top priority. But with Jules, it took barely any effort. Also: 7 productivity gadgets I can't live withoutThese tools are going to require programmers, managers, and investors to rethink the software development workflow. There will be glaring "you can't get there from here" gotchas. And there will be epic failures and coding errors. But I have no doubt that this is the next level of AI-based coding. Real, human intelligence is going to be necessary to figure out how to deal with it. Have you tried Google's Jules or any of the other new AI coding agents? Would you trust them to make direct changes to your codebase, or do you prefer to keep a tighter manual grip? What kinds of developer tasks do you think these tools should and shouldn't handle? Let us know in the comments below. Want more stories about AI? Sign up for Innovation, our weekly newsletter.You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.Featured #let #google039s #jules #agent #into
    WWW.ZDNET.COM
    I let Google's Jules AI agent into my code repo and it did four hours of work in an instant
    hemul75/Getty ImagesOkay. Deep breath. This is surreal. I just added an entire new feature to my software, including UI and functionality, just by typing four paragraphs of instructions. I have screenshots, and I'll try to make sense of it in this article. I can't tell if we're living in the future or we've just descended to a new plane of hell (or both).Let's take a step back. Google's Jules is the latest in a flood of new coding agents released just this week. I wrote about OpenAI Codex and Microsoft's GitHub Copilot Coding Agent at the beginning of the week, and ZDNET's Webb Wright wrote about Google's Jules. Also: I test a lot of AI coding tools, and this stunning new OpenAI release just saved me days of workAll of these coding agents will perform coding operations on a GitHub repository. GitHub, for those who've been following along, is the giant Microsoft-owned software storage, management, and distribution hub for much of the world's most important software, especially open source code. The difference, at least as it pertains to this article, is that Google made Jules available to everyone, for free. That meant I could just hop in and take it for a spin. And now my head is spinning. Usage limits and my first two prompts The free access version of Jules allows only five requests per day. That might not seem like a lot, but in only two requests, I was able to add a new feature to my software. So, don't discount what you can get done if you think through your prompts before shooting off your silver bullets for the day. My first two prompts were tentative. It wasn't that I wasn't impressed; it was that I really wasn't giving Jules much to do. I'm still not comfortable with the idea of setting an AI loose on all my code at once, so I played it safe. My first prompt asked Jules to document the "hooks" that add-on developers could use to add features to my product. I didn't tell Jules much about what I wanted. It returned some markup that it recommended dropping into my code's readme file. It worked, but meh. Screenshot by David Gewirtz/ZDNETI did have the opportunity to publish that code to a new GitHub branch, but I skipped it. It was just a test, after all. My second prompt was to ask Jules to suggest five new hooks. I got back an answer that seemed reasonable. However, I realized that opening up those capabilities in a security product was just too risky for me to delegate to an AI. I skipped those changes, too. It was at this point that Jules wanted a coffee break. It stopped functioning for about 90 minutes. Screenshot by David Gewirtz/ZDNETThat gave me time to think. What I really wanted to see was whether Jules could add some real functionality to my code and save me some time. Necessary background information My Private Site is a security plugin for WordPress. It's running on about 20,000 active sites. It puts a login dialog in front of the site's web pages. There are a bunch of options, but that's the key feature. I originally acquired the software a decade ago from a coder who called himself "jonradio," and have been maintaining and expanding it ever since. Also: Rust turns 10: How a broken elevator changed software foreverThe plugin provides access control to the front-end of a website, the pages that visitors see when they come to the site. Site owners control the plugin via a dashboard interface, with various admin functions available in the plugin's admin interface. 
I decided to try Jules out on a feature some users have requested, hiding the admin bar from logged-in users. The admin bar is the black bar WordPress puts on the top of a web page. In the case of the screenshot below, the black admin bar is visible. Screenshot by David Gewirtz/ZDNETI wanted Jules to add an option on the dashboard to hide the admin bar from logged-in users. The idea is that if a user logged in, the admin bar would be visible on the back end, but logged-in users browsing the front-end of the site wouldn't have to see the ugly bar. This is the original dashboard, before adding the new feature. Screenshot by David Gewirtz/ZDNETSome years ago, I completely rewrote the admin interface from the way it was when I acquired the plugin. Adding options to the interface is straightforward, but it's still time-consuming. Every option requires not only the UI element to be added, but also preference saving and preference recalling when the dashboard is displayed. That's in addition to any program logic that the preference controls. In practice, I've found that it takes me about 2-3 hours to add a preference UI element, along with the assorted housekeeping involved. It's not hard, but there are a lot of little fiddly bits that all need to be tweaked. That takes time. That should bring you up to speed enough to understand my next test of Jules. Here's a bit of foreshadowing: the first test failed miserably. The second test succeeded astonishingly. Instructing Jules Adding a hide admin bar feature is not something that would have been easy for the run-of-the-mill coding help we've been asking ChatGPT and the other chatbots to perform. As I mentioned, adding the new option to the dashboard requires programming in a variety of locations throughout the code, and also requires an understanding of the overall codebase. Here's what I told Jules. 1. On the Site Privacy Tab of the admin interface, add a new checkbox. Label the section "Admin Bar" and label the checkbox itself "Hide Admin Bar". [Place this in the MAKE SITE PRIVATE block, located just under the Enable login privacy checkbox and before the Site Privacy Mode segment.] I instructed Jules where I wanted the AI to put the new option. On my first run through, I made a mistake and left out the details in square brackets. I didn't tell Jules exactly where I wanted it to place the new option. As it turns out, that omission caused a big fail. Once I added in the sentence in brackets above, the feature worked. 2. Be sure to save the selection of that checkbox to the plugin's preferences variable when the Save Privacy Status button is checked. This makes sure Jules knows that there is a preference data structure, and to be sure to update it when the user makes a change. It's important to note that if I didn't have an understanding of the underlying code, I wouldn't have instructed Jules about this, and the code would not work. You can't "vibe code" something like this without knowing the underlying code. 3. Show the appropriate checked or unchecked status when the Site Privacy tab is displayed. This tells the AI that I want the interface to be updated to match what the preference variable specifies. 4. Based on the preference variable created in (2), add code to hide or show the WordPress admin bar. If Hide Admin Bar is checked, the Admin Bar should not be visible to logged-in WordPress front-end users. If the Hide Admin Bar is not checked, the Admin Bar should be visible to logged-in front-end users. 
Logged-in back-end users in the admin interface should always be able to see the admin bar. This describes the business logic that the new preference should control. It requires the AI to know how to hide or show the admin bar (a WordPress API call is used), and it requires the AI to know where to put the code in my plugin to enable or disable this feature. And with that, Jules was trained on what I wanted. Jules dives into my code I fed my prompt set into Jules and got back a plan of action. Pay close attention to that Approve Plan? button. Screenshot by David Gewirtz/ZDNETI didn't even get a chance to read through the plan before Jules decided to approve the plan on its own. It did this after every plan it presented. An AI that doesn't wait for permission raises the hairs on the back of my neck. Just saying. Screenshot by David Gewirtz/ZDNETI desperately want to make a Skynet/Landru/Colossus/P1/Hal kind of joke, because I'm freaked out. I mean, it's good. But I'm freaked out. Here's some of the code Jules wrote. The shaded green is the new stuff. I'm not thrilled with the color scheme, but I'm sure that will be tweakable over time. Also: The best free AI courses and certificates in 2025More relevant is the fact that Jules picked up on my variable naming conventions and the architecture of my code and dived right in. This is the new option, rendered in code. Screenshot by David Gewirtz/ZDNETBy the time it was done, Jules had written in all the code changes it planned for originally, plus some test code. I don't use standardized tests. I would have told Jules not to do it the way it planned, but it never gave me time to approve or modify its original plan. Even so, it worked out. Screenshot by David Gewirtz/ZDNETI pushed the Publish branch button, which caused GitHub to create a new branch, separate from my main repository. Jules then published its changes to that branch. Screenshot by David Gewirtz/ZDNETThis is how contributors to big projects can work on those projects without causing chaos to the main code line. Up to this point, I could look at the code, but I wasn't able to run it. But by pushing the code to a branch, Jules and GitHub made it possible for me to replicate the changes safely down to my computer to test them out. If I didn't like the changes, I could have just switched back to the main branch and no harm, no foul. But I did like the changes, so I moved on to the next step. Around the code in 8 clicks Once I brought the branch down to my development machine, I could test it out. Here's the new dashboard with the Hide Admin Menu feature. Screenshot by David Gewirtz/ZDNETI tried turning the feature on and off and making sure the settings stuck. They did. I also tried other features in the plugin to make sure nothing else had broken. I was pretty sure nothing would, because I reviewed all the changes before approving the branch. But still. Testing is a good thing to do. I then logged into the test website. As you can see, there's no admin bar showing. Screenshot by David Gewirtz/ZDNETAt this point, the process was out of the AI's hands. It was simply time to deploy the changes, both back to GitHub and to the master WordPress repository. First, I used GitHub Desktop to merge the branch code back into the main branch on my development machine. I changed "Hide Admin Menu" to "Hide admin menu" in my code's main branch, because I like it better. I pushed that (the full main branch on my local machine) back to the GitHub cloud. 
Screenshot by David Gewirtz/ZDNETThen, because I just don't like random branches hanging around once they've been incorporated into the distribution version, I deleted the new branch on my computer. Screenshot by David Gewirtz/ZDNETI also deleted the new branch from the GitHub cloud service. Screenshot by David Gewirtz/ZDNETFinally, I packaged up the new code. I added a change to the readme to describe the new feature and to update the code's version number. Then, I pushed it using SVN (the source code control system used by the WordPress community) up to the WordPress plugin repository. Journey to the center of the code Jules is very definitely beta right now. It hung in a few places. Some screens didn't update. It decided to check out for 90 minutes. I had to wait while it went to and came back from its digital happy place. It's evidencing all the sorts of things you'd expect from a newly-released piece of code. I have no concerns about that. Google will clean it up. The fact that Jules (and presumably OpenAI Codex and GitHub Copilot Coding Agent) can handle an entire repository of code across a bunch of files is big. That's a much deeper level of understanding and integration than we saw, even six months ago. Also: How to move your codebase into GitHub for analysis by ChatGPT Deep Research - and why you shouldThe speed with which it can change an entire codebase is terrifying. The damage it can do is potentially extraordinary. It will gleefully go through and modify everything in your codebase, and if you specify something wrong and then push or merge, you will have an epic mess on your hands. There is a deep inequality between how quickly it can change code and how long it will take a human to review those changes. Working on this scale will require excellent unit tests. Even tools like mine, which don't lend themselves to full unit testing, will require some kind of automated validation to prevent robot-driven errors on a massive scale. Those who are afraid these tools will take jobs from programmers should be concerned, but not in the way most people think. It is absolutely, totally, one-hundo-percent necessary for experienced coders to review and guide these agents. When I left out one critical instruction, the agent gleefully bricked my site. Since I was the person who wrote the code initially, I knew what to fix. But it would have been brutally difficult for someone else to figure out what had been left out and how to fix it. That would have required coming up to speed on all the hidden nuances of the entire architecture of the code. Also: How to turn ChatGPT into your AI coding power tool - and double your outputThe jobs that are likely to be destroyed are those of junior developers. Jules is easily doing junior developer level work. With tools like Jules or Codex or Copilot, that cost of a few hundred bucks a month at most, it's going to be hard for management to be willing to pay medium-to-high six figures for midlevel and junior programmers. Even outsourcing and offshoring isn't as cheap as using an AI agent to do maintenance coding. And, as I wrote about earlier in the week, if there are no mid-level jobs available, how will we train the experienced people we're going to need in the future? I am also concerned about how access limits will shake out. Productivity gains will drop like a rock if you need to do one more prompt and you have to wait a day to be allowed to do so. 
Screenshot by David Gewirtz/ZDNETAs for me, in less than 10 minutes, I turned out a new feature that had been requested by readers. While I was writing another article, I fed the prompt to Jules. I went back to work on the article, and checked on Jules when it was finished. I checked out the code, brought it down to my computer, and pushed a release. It took me longer to upload the thing to the WordPress repository than to add the entire new feature. For that class of feature, I got a half-a-day's work done in less than half an hour, from thinking about making it happen to published to my users. In the last two hours, 2,500 sites have downloaded and installed the new feature. That will surge to well over 10,000 by morning (it's about 8 p.m. now as I write this). Without Jules, those users probably would have been waiting months for this new feature, because I have a huge backlog of work, and it wasn't my top priority. But with Jules, it took barely any effort. Also: 7 productivity gadgets I can't live without (and why they make such a big difference)These tools are going to require programmers, managers, and investors to rethink the software development workflow. There will be glaring "you can't get there from here" gotchas. And there will be epic failures and coding errors. But I have no doubt that this is the next level of AI-based coding. Real, human intelligence is going to be necessary to figure out how to deal with it. Have you tried Google's Jules or any of the other new AI coding agents? Would you trust them to make direct changes to your codebase, or do you prefer to keep a tighter manual grip? What kinds of developer tasks do you think these tools should and shouldn't handle? Let us know in the comments below. Want more stories about AI? Sign up for Innovation, our weekly newsletter.You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.Featured
  • Google I/O: LLM capabilities power agentic AI search

    BillionPhotos.com - stock.adobe.


    Google I/O: LLM capabilities power agentic AI search
    As Google strives to make AI universal, it is starting to integrate agentic AI into Google Search to fast-track purchasing on websites

    By

    Cliff Saran,
    Managing Editor

    Published: 21 May 2025 17:00

    Google has taken steps to advance artificial intelligence (AI) language models closer to what it calls “world models”, as it tries to make them more useful and universal.
    The company used its annual developer event, Google I/O, to showcase the Gemini 2.5 large language model (LLM), new application programming interfaces (APIs) and programming tools, and agentic AI functionality built into Google’s internet search engine. 
    Gemini is Google’s primary AI engine, but it offers several others including Gemma 3n, a small language model for mobile devices.
    Demis Hassabis, CEO of Google DeepMind, said: “Our ultimate vision is to transform the Gemini app into a universal AI assistant that will perform everyday tasks for us, take care of our mundane admin and surface delightful new recommendations – making us more productive and enriching our lives.”
    Hassabis said the company was beginning to develop new AI capabilities, following on from work on a research prototype called Project Astra, which explored concepts such as video understanding, screen sharing and memory. “Over the past year, we’ve been integrating capabilities like these into Gemini Live for more people to experience today.”
    Google has been working to make its main AI model, Gemini, a world model. With Gemini 2.5 Pro, Hassabis said the model can make plans and imagine new experiences by understanding and simulating aspects of the world.
    Hassabis said the progress the company has made is based on training AI agents to master complex games such as Go and StarCraft, with its Genie 2 software able to generate 3D-simulated interactive worlds.
    According to Hassabis, Gemini is making use of this work in how it handles world knowledge and reasoning to represent and simulate natural environments. Other examples include Veo, Google’s AI-based video content generator, which Hassabis said has a deep understanding of “intuitive physics”.
    As it strives to make its AI more useful, the company has released a Gemini 2.5-powered feature called AI Mode on its North American internet search site, to provide more in-depth querying than is possible with the AI Overview functionality currently available.
    An agentic AI feature called Project Mariner is also now part of AI Mode, which Google said can help people searching the internet get tasks done quicker. As an example, Google said a query to find affordable tickets would use AI Mode to look across multiple websites, analysing hundreds of potential ticket options with real-time pricing and inventory, and handle the work of filling in forms.
    “AI Mode will present ticket options that meet your exact criteria, and you can complete the purchase on whichever site you prefer, saving you time while keeping you in control,” Google said.
    Another agentic AI feature uses AI Mode to fast-track browsing and purchases on websites, with the entire payment process automated using Google Pay.
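    Google has not published how AI Mode orchestrates these multi-site lookups, though elsewhere it describes the underlying pattern as a "query fan-out". Below is a minimal sketch of that general pattern, under stated assumptions: decompose() and search() are hypothetical stand-ins for an LLM planner and a search backend, not Google's implementation.

```python
# Sketch of a "query fan-out" pattern: split a complex query into
# sub-queries, run them concurrently, then merge the results.
# decompose() and search() are hypothetical stand-ins.
import asyncio

def decompose(query: str) -> list[str]:
    # A real system would ask an LLM to plan the sub-queries.
    return [f"{query} price", f"{query} reviews", f"{query} availability"]

async def search(sub_query: str) -> list[str]:
    await asyncio.sleep(0.1)  # stand-in for a network call
    return [f"result for '{sub_query}'"]

async def fan_out(query: str) -> list[str]:
    sub_queries = decompose(query)
    result_lists = await asyncio.gather(*(search(q) for q in sub_queries))
    return [r for results in result_lists for r in results]  # merge

if __name__ == "__main__":
    print(asyncio.run(fan_out("affordable concert tickets")))
```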
    To support software developers, Google has integrated Gemini 2.5 Pro into the native code editor of Google AI Studio, which it said would help programmers prototype faster.
    It has also released a beta version of Jules, an asynchronous code agent, which works directly with a software developer’s GitHub repositories.
    Google said users can ask Jules to take on tasks such as version upgrades, writing tests, updating features and bug fixes.
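    Google has not published a programmatic API for Jules, which is driven from its web interface against a GitHub repository. Purely as an illustration of the shape of such work, here is a hypothetical sketch of queuing repository tasks for an asynchronous agent; every name in it is invented.

```python
# Illustrative only: a tiny model of queuing tasks for an
# asynchronous coding agent. This is NOT Jules's API, which
# Google has not published.
from dataclasses import dataclass, field
from enum import Enum, auto

class TaskKind(Enum):
    VERSION_UPGRADE = auto()
    WRITE_TESTS = auto()
    UPDATE_FEATURE = auto()
    BUG_FIX = auto()

@dataclass
class AgentTask:
    kind: TaskKind
    repo: str             # e.g. "github.com/example/my-plugin" (hypothetical)
    prompt: str           # natural-language instruction for the agent
    branch: str = "main"  # branch the agent should start from

@dataclass
class TaskQueue:
    pending: list[AgentTask] = field(default_factory=list)

    def submit(self, task: AgentTask) -> None:
        # The agent works these off in the background, asynchronously.
        self.pending.append(task)

queue = TaskQueue()
queue.submit(AgentTask(
    kind=TaskKind.WRITE_TESTS,
    repo="github.com/example/my-plugin",
    prompt="Add unit tests covering the settings validation code.",
))
```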

    Read more about Google AI models

    Gemini vs. ChatGPT – what’s the difference: ChatGPT took the early lead among AI-generated chatbots before Google answered with Gemini. While ChatGPT and Gemini perform similar tasks, there are differences.
    Google Gemini 2.5 Pro explained – Everything you need to know: Google’s latest multimodal model – Gemini 2.5 Pro – entered the AI race with enhanced reasoning and improved performance across coding, math and science benchmarks.

  • All the New Google I/O Features You Can Try Right Now

    Google I/O 2025 was chock full of announcements. The problem is, Google isn't always clear about which features are new, which have already been released, and which are coming out in the future. While there are plenty of features to look out for on the horizon, and a number that you've been able to use for some time, there are brand-new features Google rolled out immediately after announcing them. Here are all the Google I/O features you can check out right now—though some do require you to pay.

    Imagen 4

    Credit: Google

    Google's latest AI image generation model, Imagen 4, is available today. Google was sparse on specifics for this new model, but says that Imagen is faster, and now capable of images up to 2K resolution with additional aspect ratios. The change the company focused most on is typography: Google says Imagen 4 can generate text without the usual errors you associate with AI image generators. On top of that, the model can incorporate different art styles and design choices, depending on the context of the prompt. You can see that in the image above, which uses a pixelated design for the text to match the 8-bit comic strip look. You can try the latest Imagen model in the Gemini app, Whisk, Vertex AI, and through Workspace apps like Slides, Vids, and Docs.

    AI Mode

    Credit: Lifehacker

    AI Mode essentially turns Search into a Gemini chat: It allows you to ask more complicated and multi-step questions. Google then uses a "query fan-out" technique to scan the web for relevant links and generate a complete answer from those results. I haven't dived too deep into this feature, but it does largely work as advertised—I'm just not sure it's all that much more useful than searching through links myself. Google has been testing AI Mode since March, but now it's available to everyone in the U.S. If you want to use it, you should see the new AI Mode option on the right side of the search bar on Google's homepage.

    "Try it on"

    Credit: Google

    Shopping online is so much more convenient than going in person, in all ways but one: You can't try on any of the clothes ahead of time. Once they arrive, you try them on, and if they don't fit, or you don't like the look, back to the store they go. Google wants to eliminate this problem (or, at least, greatly cut down on it). Its new "try it on" feature scans an image you provide of yourself to get an understanding of your body. Then, when you're browsing for new clothes online, you can choose to "try it on," and Google's AI will generate an image of you wearing the article of clothing. It's an interesting concept, but also a bit creepy. I personally do not want Google analyzing images of myself so that it can more accurately map different types of clothes on me. I'd rather run the risk of making a return. But if you want to give it a go, you can try the experimental feature in Google Labs today.

    Jules

    Jules is Google's "asynchronous, agentic coding assistant." According to Google, the assistant clones your codebase into a secure Google Cloud virtual machine, so that it can execute tasks like writing tests, building features, generating audio changelogs, fixing bugs, and bumping dependency versions. The assistant works in the background and doesn't use your code for training, which is a bit refreshing from a company like Google. I'm not a coder, so I can't say for sure whether Jules seems useful. But if you are a coder, you can test it for yourself. As of today, Jules is available as a free public beta for anyone who wants to try it out—though Google says usage limits apply, and that it will charge for different Jules plans once the "platform matures."

    Speech translation in Google Meet

    Credit: Google

    If you're a Google Workspace subscriber, this next feature is pretty great. As shown off during the I/O keynote, Google Meet now has live speech translation. Here's how it works: Let's say you're talking to someone on a Google Meet call who speaks Spanish, but you only speak English. You'll hear the other caller speak in Spanish for a moment or two, before an AI voice dubs over them with the translation in English. They'll receive the opposite on their end after you start speaking. Google is working on adding more languages in the coming weeks.
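    Google hasn't detailed Meet's translation pipeline, but the behavior described above suggests the classic three-stage shape: speech recognition, machine translation, then voice-matched speech synthesis. Here is a minimal sketch under that assumption; all three stage functions are hypothetical stand-ins, not Google's implementation.

```python
# Sketch of a speech-to-speech dubbing pipeline: transcribe,
# translate, then re-synthesize in a voice matched to the original
# speaker. Every stage here is a hypothetical stand-in.

def transcribe(audio_chunk: bytes, source_lang: str) -> str:
    return "hola, ¿cómo estás?"  # stand-in for a speech-to-text model

def translate(text: str, source_lang: str, target_lang: str) -> str:
    return "hi, how are you?"  # stand-in for a translation model

def synthesize(text: str, voice_profile: bytes) -> bytes:
    return text.encode()  # stand-in for voice-matched text-to-speech

def dub(audio_chunk: bytes, voice_profile: bytes,
        source_lang: str = "es", target_lang: str = "en") -> bytes:
    text = transcribe(audio_chunk, source_lang)
    translated = translate(text, source_lang, target_lang)
    # The moment of untranslated speech users hear is these stages
    # running in sequence before the dub can begin.
    return synthesize(translated, voice_profile)

if __name__ == "__main__":
    print(dub(b"...", b"speaker-profile"))
```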

    Google AI Ultra subscription

    Credit: Google

    There's a new subscription in town, though it's not for the faint of heart. Google announced a new "AI Ultra" subscription at I/O yesterday, which costs a whopping $250 per month. That extraordinary price tag comes with some major AI features: You get access to the highest limits for all of Google's AI models, including Gemini 2.5 Deep Think, Veo 3, and Project Mariner. It also comes with 30TB of cloud storage, and, amusingly, a YouTube Premium subscription. You really have to be a big believer in AI to drop upwards of $3,000 a year on this subscription. If you have a budding curiosity for AI, perhaps Google's "AI Pro" plan is more your speed—this is the new name for Google's AI Premium subscription, and comes with the same perks, plus now access to Flow (which I'll cover below).

    Veo

    Veo 3 is Google's latest AI video model. Unlike Imagen 4, however, it's only available to AI Ultra subscribers. If you're not comfortable with spending $250 a month on Google's services, you'll have to stick with Veo 2. Google says Veo 3 is better at real-world physics than Veo 2 and can handle realistic lip-syncing. You can see that in the clip above, which shows an "old sailor" reciting a poem. His lips do indeed match the speech, and the video is crisp with elements of realism. I personally don't think it looks "real," and it still has plenty of tells that it's an AI video, but there's no doubt we are entering some dangerous waters with AI video. AI Pro subscribers with access to Veo 2 do get some new video model capabilities, however: You now have camera controls to dictate how you want shots to look; options for adjusting the aspect ratio of the clip; tools to add or remove objects from a scene; and controls to "outpaint," or add on to the scene of a clip.

    Flow

    Google didn't just upgrade its AI video model: It also released an AI video editor, called Flow. Flow lets you generate videos using Veo 2 and Veo 3, but it also lets you cut together those clips on a timeline and control the camera movements of your clips. You can use Imagen to generate an element you want to add to a scene, then ask Veo to generate a clip with that element in it. I'm sure AI film enthusiasts are going to love this, but I remain skeptical. I could see this being a useful tool for storyboarding ideas, but for creating real content? I know I don't want to watch full shows or movies generated by AI. Maybe the odd Instagram video gets a chuckle out of me, but I don't think Reels are Google's end goal here. Flow is available for both AI Pro and AI Ultra subscribers. If you have AI Pro, you can access Veo 2, but AI Ultra subscribers can choose between Veo 2 and Veo 3.

    Gemini in Chrome

    Credit: Google

    AI Pro and AI Ultra subscribers now have access to Gemini in Google Chrome, which appears in the toolbar of your browser window. You can ask the assistant to summarize a web page, as well as inquire about elements of that web page. There are plans for agentic features in the future, so Gemini could check out websites for you, but, for now, you're really limited to two functions.
  • Google’s Jules aims to out-code Codex in battle for the AI developer stack

    Google released Jules, its coding agent, into beta as autonomous coding agents are quickly gaining market share.
  • Everything you need to know from Google I/O 2025

    From the opening AI-influenced intro video set to "You Get What You Give" by New Radicals to CEO Sundar Pichai's sign-off, Google I/O 2025 was packed with news and updates for the tech giant and its products. And when we say packed, we mean it, as this year's Google I/O clocked in at nearly two hours long. During that time, Google shared some big wins for its AI products, such as Gemini topping various categories on the LMArena leaderboard. Another example that Google seemed really proud of was the fact that Gemini completed Pokémon Blue a few weeks ago.But, we know what you're really here for: Product updates and new product announcements.


    Aside from a few braggadocious moments, Google spent most of those 117 minutes talking about what's coming out next. Google I/O mixes consumer-facing product announcements with more developer-oriented ones, from the latest Gmail updates to Google's powerful new chip, Ironwood, coming to Google Cloud customers later this year.

    We're going to break down the product updates and announcements you need to know from the full two-hour event, so you can walk away with all the takeaways without spending the runtime of a major motion picture to learn them. Before we dive in, though, here's the most shocking news out of Google I/O: the subscription pricing for Google's AI Ultra plan. While Google provides a base subscription at $19.99 per month, the Ultra plan comes in at a whopping $249.99 per month for its entire suite of products with the highest rate limits available.

    Google Search AI Mode

    Google tucked away what will easily be its most visible feature way too far back into the event, but we'll surface it to the top. At Google I/O, Google announced that the new AI Mode feature for Google Search is launching today to everyone in the United States. Basically, it will let users use Google's search feature with longer, more complex queries. Using a "query fan-out technique," AI Mode will be able to break a search into multiple parts in order to process each part of the query, then pull all the information together to present to the user. Google says AI Mode "checks its work" too, but it's unclear at this time exactly what that means.

    Google announces AI Mode in Google Search
    Credit: Google

    AI Mode is available now. Later in the summer, Google will launch Personal Context in AI Mode, which will make suggestions based on a user's past searches and other contextual information about the user from other Google products like Gmail. In addition, other new features will soon come to AI Mode, such as Deep Search, which can dive deeper into queries by searching through multiple websites, and data visualization features, which can take the search results and present them in a visual graph when applicable. According to Google, its AI Overviews in Search are viewed by 1.5 billion users every month, so AI Mode clearly has the largest potential user base out of all of Google's announcements today.

    AI Shopping

    Out of all the announcements at the event, these AI shopping features seemed to spark the biggest reaction from Google I/O live attendees. Connected to AI Mode, Google showed off its Shopping Graph, which includes more than 50 billion products globally. Users can just describe the type of product they are looking for – say, a specific type of couch – and Google will present options that match that description.

    Google AI Shopping
    Credit: Google

    Google also ran a notable demo in which a presenter uploaded a photo of herself so that AI could create a visual of what she'd look like in a dress. This virtual try-on feature will be available in Google Labs, and it's the IRL version of Cher's Clueless closet. The presenter was then able to use an AI shopping agent to keep tabs on the item's availability and track its price. When the price dropped, the user received a notification of the pricing change (a minimal sketch of this pattern follows at the end of this section). Google said users will be able to try on different looks via AI in Google Labs starting today.

    Android XR

    Google's long-awaited post-Google Glass AR/VR plans were finally presented at Google I/O. The company also unveiled a number of wearable products utilizing its AR/VR operating system, Android XR. One important part of the Android XR announcement is that Google seems to understand the different use cases for an immersive headset and an on-the-go pair of smartglasses, and to have built Android XR to accommodate that. While Samsung has previously teased its Project Moohan XR headset, Google I/O marked the first time that Google revealed the product, which is being built in partnership with the mobile giant and chipmaker Qualcomm. Google shared that the Project Moohan headset should be available later this year.
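    Picking up the price-tracking agent from the shopping demo above: Google has not said how it is implemented, so here is a minimal polling sketch of the behavior it showed. fetch_price() and notify() are hypothetical stand-ins, not a Google API.

```python
# Sketch of a price-tracking agent: poll a product's price and
# notify once it drops to a target. All names are hypothetical.
import time

def fetch_price(product_url: str) -> float:
    return 79.99  # stand-in for a scrape or shopping-API call

def notify(message: str) -> None:
    print(message)  # stand-in for a push notification

def watch(product_url: str, target: float, interval_s: int = 3600) -> None:
    """Poll until the price is at or below target, then notify once."""
    while True:
        price = fetch_price(product_url)
        if price <= target:
            notify(f"Price dropped to ${price:.2f}: {product_url}")
            return
        time.sleep(interval_s)

if __name__ == "__main__":
    watch("https://example.com/dress", target=80.00)
```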


    Project Moohan
    Credit: Google

    In addition to the XR headset, Google announced Glasses with Android XR, smartglasses that incorporate a camera, speakers, and an in-lens display that connects with a user's smartphone. Unlike Google Glass, these smartglasses will incorporate more fashionable looks thanks to partnerships with Gentle Monster and Warby Parker. Google shared that developers will be able to start developing for Glasses starting next year, so it's likely that a release date for the smartglasses will follow after that.

    Gemini

    Easily the star of Google I/O 2025 was the company's AI model, Gemini. Google announced an updated Gemini 2.5 Pro, which it says is its most powerful model yet. The company showed Gemini 2.5 Pro being used to turn sketches into full applications in a demo. Along with that, Google introduced Gemini 2.5 Flash, a more affordable version of the powerful Pro model. The latter will be released in early June, with the former coming out soon after. Google also revealed Gemini 2.5 Pro Deep Think for complex math and coding, which will only be available to "trusted testers" at first. Speaking of coding, Google shared its asynchronous coding agent Jules, which is currently in public beta. Developers will be able to use Jules to tackle codebase tasks and modify files.

    Jules coding agent
    Credit: Google

    Developers will also have access to a new Native Audio Output text-to-speech model, which can replicate the same voice in different languages. The Gemini app will soon see a new Agent Mode, bringing users an AI agent that can research and complete tasks based on a user's prompts. Gemini will also be deeply integrated into Google products like Workspace with Personalized Smart Replies. Gemini will use personal context from documents, emails, and more across a user's Google apps to match their tone, voice, and style when generating automatic replies (a sketch of the idea follows at the end of this section). Workspace users will find the feature available in Gmail this summer. Other features announced for Gemini include Deep Research, which lets users upload their own files to guide the AI agent when asking questions, and Gemini in Chrome, an AI assistant that answers queries using the context of the web page a user is on. The latter feature is rolling out this week for Gemini subscribers in the U.S. Google intends to bring Gemini to all of its devices, including smartwatches, smart cars, and smart TVs.

    Generative AI updates

    Gemini's AI assistant capabilities and language model updates were only a small piece of Google's broader AI puzzle. The company had a slew of generative AI announcements to make, too. Google announced Imagen 4, its latest image generation model. According to Google, Imagen 4 provides richer details and better visuals. In addition, Imagen 4 is apparently much better at generating text and typography in its graphics. This is an area where AI models are notoriously bad, so Imagen 4 appears to be a big step forward.
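    As promised above, here is a sketch of the core idea behind Personalized Smart Replies: condition the draft on the user's own past messages so the reply matches their tone and style. generate() is a hypothetical stand-in for an LLM call; this is not Google's implementation.

```python
# Sketch of style-conditioned reply drafting. generate() is a
# hypothetical stand-in for a large language model API call.

def generate(prompt: str) -> str:
    return "Sounds great, happy to help. I'll send details tonight!"

def personalized_reply(incoming_email: str, past_emails: list[str]) -> str:
    examples = "\n---\n".join(past_emails[-5:])  # a few recent messages
    prompt = (
        "Here are examples of how I write:\n"
        f"{examples}\n\n"
        "Draft a reply in the same tone and style to this email:\n"
        f"{incoming_email}"
    )
    return generate(prompt)

if __name__ == "__main__":
    print(personalized_reply(
        "How did you plan your recent trip?",
        ["Thanks! Sending it over now.", "Happy to help, as always."],
    ))
```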

    Flow AI video tool
    Credit: Google

    A new video generation model, Veo 3, was also unveiled, alongside a video generation tool called Flow. Google claims Veo 3 has a stronger understanding of physics when generating scenes and can also create accompanying sound effects, background noise, and dialogue.


    Both Veo 3 and Flow are available today, alongside a new generative music model called Lyria 2. Google I/O also saw the debut of Gemini Canvas, which Google describes as a co-creation platform.

    Project Starline aka Google Beam

    Another big announcement out of Google I/O: Project Starline is no more. Google's immersive communication project will now be known as Google Beam, an AI-first communication platform. As part of Google Beam, Google announced Google Meet translations, which provide real-time speech translation during meetings on the platform. AI will be able to match a speaker's voice and tone, so it sounds like the translation is coming directly from them. Google Meet translations are available in English and Spanish starting today, with more languages on the way in the coming weeks.

    Google Meet translations
    Credit: Google

    Google also had another work-in-progress project to tease under Google Beam: a 3-D conferencing platform that uses multiple cameras to capture a user from different angles in order to render the individual on a 3-D light-field display.

    Project Astra

    While Project Starline may have undergone a name change, it appears Project Astra is still kicking it at Google, at least for now. Project Astra is Google's real-world universal AI assistant, and Google had plenty to announce as part of it. Gemini Live is a new AI assistant feature that can interact with a user's surroundings via their mobile device's camera and audio input. Users can ask Gemini Live questions about what they're capturing on camera, and the AI assistant will be able to answer queries based on those visuals. According to Google, Gemini Live is rolling out today to Gemini users.

    Gemini Live
    Credit: Google

    It appears Google has plans to implement Project Astra's live AI capabilities into Google Search's AI Mode as a Google Lens visual search enhancement. Google also highlighted some of its hopes for Gemini Live, such as being able to help as an accessibility tool for those with disabilities.

    Project Mariner

    Another of Google's AI projects is Project Mariner, an AI agent that can interact with the web in order to complete tasks for the user. While Project Mariner was previously announced late last year, Google had some updates, such as a multi-tasking feature that would allow an AI agent to work on up to 10 different tasks simultaneously. Another new feature is Teach and Repeat, which would give the AI agent the ability to learn from previously completed tasks in order to complete similar ones without the need for the same detailed direction in the future. Google announced plans to bring these agentic AI capabilities to Chrome, Google Search via AI Mode, and the Gemini app.
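    Google has not explained how Teach and Repeat works internally. One plausible shape is to record a completed task as a parameterized trace of steps and replay it with new values; the sketch below illustrates that idea with entirely hypothetical names.

```python
# Sketch of a "teach and repeat" pattern: record one demonstrated
# task as a reusable trace, then replay it with new parameters.
# Purely illustrative; not how Project Mariner actually works.
from dataclasses import dataclass

@dataclass
class Step:
    action: str   # e.g. "open", "fill", "click"
    target: str   # element selector or URL
    value: str = ""

def record_demo() -> list[Step]:
    # In a real system, these steps would be captured while the
    # agent (or user) completes the task once.
    return [
        Step("open", "https://example.com/tickets"),
        Step("fill", "#search", "{event}"),  # parameter slot
        Step("click", "#buy"),
    ]

def repeat(trace: list[Step], **params: str) -> None:
    """Replay a recorded trace, substituting new parameter values."""
    for step in trace:
        value = step.value.format(**params) if step.value else ""
        print(f"{step.action} {step.target} {value}".strip())

if __name__ == "__main__":
    repeat(record_demo(), event="jazz festival")
```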
    #everything #you #need #know #google
    Everything you need to know from Google I/O 2025
    From the opening AI-influenced intro video set to "You Get What You Give" by New Radicals to CEO Sundar Pichai's sign-off, Google I/O 2025 was packed with news and updates for the tech giant and its products. And when we say packed, we mean it, as this year's Google I/O clocked in at nearly two hours long. During that time, Google shared some big wins for its AI products, such as Gemini topping various categories on the LMArena leaderboard. Another example that Google seemed really proud of was the fact that Gemini completed Pokémon Blue a few weeks ago.But, we know what you're really here for: Product updates and new product announcements. You May Also Like Aside from a few braggadocious moments, Google spent most of those 117 minutes talking about what's coming out next. Google I/O mixes consumer-facing product announcements with more developer-oriented ones, from the latest Gmail updates to Google's powerful new chip, Ironwood, coming to Google Cloud customers later this year. We're going to break down what product updates and announcements you need to know from the full two-hour event, so you can walk away with all the takeaways without spending the same time it takes to watch a major motion picture to learn about them.Before we dive in though, here's the most shocking news out of Google I/O: The subscription pricing that Google has for its Google AI Ultra plan. While Google provides a base subscription at per month, the Ultra plan comes in at a whopping per month for its entire suite of products with the highest rate limits available.Google Search AI ModeGoogle tucked away what will easily be its most visible feature way too far back into the event, but we'll surface it to the top.At Google I/O, Google announced that the new AI Mode feature for Google Search is launching today to everyone in the United States. Basically, it will allow users to use Google's search feature but with longer, more complex queries. Using a "query fan-out technique," AI Mode will be able to break a search into multiple parts in order to process each part of the query, then pull all the information together to present to the user. Google says AI Mode "checks its work" too, but its unclear at this time exactly what that means. Google announces AI Mode in Google Search Credit: Google AI Mode is available now. Later in the summer, Google will launch Personal Context in AI Mode, which will make suggestions based on a user's past searches and other contextual information about the user from other Google products like Gmail. In addition, other new features will soon come to AI Mode, such as Deep Search, which can dive deeper into queries by searching through multiple websites, and data visualization features, which can take the search results and present them in a visual graph when applicable.According to Google, its AI overviews in search are viewed by 1.5 billion users every month, so AI Mode clearly has the largest potential user base out of all of Google's announcements today.AI ShoppingOut of all the announcements at the event, these AI shopping features seemed to spark the biggest reaction from Google I/O live attendees.Connected to AI Mode, Google showed off its Shopping Graph, which includes more than 50 billion products globally. Users can just describe the type of product they are looking for – say a specific type of couch, and Google will present options that match that description. 
    Everything you need to know from Google I/O 2025
From the opening AI-influenced intro video set to "You Get What You Give" by New Radicals to CEO Sundar Pichai's sign-off, Google I/O 2025 was packed with news and updates for the tech giant and its products. And when we say packed, we mean it, as this year's Google I/O clocked in at nearly two hours long. During that time, Google shared some big wins for its AI products, such as Gemini topping various categories on the LMArena leaderboard. Another example that Google seemed really proud of was the fact that Gemini completed Pokémon Blue a few weeks ago. But we know what you're really here for: product updates and new product announcements.

Aside from a few braggadocious moments, Google spent most of those 117 minutes talking about what's coming out next. Google I/O mixes consumer-facing product announcements with more developer-oriented ones, from the latest Gmail updates to Google's powerful new chip, Ironwood, coming to Google Cloud customers later this year. We're going to break down the product updates and announcements you need to know from the full two-hour event, so you can walk away with all the takeaways without spending the time it takes to watch a major motion picture.

Before we dive in, though, here's the most shocking news out of Google I/O: the subscription pricing for Google's AI Ultra plan. While Google offers a base subscription at $19.99 per month, the Ultra plan comes in at a whopping $249.99 per month for its entire suite of products with the highest rate limits available.

Google Search AI Mode

Google tucked what will easily be its most visible feature way too far back into the event, but we'll surface it to the top.

At Google I/O, Google announced that the new AI Mode feature for Google Search is launching today for everyone in the United States. In short, it lets users run Google searches with longer, more complex queries. Using a "query fan-out technique," AI Mode breaks a search into multiple parts, processes each part of the query, then pulls all the information together to present to the user. Google says AI Mode "checks its work" too, but it's unclear at this time exactly what that means.

[Image: Google announces AI Mode in Google Search. Credit: Google]

AI Mode is available now. Later in the summer, Google will launch Personal Context in AI Mode, which will make suggestions based on a user's past searches and other contextual information about the user from other Google products like Gmail. Other new features will soon come to AI Mode as well, such as Deep Search, which can dive deeper into queries by searching through multiple websites, and data visualization features, which can present search results in a visual graph when applicable.

According to Google, its AI Overviews in Search are viewed by 1.5 billion users every month, so AI Mode clearly has the largest potential user base of all of Google's announcements today.
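Google didn't detail how the fan-out works under the hood, but the description maps onto a familiar decompose-then-synthesize pattern. Here's a minimal sketch of that pattern in Python, assuming a hypothetical `llm()` helper that stands in for any text-generation API; none of this is Google's actual implementation.

```python
# Minimal sketch of a "query fan-out" pipeline. Nothing here is Google's
# actual implementation; llm() is a stand-in for any text-generation call.

def llm(prompt: str) -> str:
    """Placeholder for a call to a real language model API."""
    raise NotImplementedError("wire this to a real model")

def ai_mode_answer(query: str) -> str:
    # 1. Fan out: decompose the query into independent sub-questions.
    plan = llm(f"Break this search into independent sub-questions, one per line:\n{query}")
    sub_questions = [line.strip() for line in plan.splitlines() if line.strip()]

    # 2. Process each part (a real system would run a web search per
    #    sub-question; here we just query the model again).
    findings = {q: llm(f"Answer concisely: {q}") for q in sub_questions}

    # 3. Pull everything together into one response.
    notes = "\n".join(f"- {q}: {a}" for q, a in findings.items())
    draft = llm(f"Synthesize one answer to '{query}' from these notes:\n{notes}")

    # 4. "Check its work": one self-review pass. This is one plausible
    #    reading of Google's phrase -- an assumption, not a confirmed detail.
    return llm(
        "Review this answer for claims unsupported by the notes, then "
        f"output a corrected version.\nNotes:\n{notes}\nAnswer:\n{draft}"
    )
```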
AI Shopping

Out of all the announcements at the event, these AI shopping features seemed to spark the biggest reaction from the Google I/O live audience.

Connected to AI Mode, Google showed off its Shopping Graph, which includes more than 50 billion products globally. Users can simply describe the type of product they're looking for, say, a specific type of couch, and Google will present options that match that description.

[Image: Google AI Shopping. Credit: Google]

Google also ran a notable demo in which a presenter uploaded a photo of herself so that AI could create a visual of what she'd look like in a dress. This virtual try-on feature will be available in Google Labs, and it's the IRL version of Cher's Clueless closet.

The presenter was then able to use an AI shopping agent to keep tabs on the item's availability and track its price. When the price dropped, she received a notification of the change.

Google said users will be able to try on different looks via AI in Google Labs starting today.
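The price-tracking behavior described above is, at its core, a watch loop: poll the listing, compare against a target, and fire a notification on a drop. Here's a minimal sketch of that loop; `fetch_price` and `notify` are hypothetical stand-ins for illustration, not any real Google API.

```python
# Sketch of a price-watching agent: poll an item's listing and notify
# when the price drops to the shopper's target. fetch_price and notify
# are hypothetical placeholders, not Google's shopping agent.
import time

def fetch_price(product_url: str) -> float | None:
    """Placeholder: scrape or query a product listing; None if unavailable."""
    raise NotImplementedError("wire this to a real product source")

def notify(message: str) -> None:
    print(message)  # stand-in for a push notification

def watch_price(product_url: str, target: float, interval_s: int = 3600) -> None:
    while True:
        price = fetch_price(product_url)
        if price is None:
            notify(f"{product_url} is currently unavailable")
        elif price <= target:
            notify(f"Price drop: {product_url} is now ${price:.2f}")
            return  # stop watching once the target is hit
        time.sleep(interval_s)
```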
Android XR

Google's long-awaited post-Google Glass AR/VR plans were finally presented at Google I/O, where the company unveiled a number of wearable products built on its AR/VR operating system, Android XR.

One important part of the Android XR announcement is that Google seems to understand the different use cases for an immersive headset versus an on-the-go pair of smart glasses, and has built Android XR to accommodate both.

While Samsung has previously teased its Project Moohan XR headset, Google I/O marked the first time Google revealed the product, which is being built in partnership with the mobile giant and chipmaker Qualcomm. Google shared that the Project Moohan headset should be available later this year.

[Image: Project Moohan. Credit: Google]

In addition to the XR headset, Google announced Glasses with Android XR: smart glasses that incorporate a camera, speakers, and an in-lens display, and that connect with a user's smartphone. Unlike Google Glass, these smart glasses will come in more fashionable designs thanks to partnerships with Gentle Monster and Warby Parker.

Google shared that developers will be able to start building for Glasses next year, so a consumer release date will likely follow after that.

Gemini

Easily the star of Google I/O 2025 was the company's AI model, Gemini. Google announced an updated Gemini 2.5 Pro, which it says is its most powerful model yet; in one demo, the company showed Gemini 2.5 Pro turning sketches into full applications. Alongside it, Google introduced Gemini 2.5 Flash, a more affordable version of the powerful Pro model. Flash will be released in early June, with the updated Pro following soon after. Google also revealed Gemini 2.5 Pro Deep Think for complex math and coding, which will only be available to "trusted testers" at first.

Speaking of coding, Google showed off its asynchronous coding agent Jules, which is currently in public beta. Developers can use Jules to tackle codebase tasks and modify files.

[Image: Jules coding agent. Credit: Google]

Developers will also get access to a new Native Audio Output text-to-speech model, which can replicate the same voice across different languages.

The Gemini app will soon gain a new Agent Mode, giving users an AI agent that can research and complete tasks based on their prompts.

Gemini will also be deeply integrated into Google products like Workspace with Personalized Smart Replies. Gemini will use personal context from documents, emails, and more across a user's Google apps to match their tone, voice, and style when generating automatic replies. Workspace users will find the feature in Gmail this summer.
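Google hasn't published how Personalized Smart Replies works internally, but "use personal context to match tone" is essentially retrieval plus conditioned generation. Below is a hedged sketch of that pattern; `llm` and `retrieve_examples` are hypothetical placeholders, not Workspace APIs.

```python
# Sketch of the idea behind personalized smart replies: retrieve a few of
# the user's own past messages and let them condition the draft, so the
# reply matches their tone. Both helpers below are hypothetical.

def llm(prompt: str) -> str:
    """Placeholder for a call to a real language model API."""
    raise NotImplementedError

def retrieve_examples(user_id: str, incoming: str, k: int = 3) -> list[str]:
    """Placeholder: fetch the user's k most relevant past emails/docs.
    A real system would likely use embedding search over mail and Drive."""
    raise NotImplementedError

def personalized_reply(user_id: str, incoming: str) -> str:
    examples = retrieve_examples(user_id, incoming)
    style_block = "\n---\n".join(examples)
    return llm(
        "Here are messages the user has written before:\n"
        f"{style_block}\n\n"
        "Write a reply to the email below in the same tone, voice, and "
        "style, reusing any relevant facts from the examples:\n"
        f"{incoming}"
    )
```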
Other features announced for Gemini include Deep Research, which lets users upload their own files to guide the AI agent when asking questions, and Gemini in Chrome, an AI assistant that answers queries using the context of the web page the user is on. The latter is rolling out this week for Gemini subscribers in the U.S.

Google intends to bring Gemini to all of its devices, including smartwatches, smart cars, and smart TVs.

Generative AI updates

Gemini's AI assistant capabilities and language model updates were only a small piece of Google's broader AI puzzle. The company had a slew of generative AI announcements to make, too.

Google announced Imagen 4, its latest image generation model. According to Google, Imagen 4 provides richer detail and better visuals, and is apparently much better at generating text and typography in its graphics. This is an area in which AI models are notoriously bad, so Imagen 4 appears to be a big step forward.

[Image: Flow AI video tool. Credit: Google]

A new video generation model, Veo 3, was also unveiled alongside a video creation tool called Flow. Google claims Veo 3 has a stronger understanding of physics when generating scenes and can also create accompanying sound effects, background noise, and dialogue.

Both Veo 3 and Flow are available today, alongside a new generative music model called Lyria 2. Google I/O also saw the debut of Gemini Canvas, which Google describes as a co-creation platform.

Project Starline aka Google Beam

Another big announcement out of Google I/O: Project Starline is no more. Google's immersive communication project will now be known as Google Beam, an AI-first communication platform.

As part of Google Beam, Google announced Google Meet translations, which provide real-time speech translation during meetings on the platform. The AI can match a speaker's voice and tone, so it sounds like the translation is coming directly from them. Google Meet translations are available in English and Spanish starting today, with more languages on the way in the coming weeks.

[Image: Google Meet translations. Credit: Google]

Google also teased another work-in-progress project under Google Beam: a 3D conferencing platform that uses multiple cameras to capture a user from different angles, then renders the person on a 3D light-field display.

Project Astra

While Project Starline may have undergone a name change, Project Astra is still kicking at Google, at least for now. Project Astra is Google's real-world universal AI assistant, and Google had plenty to announce around it.

Gemini Live is a new AI assistant feature that can interact with a user's surroundings via their mobile device's camera and audio input. Users can ask Gemini Live questions about what they're capturing on camera, and the assistant will answer based on those visuals. According to Google, Gemini Live is rolling out today to Gemini users.

[Image: Gemini Live. Credit: Google]

Google also plans to bring Project Astra's live AI capabilities to Google Search's AI Mode as a Google Lens visual search enhancement, and it highlighted some of its hopes for Gemini Live, such as serving as an accessibility tool for people with disabilities.

Project Mariner

Project Mariner is Google's AI agent that can interact with the web to complete tasks on a user's behalf. While it was first announced late last year, Google had some updates, such as a multi-tasking feature that lets the agent work on up to 10 different tasks simultaneously. Another new feature is Teach and Repeat, which gives the agent the ability to learn from a previously completed task and carry out similar ones without needing the same detailed directions again.

Google announced plans to bring these agentic AI capabilities to Chrome, Google Search via AI Mode, and the Gemini app.
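Google didn't explain Teach and Repeat's internals, but the name suggests a record-once, replay-with-new-parameters pattern common in browser agents. The sketch below shows that generic pattern; the booking site, field names, and placeholders are all invented for illustration and are not Mariner's actual implementation.

```python
# Sketch of a "teach and repeat" pattern: record the action trace from one
# supervised run, then reuse it as a template so similar tasks only need
# the changed parameters. Generic agent pattern, not Mariner internals.
from dataclasses import dataclass

@dataclass
class Step:
    action: str       # e.g. "open", "fill", "click"
    target: str       # e.g. a URL or form-field name
    value: str = ""   # may contain {placeholders}

def record_demo() -> list[Step]:
    """Placeholder: a real agent would log the user-guided run here."""
    return [
        Step("open", "https://example.com/booking"),  # hypothetical site
        Step("fill", "destination", "{city}"),
        Step("fill", "date", "{date}"),
        Step("click", "search"),
    ]

def repeat(demo: list[Step], **params: str) -> None:
    """Replay the taught trace with new parameters substituted in."""
    for step in demo:
        value = step.value.format(**params) if step.value else ""
        # Stand-in for real browser actions: just log what would happen.
        print(f"{step.action} {step.target} {value}".strip())

# Repeat the taught task for a new destination without re-teaching it.
repeat(record_demo(), city="Lisbon", date="2025-06-01")
```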