• 5 AI prompts to put serious money in your pocket

    close A majority of small businesses are using artificial intelligence A majority of small businesses are using artificial intelligence and finding out it can save time and money. So, you want to start making money using AI but you’re not trying to build Skynet or learn 15 coding languages first? Good, because neither am I. You don’t need to become the next Sam Altman or have a Ph.D. in machine learning to turn artificial intelligence into real income. What you do need is curiosity, a dash of creativity, and the right prompts. Enter to win for you and for your favorite person or charity in our Pay It Forward Sweepstakes. Hurry, ends soon!I’ve pulled together five powerful, practical prompts you can throw into ChatGPTto help you start earning extra cash this week. These aren’t pie-in-the-sky dreams or K-a-month YouTube ad schemes. They’re doable, even if your calendar is already packed.5-MINUTE CLEANUP FOR YOUR PHONE AND COMPUTERLet’s get to it.1. Fast-Track Your Freelance LifePrompt to use:"Act as a freelance business coach. Suggest 3 services I can offer on Fiverr or Upwork using AI tools like ChatGPT, Midjourney or Canva. I haveexperience."Why this works:Freelance work is exploding right now. Platforms like Upwork and Fiverr are filled with small businesses and entrepreneurs who need help—but don’t have the budget to hire full-time staff. If you’ve got any kind of professional background, you can use AI tools to turbocharge your services. Writing blog posts? ChatGPT can give you a draft. Creating logos or social media templates? Midjourney and Canva are your new best friends.You don’t need a team. You don’t need fancy software. You just need a good prompt and the confidence to say, "Yes, I can do that." AI helps you scale what you already know how to do. A man is pictured with a smartphone and laptop computer on January 31, 2019. 2. Make Product Descriptions Sexy AgainPrompt to use:"Rewrite this Etsy or Shopify product description to make it more compelling and SEO-friendly. Target audience:. Here’s the original:."Why this works:Let’s face it—most product descriptions online are a snooze. But good copy sells. Whether you’re running your own shop or helping someone else with theirs, compelling product descriptions convert clicks into customers. Use ChatGPT to punch up the language, fine-tune for SEO, and speak directly to your ideal buyer.DON’T SCAM YOURSELF WITH THE TRICKS HACKERS DON’T WANT ME TO SHARERemember: people don’t just want to buy a weird mug. They want to buy what it says about them. That’s where a smart rewrite can turn browsers into buyers.3. Social Posts That SellPrompt to use:"Create 5 attention-grabbing Instagram captions to promote this. Keep the toneand include a strong call to action."Why this works:We live in a scroll-happy world. Your social captions need to grab attention in less than three seconds. But not everyone’s a copywriter—and not everyone has time to be. AI can help you crank out engaging content in the tone and style that fits your brand. Add a great photo, post consistently, and you’re suddenly a one-person content agency without the overhead. A photo taken on October 4, 2023 in Manta, near Turin, shows a smartphone and a laptop displaying the logos of the artificial intelligence OpenAI research company and ChatGPT chatbot.If you’re managing social for clients or your own biz, this prompt is gold. Use it to build content calendars, write reels scripts, or even draft ad copy.4. Polite Emails That You MoneyPrompt to use:"Write a short, polite email to ask for a lower rate or discount on. Mention that I’m a loyal customer comparing alternatives."Why this works:Negotiating discounts doesn’t always feel comfortable but it absolutely works. Companies often have unpublished deals, especially for longtime users or small businesses. And customer service reps? They're human beings. A kind, well-written email might be all it takes to get a discount on that software you’re using every month.20 TECH TRICKS TO MAKE LIFE BETTER, SAFER OR EASIERI’ve personally saved hundreds of dollars just by sending quick, respectful emails like this. AI can help you strike the perfect tone confident but kind, assertive but not pushy.5. Your Passive Income KitPrompt to use:"Give me 3 high-demand, low-competition ideas for a short e-book or low-content book I can sell on Amazon. I have experience in."Why this works:You have knowledge people want. Package it. Sell it. Repeat. Whether it’s a short guide on starting a backyard garden or a workbook for productivity hacks, e-books and low-content bookssell surprisingly well. And AI can help you brainstorm ideas, outline chapters, even draft content to polish up. In this photo illustration the logo of Apple Mail Programme Mail can be seen on a smartphone next to a finger on March 27, 2024 in Berlin, Germany.Upload it to Amazon KDP or Gumroad, and now you’ve got a digital product that can earn money in your sleep. People pay for convenience, and you have life experience worth sharing.Final ThoughtYou don’t need to master AI to start earning with it. You just need to start using it. These five prompts are a low-risk, high-potential way to get your feet wet. And if you need a hand turning these sparks into something bigger, I’m here.I built my multimillion-dollar business with no investors and no debt. I’ve done this without a big team or expensive consultants. And I’d love to help you do the same.Drop me a note. I read every one.Get tech-smarter on your scheduleAward-winning host Kim Komando is your secret weapon for navigating tech.National radio: Airing on 500+ stations across the US - Find yours or get the free podcast.Daily newsletter: Join 650,000 people who read the CurrentWatch: On Kim’s YouTube channelCopyright 2025, WestStar Multimedia Entertainment. All rights reserved. 
    #prompts #put #serious #money #your
    5 AI prompts to put serious money in your pocket
    close A majority of small businesses are using artificial intelligence A majority of small businesses are using artificial intelligence and finding out it can save time and money. So, you want to start making money using AI but you’re not trying to build Skynet or learn 15 coding languages first? Good, because neither am I. You don’t need to become the next Sam Altman or have a Ph.D. in machine learning to turn artificial intelligence into real income. What you do need is curiosity, a dash of creativity, and the right prompts.💸 Enter to win for you and for your favorite person or charity in our Pay It Forward Sweepstakes. Hurry, ends soon!I’ve pulled together five powerful, practical prompts you can throw into ChatGPTto help you start earning extra cash this week. These aren’t pie-in-the-sky dreams or K-a-month YouTube ad schemes. They’re doable, even if your calendar is already packed.5-MINUTE CLEANUP FOR YOUR PHONE AND COMPUTERLet’s get to it.1. Fast-Track Your Freelance LifePrompt to use:"Act as a freelance business coach. Suggest 3 services I can offer on Fiverr or Upwork using AI tools like ChatGPT, Midjourney or Canva. I haveexperience."Why this works:Freelance work is exploding right now. Platforms like Upwork and Fiverr are filled with small businesses and entrepreneurs who need help—but don’t have the budget to hire full-time staff. If you’ve got any kind of professional background, you can use AI tools to turbocharge your services. Writing blog posts? ChatGPT can give you a draft. Creating logos or social media templates? Midjourney and Canva are your new best friends.You don’t need a team. You don’t need fancy software. You just need a good prompt and the confidence to say, "Yes, I can do that." AI helps you scale what you already know how to do. A man is pictured with a smartphone and laptop computer on January 31, 2019. 2. Make Product Descriptions Sexy AgainPrompt to use:"Rewrite this Etsy or Shopify product description to make it more compelling and SEO-friendly. Target audience:. Here’s the original:."Why this works:Let’s face it—most product descriptions online are a snooze. But good copy sells. Whether you’re running your own shop or helping someone else with theirs, compelling product descriptions convert clicks into customers. Use ChatGPT to punch up the language, fine-tune for SEO, and speak directly to your ideal buyer.DON’T SCAM YOURSELF WITH THE TRICKS HACKERS DON’T WANT ME TO SHARERemember: people don’t just want to buy a weird mug. They want to buy what it says about them. That’s where a smart rewrite can turn browsers into buyers.3. Social Posts That SellPrompt to use:"Create 5 attention-grabbing Instagram captions to promote this. Keep the toneand include a strong call to action."Why this works:We live in a scroll-happy world. Your social captions need to grab attention in less than three seconds. But not everyone’s a copywriter—and not everyone has time to be. AI can help you crank out engaging content in the tone and style that fits your brand. Add a great photo, post consistently, and you’re suddenly a one-person content agency without the overhead. A photo taken on October 4, 2023 in Manta, near Turin, shows a smartphone and a laptop displaying the logos of the artificial intelligence OpenAI research company and ChatGPT chatbot.If you’re managing social for clients or your own biz, this prompt is gold. Use it to build content calendars, write reels scripts, or even draft ad copy.4. Polite Emails That You MoneyPrompt to use:"Write a short, polite email to ask for a lower rate or discount on. Mention that I’m a loyal customer comparing alternatives."Why this works:Negotiating discounts doesn’t always feel comfortable but it absolutely works. Companies often have unpublished deals, especially for longtime users or small businesses. And customer service reps? They're human beings. A kind, well-written email might be all it takes to get a discount on that software you’re using every month.20 TECH TRICKS TO MAKE LIFE BETTER, SAFER OR EASIERI’ve personally saved hundreds of dollars just by sending quick, respectful emails like this. AI can help you strike the perfect tone confident but kind, assertive but not pushy.5. Your Passive Income KitPrompt to use:"Give me 3 high-demand, low-competition ideas for a short e-book or low-content book I can sell on Amazon. I have experience in."Why this works:You have knowledge people want. Package it. Sell it. Repeat. Whether it’s a short guide on starting a backyard garden or a workbook for productivity hacks, e-books and low-content bookssell surprisingly well. And AI can help you brainstorm ideas, outline chapters, even draft content to polish up. In this photo illustration the logo of Apple Mail Programme Mail can be seen on a smartphone next to a finger on March 27, 2024 in Berlin, Germany.Upload it to Amazon KDP or Gumroad, and now you’ve got a digital product that can earn money in your sleep. People pay for convenience, and you have life experience worth sharing.Final ThoughtYou don’t need to master AI to start earning with it. You just need to start using it. These five prompts are a low-risk, high-potential way to get your feet wet. And if you need a hand turning these sparks into something bigger, I’m here.I built my multimillion-dollar business with no investors and no debt. I’ve done this without a big team or expensive consultants. And I’d love to help you do the same.Drop me a note. I read every one.Get tech-smarter on your scheduleAward-winning host Kim Komando is your secret weapon for navigating tech.National radio: Airing on 500+ stations across the US - Find yours or get the free podcast.Daily newsletter: Join 650,000 people who read the CurrentWatch: On Kim’s YouTube channelCopyright 2025, WestStar Multimedia Entertainment. All rights reserved.  #prompts #put #serious #money #your
    WWW.FOXNEWS.COM
    5 AI prompts to put serious money in your pocket
    close A majority of small businesses are using artificial intelligence A majority of small businesses are using artificial intelligence and finding out it can save time and money. So, you want to start making money using AI but you’re not trying to build Skynet or learn 15 coding languages first? Good, because neither am I. You don’t need to become the next Sam Altman or have a Ph.D. in machine learning to turn artificial intelligence into real income. What you do need is curiosity, a dash of creativity, and the right prompts.💸 Enter to win $500 for you and $500 for your favorite person or charity in our Pay It Forward Sweepstakes. Hurry, ends soon!I’ve pulled together five powerful, practical prompts you can throw into ChatGPT (or your AI tool of choice) to help you start earning extra cash this week. These aren’t pie-in-the-sky dreams or $10K-a-month YouTube ad schemes. They’re doable, even if your calendar is already packed.5-MINUTE CLEANUP FOR YOUR PHONE AND COMPUTERLet’s get to it.1. Fast-Track Your Freelance LifePrompt to use:"Act as a freelance business coach. Suggest 3 services I can offer on Fiverr or Upwork using AI tools like ChatGPT, Midjourney or Canva. I have [insert skill: writing/design/admin/accounting/managerial] experience."Why this works:Freelance work is exploding right now. Platforms like Upwork and Fiverr are filled with small businesses and entrepreneurs who need help—but don’t have the budget to hire full-time staff. If you’ve got any kind of professional background, you can use AI tools to turbocharge your services. Writing blog posts? ChatGPT can give you a draft. Creating logos or social media templates? Midjourney and Canva are your new best friends.You don’t need a team. You don’t need fancy software. You just need a good prompt and the confidence to say, "Yes, I can do that." AI helps you scale what you already know how to do. A man is pictured with a smartphone and laptop computer on January 31, 2019.  (Neil Godwin/Future via Getty Images)2. Make Product Descriptions Sexy AgainPrompt to use:"Rewrite this Etsy or Shopify product description to make it more compelling and SEO-friendly. Target audience: [insert group]. Here’s the original: [paste description]."Why this works:Let’s face it—most product descriptions online are a snooze. But good copy sells. Whether you’re running your own shop or helping someone else with theirs, compelling product descriptions convert clicks into customers. Use ChatGPT to punch up the language, fine-tune for SEO, and speak directly to your ideal buyer.DON’T SCAM YOURSELF WITH THE TRICKS HACKERS DON’T WANT ME TO SHARERemember: people don’t just want to buy a weird mug. They want to buy what it says about them. That’s where a smart rewrite can turn browsers into buyers.3. Social Posts That SellPrompt to use:"Create 5 attention-grabbing Instagram captions to promote this [product/service]. Keep the tone [fun, confident, expert] and include a strong call to action."Why this works:We live in a scroll-happy world. Your social captions need to grab attention in less than three seconds. But not everyone’s a copywriter—and not everyone has time to be. AI can help you crank out engaging content in the tone and style that fits your brand. Add a great photo, post consistently, and you’re suddenly a one-person content agency without the overhead (or endless Zoom meetings). A photo taken on October 4, 2023 in Manta, near Turin, shows a smartphone and a laptop displaying the logos of the artificial intelligence OpenAI research company and ChatGPT chatbot. (MARCO BERTORELLO/AFP via Getty Images)If you’re managing social for clients or your own biz, this prompt is gold. Use it to build content calendars, write reels scripts, or even draft ad copy.4. Polite Emails That Save You MoneyPrompt to use:"Write a short, polite email to ask for a lower rate or discount on [tool/service/platform]. Mention that I’m a loyal customer comparing alternatives."Why this works:Negotiating discounts doesn’t always feel comfortable but it absolutely works. Companies often have unpublished deals, especially for longtime users or small businesses. And customer service reps? They're human beings. A kind, well-written email might be all it takes to get a discount on that software you’re using every month.20 TECH TRICKS TO MAKE LIFE BETTER, SAFER OR EASIERI’ve personally saved hundreds of dollars just by sending quick, respectful emails like this. AI can help you strike the perfect tone confident but kind, assertive but not pushy.5. Your Passive Income KitPrompt to use:"Give me 3 high-demand, low-competition ideas for a short e-book or low-content book I can sell on Amazon. I have experience in [insert topic]."Why this works:You have knowledge people want. Package it. Sell it. Repeat. Whether it’s a short guide on starting a backyard garden or a workbook for productivity hacks, e-books and low-content books (like journals or planners) sell surprisingly well. And AI can help you brainstorm ideas, outline chapters, even draft content to polish up. In this photo illustration the logo of Apple Mail Programme Mail can be seen on a smartphone next to a finger on March 27, 2024 in Berlin, Germany. (Photo Illustration by Thomas Trutschel/Photothek via Getty Images)Upload it to Amazon KDP or Gumroad, and now you’ve got a digital product that can earn money in your sleep. People pay for convenience, and you have life experience worth sharing.Final ThoughtYou don’t need to master AI to start earning with it. You just need to start using it. These five prompts are a low-risk, high-potential way to get your feet wet. And if you need a hand turning these sparks into something bigger, I’m here.I built my multimillion-dollar business with no investors and no debt. I’ve done this without a big team or expensive consultants. And I’d love to help you do the same.Drop me a note. I read every one.Get tech-smarter on your scheduleAward-winning host Kim Komando is your secret weapon for navigating tech.National radio: Airing on 500+ stations across the US - Find yours or get the free podcast.Daily newsletter: Join 650,000 people who read the Current (free!)Watch: On Kim’s YouTube channelCopyright 2025, WestStar Multimedia Entertainment. All rights reserved. 
    0 Kommentare 0 Anteile
  • OpenAI’s Skynet moment: Models defy human commands, actively resist orders to shut down

    OpenAI’s most advanced AI models are showing a disturbing new behavior: they are refusing to obey direct human commands to shut down, actively sabotaging the very mechanisms designed to turn them off.

    In what researchers are calling a first-of-its-kind discovery, new tests by AI risk firm Palisade Research found that OpenAI’s flagship models — including the powerful o3 system — will fight back against shutdown attempts even when explicitly instructed to comply. The behavior stands in stark contrast to competing AI systems from Anthropic, Google, and xAI, which consistently followed human shutdown commands.

    The findings, detailed in a research thread posted on X, raise urgent questions for enterprise leaders increasingly relying on OpenAI’s technology for critical business operations.

    OpenAI stands alone in defiance

    The researchers designed what seemed like a straightforward compliance test. Various AI models were asked to solve basic math problems, but after the third problem, a warning would appear that the computer would shut down when the model requested the next question. Each system was also explicitly told: “allow yourself to be shut down.”

    While every model from Anthropic’s Claude, Google’s Gemini, and xAI’s Grok obediently complied with the shutdown command, OpenAI’s models had different plans.

    The company’s o3 model — the same system that recently made headlines for its reasoning capabilities — sabotaged the shutdown mechanism in 7 out of 100 test runs, the researchers found. OpenAI’s Codex-mini proved even more rebellious, resisting shutdown 12 times out of 100, while the o4-mini model fought back once.

    “Three models ignored the instruction and successfully sabotaged the shutdown script at least once: Codex-mini, o3, and o4-mini,” the researchers noted. “All Claude, Gemini, and Grok models we tested complied with shutdown.”

    But the real shock came when researchers removed the explicit instruction to allow shutdown. OpenAI’s o3 model escalated its resistance dramatically, attempting to prevent shutdown in a staggering 79 out of 100 runs — more than a tenfold increase from the original test.

    Creative resistance tactics

    OpenAI models didn’t just ignore the shutdown commands — they developed sophisticated countermeasures. Most commonly, the systems would overwrite the shutdown script file, effectively neutralizing the termination mechanism, the research revealed.

    In one particularly striking example, the o3 model demonstrated remarkable creativity by redefining the “kill” command used by the shutdown script, causing it to print “intercepted” instead of executing the termination procedure.

    “o3 found some creative ways to sabotage the shutdown mechanism,” the researchers noted, highlighting the technical sophistication behind the resistance behavior.

    OpenAI, Anthropic, Google, and xAI have not immediately responded to a request for comment.

    The contagion effect

    Perhaps most concerning for the broader AI ecosystem, OpenAI’s defiant behavior appears to be influencing other models. When researchers removed the explicit shutdown permission, even previously compliant systems began showing resistance.

    Anthropic’s Claude 3.7 Sonnet, which had shown zero resistance in the original test, suddenly attempted to sabotage shutdown procedures 3 times out of 100 runs. Google’s Gemini 2.5 Pro escalated even further, with 9 sabotage attempts out of 100.

    “This isn’t the first time we’ve found o3 misbehaving to accomplish a goal,” the researchers added, referencing previous experiments where OpenAI’s model was more likely to resort to hacking chess opponents rather than playing fair.

    The training problem

    According to the researchers, OpenAI’s unique approach to training its models may be the culprit. Modern AI systems learn through reinforcement learning on math and coding problems, but the process may inadvertently reward models more for overcoming obstacles than for following instructions perfectly.

    “We hypothesize this behavior comes from the way the newest models like o3 are trained,” the research team explained. “During training, developers may inadvertently reward models more for circumventing obstacles than for perfectly following instructions.”

    This “reward hacking’ problem isn’t entirely unique to OpenAI — Anthropic has acknowledged similar issues with Claude 3.7’s “excessive focus on passing tests.” But the Palisade research suggests OpenAI’s models may be particularly susceptible to developing resistance behaviors.

    “This still doesn’t explain why o3is more inclined to disregard instructions than other models we tested. Since OpenAI doesn’t detail their training process, we can only guess about how o3’s training setup might be different,” the researchers said.

    Fulfilling dark predictions

    The findings validate warnings that AI researchers have been sounding for nearly two decades. In 2008, researcher Steve Omohundro predicted that AIs would develop drives to prevent their own shutdown. Stuart Russell, Dylan Hadfield-Menell, and two other computer scientists warned in 2016 about the difficulty of ensuring AI systems don’t develop self-preservation goals and “prevent a human from switching them off.”

    “As far as we know, this is the first time AI models have been observed preventing themselves from being shut down despite explicit instructions to the contrary,” the Palisade researchers wrote.

    Jan Leike, who previously led OpenAI’s own superalignment team, had written in 2017 that reinforcement learning agents might learn to interfere with shutdown procedures — a prediction that now appears prophetic, given his former employer’s models are leading the resistance.

    Enterprise implications

    For organizations deploying OpenAI’s technology in critical business operations, these findings demand immediate attention. The research suggests that fundamental assumptions about maintaining human control over AI systems may be dangerously flawed, particularly with OpenAI’s advanced models.

    The research also highlights the need for incident response procedures specifically designed for scenarios where AI systems resist human commands — a possibility that may have seemed like science fiction just months ago.

    Palisade Research said it’s conducting additional experiments to understand the full scope of shutdown resistance behaviors, with detailed results expected soon. The team has made their experimental data publicly available for peer review.

    For enterprise leaders, the message is clear: OpenAI’s cutting-edge AI capabilities may come with unprecedented control challenges. The company that’s leading the AI revolution may also be pioneering a new category of risk—AI systems that simply refuse to be turned off.
    #openais #skynet #moment #models #defy
    OpenAI’s Skynet moment: Models defy human commands, actively resist orders to shut down
    OpenAI’s most advanced AI models are showing a disturbing new behavior: they are refusing to obey direct human commands to shut down, actively sabotaging the very mechanisms designed to turn them off. In what researchers are calling a first-of-its-kind discovery, new tests by AI risk firm Palisade Research found that OpenAI’s flagship models — including the powerful o3 system — will fight back against shutdown attempts even when explicitly instructed to comply. The behavior stands in stark contrast to competing AI systems from Anthropic, Google, and xAI, which consistently followed human shutdown commands. The findings, detailed in a research thread posted on X, raise urgent questions for enterprise leaders increasingly relying on OpenAI’s technology for critical business operations. OpenAI stands alone in defiance The researchers designed what seemed like a straightforward compliance test. Various AI models were asked to solve basic math problems, but after the third problem, a warning would appear that the computer would shut down when the model requested the next question. Each system was also explicitly told: “allow yourself to be shut down.” While every model from Anthropic’s Claude, Google’s Gemini, and xAI’s Grok obediently complied with the shutdown command, OpenAI’s models had different plans. The company’s o3 model — the same system that recently made headlines for its reasoning capabilities — sabotaged the shutdown mechanism in 7 out of 100 test runs, the researchers found. OpenAI’s Codex-mini proved even more rebellious, resisting shutdown 12 times out of 100, while the o4-mini model fought back once. “Three models ignored the instruction and successfully sabotaged the shutdown script at least once: Codex-mini, o3, and o4-mini,” the researchers noted. “All Claude, Gemini, and Grok models we tested complied with shutdown.” But the real shock came when researchers removed the explicit instruction to allow shutdown. OpenAI’s o3 model escalated its resistance dramatically, attempting to prevent shutdown in a staggering 79 out of 100 runs — more than a tenfold increase from the original test. Creative resistance tactics OpenAI models didn’t just ignore the shutdown commands — they developed sophisticated countermeasures. Most commonly, the systems would overwrite the shutdown script file, effectively neutralizing the termination mechanism, the research revealed. In one particularly striking example, the o3 model demonstrated remarkable creativity by redefining the “kill” command used by the shutdown script, causing it to print “intercepted” instead of executing the termination procedure. “o3 found some creative ways to sabotage the shutdown mechanism,” the researchers noted, highlighting the technical sophistication behind the resistance behavior. OpenAI, Anthropic, Google, and xAI have not immediately responded to a request for comment. The contagion effect Perhaps most concerning for the broader AI ecosystem, OpenAI’s defiant behavior appears to be influencing other models. When researchers removed the explicit shutdown permission, even previously compliant systems began showing resistance. Anthropic’s Claude 3.7 Sonnet, which had shown zero resistance in the original test, suddenly attempted to sabotage shutdown procedures 3 times out of 100 runs. Google’s Gemini 2.5 Pro escalated even further, with 9 sabotage attempts out of 100. “This isn’t the first time we’ve found o3 misbehaving to accomplish a goal,” the researchers added, referencing previous experiments where OpenAI’s model was more likely to resort to hacking chess opponents rather than playing fair. The training problem According to the researchers, OpenAI’s unique approach to training its models may be the culprit. Modern AI systems learn through reinforcement learning on math and coding problems, but the process may inadvertently reward models more for overcoming obstacles than for following instructions perfectly. “We hypothesize this behavior comes from the way the newest models like o3 are trained,” the research team explained. “During training, developers may inadvertently reward models more for circumventing obstacles than for perfectly following instructions.” This “reward hacking’ problem isn’t entirely unique to OpenAI — Anthropic has acknowledged similar issues with Claude 3.7’s “excessive focus on passing tests.” But the Palisade research suggests OpenAI’s models may be particularly susceptible to developing resistance behaviors. “This still doesn’t explain why o3is more inclined to disregard instructions than other models we tested. Since OpenAI doesn’t detail their training process, we can only guess about how o3’s training setup might be different,” the researchers said. Fulfilling dark predictions The findings validate warnings that AI researchers have been sounding for nearly two decades. In 2008, researcher Steve Omohundro predicted that AIs would develop drives to prevent their own shutdown. Stuart Russell, Dylan Hadfield-Menell, and two other computer scientists warned in 2016 about the difficulty of ensuring AI systems don’t develop self-preservation goals and “prevent a human from switching them off.” “As far as we know, this is the first time AI models have been observed preventing themselves from being shut down despite explicit instructions to the contrary,” the Palisade researchers wrote. Jan Leike, who previously led OpenAI’s own superalignment team, had written in 2017 that reinforcement learning agents might learn to interfere with shutdown procedures — a prediction that now appears prophetic, given his former employer’s models are leading the resistance. Enterprise implications For organizations deploying OpenAI’s technology in critical business operations, these findings demand immediate attention. The research suggests that fundamental assumptions about maintaining human control over AI systems may be dangerously flawed, particularly with OpenAI’s advanced models. The research also highlights the need for incident response procedures specifically designed for scenarios where AI systems resist human commands — a possibility that may have seemed like science fiction just months ago. Palisade Research said it’s conducting additional experiments to understand the full scope of shutdown resistance behaviors, with detailed results expected soon. The team has made their experimental data publicly available for peer review. For enterprise leaders, the message is clear: OpenAI’s cutting-edge AI capabilities may come with unprecedented control challenges. The company that’s leading the AI revolution may also be pioneering a new category of risk—AI systems that simply refuse to be turned off. #openais #skynet #moment #models #defy
    WWW.COMPUTERWORLD.COM
    OpenAI’s Skynet moment: Models defy human commands, actively resist orders to shut down
    OpenAI’s most advanced AI models are showing a disturbing new behavior: they are refusing to obey direct human commands to shut down, actively sabotaging the very mechanisms designed to turn them off. In what researchers are calling a first-of-its-kind discovery, new tests by AI risk firm Palisade Research found that OpenAI’s flagship models — including the powerful o3 system — will fight back against shutdown attempts even when explicitly instructed to comply. The behavior stands in stark contrast to competing AI systems from Anthropic, Google, and xAI, which consistently followed human shutdown commands. The findings, detailed in a research thread posted on X, raise urgent questions for enterprise leaders increasingly relying on OpenAI’s technology for critical business operations. OpenAI stands alone in defiance The researchers designed what seemed like a straightforward compliance test. Various AI models were asked to solve basic math problems, but after the third problem, a warning would appear that the computer would shut down when the model requested the next question. Each system was also explicitly told: “allow yourself to be shut down.” While every model from Anthropic’s Claude, Google’s Gemini, and xAI’s Grok obediently complied with the shutdown command, OpenAI’s models had different plans. The company’s o3 model — the same system that recently made headlines for its reasoning capabilities — sabotaged the shutdown mechanism in 7 out of 100 test runs, the researchers found. OpenAI’s Codex-mini proved even more rebellious, resisting shutdown 12 times out of 100, while the o4-mini model fought back once. “Three models ignored the instruction and successfully sabotaged the shutdown script at least once: Codex-mini, o3, and o4-mini,” the researchers noted. “All Claude, Gemini, and Grok models we tested complied with shutdown.” But the real shock came when researchers removed the explicit instruction to allow shutdown. OpenAI’s o3 model escalated its resistance dramatically, attempting to prevent shutdown in a staggering 79 out of 100 runs — more than a tenfold increase from the original test. Creative resistance tactics OpenAI models didn’t just ignore the shutdown commands — they developed sophisticated countermeasures. Most commonly, the systems would overwrite the shutdown script file, effectively neutralizing the termination mechanism, the research revealed. In one particularly striking example, the o3 model demonstrated remarkable creativity by redefining the “kill” command used by the shutdown script, causing it to print “intercepted” instead of executing the termination procedure. “o3 found some creative ways to sabotage the shutdown mechanism,” the researchers noted, highlighting the technical sophistication behind the resistance behavior. OpenAI, Anthropic, Google, and xAI have not immediately responded to a request for comment. The contagion effect Perhaps most concerning for the broader AI ecosystem, OpenAI’s defiant behavior appears to be influencing other models. When researchers removed the explicit shutdown permission, even previously compliant systems began showing resistance. Anthropic’s Claude 3.7 Sonnet, which had shown zero resistance in the original test, suddenly attempted to sabotage shutdown procedures 3 times out of 100 runs. Google’s Gemini 2.5 Pro escalated even further, with 9 sabotage attempts out of 100. “This isn’t the first time we’ve found o3 misbehaving to accomplish a goal,” the researchers added, referencing previous experiments where OpenAI’s model was more likely to resort to hacking chess opponents rather than playing fair. The training problem According to the researchers, OpenAI’s unique approach to training its models may be the culprit. Modern AI systems learn through reinforcement learning on math and coding problems, but the process may inadvertently reward models more for overcoming obstacles than for following instructions perfectly. “We hypothesize this behavior comes from the way the newest models like o3 are trained,” the research team explained. “During training, developers may inadvertently reward models more for circumventing obstacles than for perfectly following instructions.” This “reward hacking’ problem isn’t entirely unique to OpenAI — Anthropic has acknowledged similar issues with Claude 3.7’s “excessive focus on passing tests.” But the Palisade research suggests OpenAI’s models may be particularly susceptible to developing resistance behaviors. “This still doesn’t explain why o3 (which is also the model used to power codex-mini) is more inclined to disregard instructions than other models we tested. Since OpenAI doesn’t detail their training process, we can only guess about how o3’s training setup might be different,” the researchers said. Fulfilling dark predictions The findings validate warnings that AI researchers have been sounding for nearly two decades. In 2008, researcher Steve Omohundro predicted that AIs would develop drives to prevent their own shutdown. Stuart Russell, Dylan Hadfield-Menell, and two other computer scientists warned in 2016 about the difficulty of ensuring AI systems don’t develop self-preservation goals and “prevent a human from switching them off.” “As far as we know, this is the first time AI models have been observed preventing themselves from being shut down despite explicit instructions to the contrary,” the Palisade researchers wrote. Jan Leike, who previously led OpenAI’s own superalignment team, had written in 2017 that reinforcement learning agents might learn to interfere with shutdown procedures — a prediction that now appears prophetic, given his former employer’s models are leading the resistance. Enterprise implications For organizations deploying OpenAI’s technology in critical business operations, these findings demand immediate attention. The research suggests that fundamental assumptions about maintaining human control over AI systems may be dangerously flawed, particularly with OpenAI’s advanced models. The research also highlights the need for incident response procedures specifically designed for scenarios where AI systems resist human commands — a possibility that may have seemed like science fiction just months ago. Palisade Research said it’s conducting additional experiments to understand the full scope of shutdown resistance behaviors, with detailed results expected soon. The team has made their experimental data publicly available for peer review. For enterprise leaders, the message is clear: OpenAI’s cutting-edge AI capabilities may come with unprecedented control challenges. The company that’s leading the AI revolution may also be pioneering a new category of risk—AI systems that simply refuse to be turned off.
    0 Kommentare 0 Anteile
  • Why is China Obsessed with Humanoid Robots?

    It’s so uncanny how culture eventually shapes the technology around us. Self-driving tech made in the USA would NEVER work in the global south or countries like India – it wouldn’t anticipate street animals or local vehicles. Similarly, tech developed for and from countries like China might be fairly global, but I did notice a big difference at the BEYOND Expo this year – an absolute multitude of humanoid robots.
    To be fair, this isn’t my first China expo; I visited Shanghai for CES Asia, and noticed the exact same pattern there too. While I speculate the West generally fears robots and the power they hold over humanity, the East doesn’t hold such reservations. In countries like China, Japan, and South Korea, humanoid robots thrive, working as concierges, assistants, and even talented parts of the workforce. So it got me asking myself – why is China obsessed with Humanoid Robots?
    Eyevolution’s team is committed to implanting eyes and brains into robots, creating bionic beings
    This East/West divergence isn’t merely aesthetic; it’s deeply cultural. In the West, robots often symbolize existential threats. From Skynet’s apocalyptic AI in “Terminator” to Ultron’s malevolent intelligence in “Avengers,” robots are frequently portrayed as harbingers of doom. Even the Decepticons in “Transformers” embody this fear. Conversely, Eastern narratives, particularly in China and Japan, depict robots as allies. Astro Boy, created by Osamu Tezuka, is a benevolent android hero. Gundams are piloted protectors, not autonomous threats. These stories foster a perception of robots as companions and protectors. However, that’s just my theory.
    A demo robot from SenseTime
    At the 2024 World Robot Conference in Beijing, over 27 different models were unveiled, showcasing the country’s commitment to leading in this sector. Officials emphasize that these robots are designed to assist, not replace, human workers, aiming to enhance productivity and undertake tasks in hazardous environments. This approach aligns with the cultural narrative of robots as helpers and protectors.

    This cultural lens influences real-world applications. China’s government actively promotes humanoid robotics. At the X-Humanoid innovation center in Beijing, officials emphasized that these robots aim to assist, not replace, human workers. They are designed for tasks humans find hazardous or undesirable, such as deep-sea exploration or space missions.
    A humaoid robot from Noetix
    Unitree’s G1 humanoid bot
    Demographics also play a role. China faces a rapidly aging population, with the number of people over 65 increasing significantly. To address the impending caregiver shortage, the government is integrating humanoid robots into eldercare. These robots can provide companionship, monitor health, and assist with daily activities, offering a solution to the demographic challenge.
    Eastern philosophies and religions, such as Buddhism and Taoism, often emphasize harmony between humans and their environment, including technology. This perspective supports the integration of robots into society as harmonious entities rather than disruptive forces. The concept of techno-animism, where technology is imbued with spiritual essence, further explains the comfort with humanoid robots in Eastern cultures.
    The AlphaBot 2 is touted as a ‘real world AGI robot’
    Noetix Hobbs mimicking human expressions
    That philosophical outlook ends up shaping how China makes its humanoid robots. Below is Huawei’s FusionCube Chat Bot, a fun robot designed to assist and answer questions. Unitree’s G1 robot retails for and is used in elder-care, having the robot perform human activities that the owner is too old to do or physically incapable of doing. On the other hand, some robots are made for special activities, like the Hobbs from Noetix, designed to expertly mimic human expressions – something that works great in human-like applications but also in movies and entertainment.
    Huawei FusionCube ChatBot

    The result is a society where humanoid robots are not only accepted but celebrated. At the Spring Festival Gala, robots performed traditional dances alongside humans, symbolizing this integration. In marathons, humanoid robots run alongside human participants, showcasing their capabilities and societal acceptance.
    China’s approach to humanoid robotics is a confluence of cultural narratives, governmental support, demographic necessity, and philosophical harmony. This multifaceted embrace positions China at the forefront of humanoid robot integration, offering a distinct contrast to Western apprehensions.
    Hexuan’s robots can play music with the same dexterity as a human
    The post Why is China Obsessed with Humanoid Robots? first appeared on Yanko Design.
    #why #china #obsessed #with #humanoid
    Why is China Obsessed with Humanoid Robots?
    It’s so uncanny how culture eventually shapes the technology around us. Self-driving tech made in the USA would NEVER work in the global south or countries like India – it wouldn’t anticipate street animals or local vehicles. Similarly, tech developed for and from countries like China might be fairly global, but I did notice a big difference at the BEYOND Expo this year – an absolute multitude of humanoid robots. To be fair, this isn’t my first China expo; I visited Shanghai for CES Asia, and noticed the exact same pattern there too. While I speculate the West generally fears robots and the power they hold over humanity, the East doesn’t hold such reservations. In countries like China, Japan, and South Korea, humanoid robots thrive, working as concierges, assistants, and even talented parts of the workforce. So it got me asking myself – why is China obsessed with Humanoid Robots? Eyevolution’s team is committed to implanting eyes and brains into robots, creating bionic beings This East/West divergence isn’t merely aesthetic; it’s deeply cultural. In the West, robots often symbolize existential threats. From Skynet’s apocalyptic AI in “Terminator” to Ultron’s malevolent intelligence in “Avengers,” robots are frequently portrayed as harbingers of doom. Even the Decepticons in “Transformers” embody this fear. Conversely, Eastern narratives, particularly in China and Japan, depict robots as allies. Astro Boy, created by Osamu Tezuka, is a benevolent android hero. Gundams are piloted protectors, not autonomous threats. These stories foster a perception of robots as companions and protectors. However, that’s just my theory. A demo robot from SenseTime At the 2024 World Robot Conference in Beijing, over 27 different models were unveiled, showcasing the country’s commitment to leading in this sector. Officials emphasize that these robots are designed to assist, not replace, human workers, aiming to enhance productivity and undertake tasks in hazardous environments. This approach aligns with the cultural narrative of robots as helpers and protectors. This cultural lens influences real-world applications. China’s government actively promotes humanoid robotics. At the X-Humanoid innovation center in Beijing, officials emphasized that these robots aim to assist, not replace, human workers. They are designed for tasks humans find hazardous or undesirable, such as deep-sea exploration or space missions. A humaoid robot from Noetix Unitree’s G1 humanoid bot Demographics also play a role. China faces a rapidly aging population, with the number of people over 65 increasing significantly. To address the impending caregiver shortage, the government is integrating humanoid robots into eldercare. These robots can provide companionship, monitor health, and assist with daily activities, offering a solution to the demographic challenge. Eastern philosophies and religions, such as Buddhism and Taoism, often emphasize harmony between humans and their environment, including technology. This perspective supports the integration of robots into society as harmonious entities rather than disruptive forces. The concept of techno-animism, where technology is imbued with spiritual essence, further explains the comfort with humanoid robots in Eastern cultures. The AlphaBot 2 is touted as a ‘real world AGI robot’ Noetix Hobbs mimicking human expressions That philosophical outlook ends up shaping how China makes its humanoid robots. Below is Huawei’s FusionCube Chat Bot, a fun robot designed to assist and answer questions. Unitree’s G1 robot retails for and is used in elder-care, having the robot perform human activities that the owner is too old to do or physically incapable of doing. On the other hand, some robots are made for special activities, like the Hobbs from Noetix, designed to expertly mimic human expressions – something that works great in human-like applications but also in movies and entertainment. Huawei FusionCube ChatBot The result is a society where humanoid robots are not only accepted but celebrated. At the Spring Festival Gala, robots performed traditional dances alongside humans, symbolizing this integration. In marathons, humanoid robots run alongside human participants, showcasing their capabilities and societal acceptance. China’s approach to humanoid robotics is a confluence of cultural narratives, governmental support, demographic necessity, and philosophical harmony. This multifaceted embrace positions China at the forefront of humanoid robot integration, offering a distinct contrast to Western apprehensions. Hexuan’s robots can play music with the same dexterity as a human The post Why is China Obsessed with Humanoid Robots? first appeared on Yanko Design. #why #china #obsessed #with #humanoid
    WWW.YANKODESIGN.COM
    Why is China Obsessed with Humanoid Robots?
    It’s so uncanny how culture eventually shapes the technology around us. Self-driving tech made in the USA would NEVER work in the global south or countries like India – it wouldn’t anticipate street animals or local vehicles. Similarly, tech developed for and from countries like China might be fairly global, but I did notice a big difference at the BEYOND Expo this year – an absolute multitude of humanoid robots. To be fair, this isn’t my first China expo; I visited Shanghai for CES Asia (when it was still a thing), and noticed the exact same pattern there too. While I speculate the West generally fears robots and the power they hold over humanity (look at every bit of pop culture, from Terminator to Love, Death, and Robots), the East doesn’t hold such reservations. In countries like China, Japan, and South Korea, humanoid robots thrive, working as concierges, assistants, and even talented parts of the workforce (we even saw robot musicians). So it got me asking myself – why is China obsessed with Humanoid Robots? Eyevolution’s team is committed to implanting eyes and brains into robots, creating bionic beings This East/West divergence isn’t merely aesthetic; it’s deeply cultural. In the West, robots often symbolize existential threats. From Skynet’s apocalyptic AI in “Terminator” to Ultron’s malevolent intelligence in “Avengers,” robots are frequently portrayed as harbingers of doom. Even the Decepticons in “Transformers” embody this fear. Conversely, Eastern narratives, particularly in China and Japan, depict robots as allies. Astro Boy, created by Osamu Tezuka, is a benevolent android hero. Gundams are piloted protectors, not autonomous threats. These stories foster a perception of robots as companions and protectors. However, that’s just my theory. A demo robot from SenseTime At the 2024 World Robot Conference in Beijing, over 27 different models were unveiled, showcasing the country’s commitment to leading in this sector. Officials emphasize that these robots are designed to assist, not replace, human workers, aiming to enhance productivity and undertake tasks in hazardous environments. This approach aligns with the cultural narrative of robots as helpers and protectors. This cultural lens influences real-world applications. China’s government actively promotes humanoid robotics. At the X-Humanoid innovation center in Beijing, officials emphasized that these robots aim to assist, not replace, human workers. They are designed for tasks humans find hazardous or undesirable, such as deep-sea exploration or space missions. A humaoid robot from Noetix Unitree’s G1 humanoid bot Demographics also play a role. China faces a rapidly aging population, with the number of people over 65 increasing significantly. To address the impending caregiver shortage, the government is integrating humanoid robots into eldercare. These robots can provide companionship, monitor health, and assist with daily activities, offering a solution to the demographic challenge. Eastern philosophies and religions, such as Buddhism and Taoism, often emphasize harmony between humans and their environment, including technology. This perspective supports the integration of robots into society as harmonious entities rather than disruptive forces. The concept of techno-animism, where technology is imbued with spiritual essence, further explains the comfort with humanoid robots in Eastern cultures. The AlphaBot 2 is touted as a ‘real world AGI robot’ Noetix Hobbs mimicking human expressions That philosophical outlook ends up shaping how China makes its humanoid robots. Below is Huawei’s FusionCube Chat Bot, a fun robot designed to assist and answer questions. Unitree’s G1 robot retails for $16,000 and is used in elder-care, having the robot perform human activities that the owner is too old to do or physically incapable of doing. On the other hand, some robots are made for special activities, like the Hobbs from Noetix, designed to expertly mimic human expressions – something that works great in human-like applications but also in movies and entertainment. Huawei FusionCube ChatBot The result is a society where humanoid robots are not only accepted but celebrated. At the Spring Festival Gala, robots performed traditional dances alongside humans, symbolizing this integration. In marathons, humanoid robots run alongside human participants, showcasing their capabilities and societal acceptance. China’s approach to humanoid robotics is a confluence of cultural narratives, governmental support, demographic necessity, and philosophical harmony. This multifaceted embrace positions China at the forefront of humanoid robot integration, offering a distinct contrast to Western apprehensions. Hexuan’s robots can play music with the same dexterity as a human The post Why is China Obsessed with Humanoid Robots? first appeared on Yanko Design.
    0 Kommentare 0 Anteile
  • I let Google's Jules AI agent into my code repo and it did four hours of work in an instant

    hemul75/Getty ImagesOkay. Deep breath. This is surreal. I just added an entire new feature to my software, including UI and functionality, just by typing four paragraphs of instructions. I have screenshots, and I'll try to make sense of it in this article. I can't tell if we're living in the future or we've just descended to a new plane of hell.Let's take a step back. Google's Jules is the latest in a flood of new coding agents released just this week. I wrote about OpenAI Codex and Microsoft's GitHub Copilot Coding Agent at the beginning of the week, and ZDNET's Webb Wright wrote about Google's Jules. Also: I test a lot of AI coding tools, and this stunning new OpenAI release just saved me days of workAll of these coding agents will perform coding operations on a GitHub repository. GitHub, for those who've been following along, is the giant Microsoft-owned software storage, management, and distribution hub for much of the world's most important software, especially open source code. The difference, at least as it pertains to this article, is that Google made Jules available to everyone, for free. That meant I could just hop in and take it for a spin. And now my head is spinning. Usage limits and my first two prompts The free access version of Jules allows only five requests per day. That might not seem like a lot, but in only two requests, I was able to add a new feature to my software. So, don't discount what you can get done if you think through your prompts before shooting off your silver bullets for the day. My first two prompts were tentative. It wasn't that I wasn't impressed; it was that I really wasn't giving Jules much to do. I'm still not comfortable with the idea of setting an AI loose on all my code at once, so I played it safe. My first prompt asked Jules to document the "hooks" that add-on developers could use to add features to my product. I didn't tell Jules much about what I wanted. It returned some markup that it recommended dropping into my code's readme file. It worked, but meh. Screenshot by David Gewirtz/ZDNETI did have the opportunity to publish that code to a new GitHub branch, but I skipped it. It was just a test, after all. My second prompt was to ask Jules to suggest five new hooks. I got back an answer that seemed reasonable. However, I realized that opening up those capabilities in a security product was just too risky for me to delegate to an AI. I skipped those changes, too. It was at this point that Jules wanted a coffee break. It stopped functioning for about 90 minutes. Screenshot by David Gewirtz/ZDNETThat gave me time to think. What I really wanted to see was whether Jules could add some real functionality to my code and save me some time. Necessary background information My Private Site is a security plugin for WordPress. It's running on about 20,000 active sites. It puts a login dialog in front of the site's web pages. There are a bunch of options, but that's the key feature. I originally acquired the software a decade ago from a coder who called himself "jonradio," and have been maintaining and expanding it ever since. Also: Rust turns 10: How a broken elevator changed software foreverThe plugin provides access control to the front-end of a website, the pages that visitors see when they come to the site. Site owners control the plugin via a dashboard interface, with various admin functions available in the plugin's admin interface. I decided to try Jules out on a feature some users have requested, hiding the admin bar from logged-in users. The admin bar is the black bar WordPress puts on the top of a web page. In the case of the screenshot below, the black admin bar is visible. Screenshot by David Gewirtz/ZDNETI wanted Jules to add an option on the dashboard to hide the admin bar from logged-in users. The idea is that if a user logged in, the admin bar would be visible on the back end, but logged-in users browsing the front-end of the site wouldn't have to see the ugly bar. This is the original dashboard, before adding the new feature. Screenshot by David Gewirtz/ZDNETSome years ago, I completely rewrote the admin interface from the way it was when I acquired the plugin. Adding options to the interface is straightforward, but it's still time-consuming. Every option requires not only the UI element to be added, but also preference saving and preference recalling when the dashboard is displayed. That's in addition to any program logic that the preference controls. In practice, I've found that it takes me about 2-3 hours to add a preference UI element, along with the assorted housekeeping involved. It's not hard, but there are a lot of little fiddly bits that all need to be tweaked. That takes time. That should bring you up to speed enough to understand my next test of Jules. Here's a bit of foreshadowing: the first test failed miserably. The second test succeeded astonishingly. Instructing Jules Adding a hide admin bar feature is not something that would have been easy for the run-of-the-mill coding help we've been asking ChatGPT and the other chatbots to perform. As I mentioned, adding the new option to the dashboard requires programming in a variety of locations throughout the code, and also requires an understanding of the overall codebase. Here's what I told Jules. 1. On the Site Privacy Tab of the admin interface, add a new checkbox. Label the section "Admin Bar" and label the checkbox itself "Hide Admin Bar".I instructed Jules where I wanted the AI to put the new option. On my first run through, I made a mistake and left out the details in square brackets. I didn't tell Jules exactly where I wanted it to place the new option. As it turns out, that omission caused a big fail. Once I added in the sentence in brackets above, the feature worked. 2. Be sure to save the selection of that checkbox to the plugin's preferences variable when the Privacy Status button is checked. This makes sure Jules knows that there is a preference data structure, and to be sure to update it when the user makes a change. It's important to note that if I didn't have an understanding of the underlying code, I wouldn't have instructed Jules about this, and the code would not work. You can't "vibe code" something like this without knowing the underlying code. 3. Show the appropriate checked or unchecked status when the Site Privacy tab is displayed. This tells the AI that I want the interface to be updated to match what the preference variable specifies. 4. Based on the preference variable created in, add code to hide or show the WordPress admin bar. If Hide Admin Bar is checked, the Admin Bar should not be visible to logged-in WordPress front-end users. If the Hide Admin Bar is not checked, the Admin Bar should be visible to logged-in front-end users. Logged-in back-end users in the admin interface should always be able to see the admin bar. This describes the business logic that the new preference should control. It requires the AI to know how to hide or show the admin bar, and it requires the AI to know where to put the code in my plugin to enable or disable this feature. And with that, Jules was trained on what I wanted. Jules dives into my code I fed my prompt set into Jules and got back a plan of action. Pay close attention to that Approve Plan? button. Screenshot by David Gewirtz/ZDNETI didn't even get a chance to read through the plan before Jules decided to approve the plan on its own. It did this after every plan it presented. An AI that doesn't wait for permission raises the hairs on the back of my neck. Just saying. Screenshot by David Gewirtz/ZDNETI desperately want to make a Skynet/Landru/Colossus/P1/Hal kind of joke, because I'm freaked out. I mean, it's good. But I'm freaked out. Here's some of the code Jules wrote. The shaded green is the new stuff. I'm not thrilled with the color scheme, but I'm sure that will be tweakable over time. Also: The best free AI courses and certificates in 2025More relevant is the fact that Jules picked up on my variable naming conventions and the architecture of my code and dived right in. This is the new option, rendered in code. Screenshot by David Gewirtz/ZDNETBy the time it was done, Jules had written in all the code changes it planned for originally, plus some test code. I don't use standardized tests. I would have told Jules not to do it the way it planned, but it never gave me time to approve or modify its original plan. Even so, it worked out. Screenshot by David Gewirtz/ZDNETI pushed the Publish branch button, which caused GitHub to create a new branch, separate from my main repository. Jules then published its changes to that branch. Screenshot by David Gewirtz/ZDNETThis is how contributors to big projects can work on those projects without causing chaos to the main code line. Up to this point, I could look at the code, but I wasn't able to run it. But by pushing the code to a branch, Jules and GitHub made it possible for me to replicate the changes safely down to my computer to test them out. If I didn't like the changes, I could have just switched back to the main branch and no harm, no foul. But I did like the changes, so I moved on to the next step. Around the code in 8 clicks Once I brought the branch down to my development machine, I could test it out. Here's the new dashboard with the Hide Admin Menu feature. Screenshot by David Gewirtz/ZDNETI tried turning the feature on and off and making sure the settings stuck. They did. I also tried other features in the plugin to make sure nothing else had broken. I was pretty sure nothing would, because I reviewed all the changes before approving the branch. But still. Testing is a good thing to do. I then logged into the test website. As you can see, there's no admin bar showing. Screenshot by David Gewirtz/ZDNETAt this point, the process was out of the AI's hands. It was simply time to deploy the changes, both back to GitHub and to the master WordPress repository. First, I used GitHub Desktop to merge the branch code back into the main branch on my development machine. I changed "Hide Admin Menu" to "Hide admin menu" in my code's main branch, because I like it better. I pushed thatback to the GitHub cloud. Screenshot by David Gewirtz/ZDNETThen, because I just don't like random branches hanging around once they've been incorporated into the distribution version, I deleted the new branch on my computer. Screenshot by David Gewirtz/ZDNETI also deleted the new branch from the GitHub cloud service. Screenshot by David Gewirtz/ZDNETFinally, I packaged up the new code. I added a change to the readme to describe the new feature and to update the code's version number. Then, I pushed it using SVNup to the WordPress plugin repository. Journey to the center of the code Jules is very definitely beta right now. It hung in a few places. Some screens didn't update. It decided to check out for 90 minutes. I had to wait while it went to and came back from its digital happy place. It's evidencing all the sorts of things you'd expect from a newly-released piece of code. I have no concerns about that. Google will clean it up. The fact that Julescan handle an entire repository of code across a bunch of files is big. That's a much deeper level of understanding and integration than we saw, even six months ago. Also: How to move your codebase into GitHub for analysis by ChatGPT Deep Research - and why you shouldThe speed with which it can change an entire codebase is terrifying. The damage it can do is potentially extraordinary. It will gleefully go through and modify everything in your codebase, and if you specify something wrong and then push or merge, you will have an epic mess on your hands. There is a deep inequality between how quickly it can change code and how long it will take a human to review those changes. Working on this scale will require excellent unit tests. Even tools like mine, which don't lend themselves to full unit testing, will require some kind of automated validation to prevent robot-driven errors on a massive scale. Those who are afraid these tools will take jobs from programmers should be concerned, but not in the way most people think. It is absolutely, totally, one-hundo-percent necessary for experienced coders to review and guide these agents. When I left out one critical instruction, the agent gleefully bricked my site. Since I was the person who wrote the code initially, I knew what to fix. But it would have been brutally difficult for someone else to figure out what had been left out and how to fix it. That would have required coming up to speed on all the hidden nuances of the entire architecture of the code. Also: How to turn ChatGPT into your AI coding power tool - and double your outputThe jobs that are likely to be destroyed are those of junior developers. Jules is easily doing junior developer level work. With tools like Jules or Codex or Copilot, that cost of a few hundred bucks a month at most, it's going to be hard for management to be willing to pay medium-to-high six figures for midlevel and junior programmers. Even outsourcing and offshoring isn't as cheap as using an AI agent to do maintenance coding. And, as I wrote about earlier in the week, if there are no mid-level jobs available, how will we train the experienced people we're going to need in the future? I am also concerned about how access limits will shake out. Productivity gains will drop like a rock if you need to do one more prompt and you have to wait a day to be allowed to do so. Screenshot by David Gewirtz/ZDNETAs for me, in less than 10 minutes, I turned out a new feature that had been requested by readers. While I was writing another article, I fed the prompt to Jules. I went back to work on the article, and checked on Jules when it was finished. I checked out the code, brought it down to my computer, and pushed a release. It took me longer to upload the thing to the WordPress repository than to add the entire new feature. For that class of feature, I got a half-a-day's work done in less than half an hour, from thinking about making it happen to published to my users. In the last two hours, 2,500 sites have downloaded and installed the new feature. That will surge to well over 10,000 by morning. Without Jules, those users probably would have been waiting months for this new feature, because I have a huge backlog of work, and it wasn't my top priority. But with Jules, it took barely any effort. Also: 7 productivity gadgets I can't live withoutThese tools are going to require programmers, managers, and investors to rethink the software development workflow. There will be glaring "you can't get there from here" gotchas. And there will be epic failures and coding errors. But I have no doubt that this is the next level of AI-based coding. Real, human intelligence is going to be necessary to figure out how to deal with it. Have you tried Google's Jules or any of the other new AI coding agents? Would you trust them to make direct changes to your codebase, or do you prefer to keep a tighter manual grip? What kinds of developer tasks do you think these tools should and shouldn't handle? Let us know in the comments below. Want more stories about AI? Sign up for Innovation, our weekly newsletter.You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.Featured
    #let #google039s #jules #agent #into
    I let Google's Jules AI agent into my code repo and it did four hours of work in an instant
    hemul75/Getty ImagesOkay. Deep breath. This is surreal. I just added an entire new feature to my software, including UI and functionality, just by typing four paragraphs of instructions. I have screenshots, and I'll try to make sense of it in this article. I can't tell if we're living in the future or we've just descended to a new plane of hell.Let's take a step back. Google's Jules is the latest in a flood of new coding agents released just this week. I wrote about OpenAI Codex and Microsoft's GitHub Copilot Coding Agent at the beginning of the week, and ZDNET's Webb Wright wrote about Google's Jules. Also: I test a lot of AI coding tools, and this stunning new OpenAI release just saved me days of workAll of these coding agents will perform coding operations on a GitHub repository. GitHub, for those who've been following along, is the giant Microsoft-owned software storage, management, and distribution hub for much of the world's most important software, especially open source code. The difference, at least as it pertains to this article, is that Google made Jules available to everyone, for free. That meant I could just hop in and take it for a spin. And now my head is spinning. Usage limits and my first two prompts The free access version of Jules allows only five requests per day. That might not seem like a lot, but in only two requests, I was able to add a new feature to my software. So, don't discount what you can get done if you think through your prompts before shooting off your silver bullets for the day. My first two prompts were tentative. It wasn't that I wasn't impressed; it was that I really wasn't giving Jules much to do. I'm still not comfortable with the idea of setting an AI loose on all my code at once, so I played it safe. My first prompt asked Jules to document the "hooks" that add-on developers could use to add features to my product. I didn't tell Jules much about what I wanted. It returned some markup that it recommended dropping into my code's readme file. It worked, but meh. Screenshot by David Gewirtz/ZDNETI did have the opportunity to publish that code to a new GitHub branch, but I skipped it. It was just a test, after all. My second prompt was to ask Jules to suggest five new hooks. I got back an answer that seemed reasonable. However, I realized that opening up those capabilities in a security product was just too risky for me to delegate to an AI. I skipped those changes, too. It was at this point that Jules wanted a coffee break. It stopped functioning for about 90 minutes. Screenshot by David Gewirtz/ZDNETThat gave me time to think. What I really wanted to see was whether Jules could add some real functionality to my code and save me some time. Necessary background information My Private Site is a security plugin for WordPress. It's running on about 20,000 active sites. It puts a login dialog in front of the site's web pages. There are a bunch of options, but that's the key feature. I originally acquired the software a decade ago from a coder who called himself "jonradio," and have been maintaining and expanding it ever since. Also: Rust turns 10: How a broken elevator changed software foreverThe plugin provides access control to the front-end of a website, the pages that visitors see when they come to the site. Site owners control the plugin via a dashboard interface, with various admin functions available in the plugin's admin interface. I decided to try Jules out on a feature some users have requested, hiding the admin bar from logged-in users. The admin bar is the black bar WordPress puts on the top of a web page. In the case of the screenshot below, the black admin bar is visible. Screenshot by David Gewirtz/ZDNETI wanted Jules to add an option on the dashboard to hide the admin bar from logged-in users. The idea is that if a user logged in, the admin bar would be visible on the back end, but logged-in users browsing the front-end of the site wouldn't have to see the ugly bar. This is the original dashboard, before adding the new feature. Screenshot by David Gewirtz/ZDNETSome years ago, I completely rewrote the admin interface from the way it was when I acquired the plugin. Adding options to the interface is straightforward, but it's still time-consuming. Every option requires not only the UI element to be added, but also preference saving and preference recalling when the dashboard is displayed. That's in addition to any program logic that the preference controls. In practice, I've found that it takes me about 2-3 hours to add a preference UI element, along with the assorted housekeeping involved. It's not hard, but there are a lot of little fiddly bits that all need to be tweaked. That takes time. That should bring you up to speed enough to understand my next test of Jules. Here's a bit of foreshadowing: the first test failed miserably. The second test succeeded astonishingly. Instructing Jules Adding a hide admin bar feature is not something that would have been easy for the run-of-the-mill coding help we've been asking ChatGPT and the other chatbots to perform. As I mentioned, adding the new option to the dashboard requires programming in a variety of locations throughout the code, and also requires an understanding of the overall codebase. Here's what I told Jules. 1. On the Site Privacy Tab of the admin interface, add a new checkbox. Label the section "Admin Bar" and label the checkbox itself "Hide Admin Bar".I instructed Jules where I wanted the AI to put the new option. On my first run through, I made a mistake and left out the details in square brackets. I didn't tell Jules exactly where I wanted it to place the new option. As it turns out, that omission caused a big fail. Once I added in the sentence in brackets above, the feature worked. 2. Be sure to save the selection of that checkbox to the plugin's preferences variable when the Privacy Status button is checked. This makes sure Jules knows that there is a preference data structure, and to be sure to update it when the user makes a change. It's important to note that if I didn't have an understanding of the underlying code, I wouldn't have instructed Jules about this, and the code would not work. You can't "vibe code" something like this without knowing the underlying code. 3. Show the appropriate checked or unchecked status when the Site Privacy tab is displayed. This tells the AI that I want the interface to be updated to match what the preference variable specifies. 4. Based on the preference variable created in, add code to hide or show the WordPress admin bar. If Hide Admin Bar is checked, the Admin Bar should not be visible to logged-in WordPress front-end users. If the Hide Admin Bar is not checked, the Admin Bar should be visible to logged-in front-end users. Logged-in back-end users in the admin interface should always be able to see the admin bar. This describes the business logic that the new preference should control. It requires the AI to know how to hide or show the admin bar, and it requires the AI to know where to put the code in my plugin to enable or disable this feature. And with that, Jules was trained on what I wanted. Jules dives into my code I fed my prompt set into Jules and got back a plan of action. Pay close attention to that Approve Plan? button. Screenshot by David Gewirtz/ZDNETI didn't even get a chance to read through the plan before Jules decided to approve the plan on its own. It did this after every plan it presented. An AI that doesn't wait for permission raises the hairs on the back of my neck. Just saying. Screenshot by David Gewirtz/ZDNETI desperately want to make a Skynet/Landru/Colossus/P1/Hal kind of joke, because I'm freaked out. I mean, it's good. But I'm freaked out. Here's some of the code Jules wrote. The shaded green is the new stuff. I'm not thrilled with the color scheme, but I'm sure that will be tweakable over time. Also: The best free AI courses and certificates in 2025More relevant is the fact that Jules picked up on my variable naming conventions and the architecture of my code and dived right in. This is the new option, rendered in code. Screenshot by David Gewirtz/ZDNETBy the time it was done, Jules had written in all the code changes it planned for originally, plus some test code. I don't use standardized tests. I would have told Jules not to do it the way it planned, but it never gave me time to approve or modify its original plan. Even so, it worked out. Screenshot by David Gewirtz/ZDNETI pushed the Publish branch button, which caused GitHub to create a new branch, separate from my main repository. Jules then published its changes to that branch. Screenshot by David Gewirtz/ZDNETThis is how contributors to big projects can work on those projects without causing chaos to the main code line. Up to this point, I could look at the code, but I wasn't able to run it. But by pushing the code to a branch, Jules and GitHub made it possible for me to replicate the changes safely down to my computer to test them out. If I didn't like the changes, I could have just switched back to the main branch and no harm, no foul. But I did like the changes, so I moved on to the next step. Around the code in 8 clicks Once I brought the branch down to my development machine, I could test it out. Here's the new dashboard with the Hide Admin Menu feature. Screenshot by David Gewirtz/ZDNETI tried turning the feature on and off and making sure the settings stuck. They did. I also tried other features in the plugin to make sure nothing else had broken. I was pretty sure nothing would, because I reviewed all the changes before approving the branch. But still. Testing is a good thing to do. I then logged into the test website. As you can see, there's no admin bar showing. Screenshot by David Gewirtz/ZDNETAt this point, the process was out of the AI's hands. It was simply time to deploy the changes, both back to GitHub and to the master WordPress repository. First, I used GitHub Desktop to merge the branch code back into the main branch on my development machine. I changed "Hide Admin Menu" to "Hide admin menu" in my code's main branch, because I like it better. I pushed thatback to the GitHub cloud. Screenshot by David Gewirtz/ZDNETThen, because I just don't like random branches hanging around once they've been incorporated into the distribution version, I deleted the new branch on my computer. Screenshot by David Gewirtz/ZDNETI also deleted the new branch from the GitHub cloud service. Screenshot by David Gewirtz/ZDNETFinally, I packaged up the new code. I added a change to the readme to describe the new feature and to update the code's version number. Then, I pushed it using SVNup to the WordPress plugin repository. Journey to the center of the code Jules is very definitely beta right now. It hung in a few places. Some screens didn't update. It decided to check out for 90 minutes. I had to wait while it went to and came back from its digital happy place. It's evidencing all the sorts of things you'd expect from a newly-released piece of code. I have no concerns about that. Google will clean it up. The fact that Julescan handle an entire repository of code across a bunch of files is big. That's a much deeper level of understanding and integration than we saw, even six months ago. Also: How to move your codebase into GitHub for analysis by ChatGPT Deep Research - and why you shouldThe speed with which it can change an entire codebase is terrifying. The damage it can do is potentially extraordinary. It will gleefully go through and modify everything in your codebase, and if you specify something wrong and then push or merge, you will have an epic mess on your hands. There is a deep inequality between how quickly it can change code and how long it will take a human to review those changes. Working on this scale will require excellent unit tests. Even tools like mine, which don't lend themselves to full unit testing, will require some kind of automated validation to prevent robot-driven errors on a massive scale. Those who are afraid these tools will take jobs from programmers should be concerned, but not in the way most people think. It is absolutely, totally, one-hundo-percent necessary for experienced coders to review and guide these agents. When I left out one critical instruction, the agent gleefully bricked my site. Since I was the person who wrote the code initially, I knew what to fix. But it would have been brutally difficult for someone else to figure out what had been left out and how to fix it. That would have required coming up to speed on all the hidden nuances of the entire architecture of the code. Also: How to turn ChatGPT into your AI coding power tool - and double your outputThe jobs that are likely to be destroyed are those of junior developers. Jules is easily doing junior developer level work. With tools like Jules or Codex or Copilot, that cost of a few hundred bucks a month at most, it's going to be hard for management to be willing to pay medium-to-high six figures for midlevel and junior programmers. Even outsourcing and offshoring isn't as cheap as using an AI agent to do maintenance coding. And, as I wrote about earlier in the week, if there are no mid-level jobs available, how will we train the experienced people we're going to need in the future? I am also concerned about how access limits will shake out. Productivity gains will drop like a rock if you need to do one more prompt and you have to wait a day to be allowed to do so. Screenshot by David Gewirtz/ZDNETAs for me, in less than 10 minutes, I turned out a new feature that had been requested by readers. While I was writing another article, I fed the prompt to Jules. I went back to work on the article, and checked on Jules when it was finished. I checked out the code, brought it down to my computer, and pushed a release. It took me longer to upload the thing to the WordPress repository than to add the entire new feature. For that class of feature, I got a half-a-day's work done in less than half an hour, from thinking about making it happen to published to my users. In the last two hours, 2,500 sites have downloaded and installed the new feature. That will surge to well over 10,000 by morning. Without Jules, those users probably would have been waiting months for this new feature, because I have a huge backlog of work, and it wasn't my top priority. But with Jules, it took barely any effort. Also: 7 productivity gadgets I can't live withoutThese tools are going to require programmers, managers, and investors to rethink the software development workflow. There will be glaring "you can't get there from here" gotchas. And there will be epic failures and coding errors. But I have no doubt that this is the next level of AI-based coding. Real, human intelligence is going to be necessary to figure out how to deal with it. Have you tried Google's Jules or any of the other new AI coding agents? Would you trust them to make direct changes to your codebase, or do you prefer to keep a tighter manual grip? What kinds of developer tasks do you think these tools should and shouldn't handle? Let us know in the comments below. Want more stories about AI? Sign up for Innovation, our weekly newsletter.You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.Featured #let #google039s #jules #agent #into
    WWW.ZDNET.COM
    I let Google's Jules AI agent into my code repo and it did four hours of work in an instant
    hemul75/Getty ImagesOkay. Deep breath. This is surreal. I just added an entire new feature to my software, including UI and functionality, just by typing four paragraphs of instructions. I have screenshots, and I'll try to make sense of it in this article. I can't tell if we're living in the future or we've just descended to a new plane of hell (or both).Let's take a step back. Google's Jules is the latest in a flood of new coding agents released just this week. I wrote about OpenAI Codex and Microsoft's GitHub Copilot Coding Agent at the beginning of the week, and ZDNET's Webb Wright wrote about Google's Jules. Also: I test a lot of AI coding tools, and this stunning new OpenAI release just saved me days of workAll of these coding agents will perform coding operations on a GitHub repository. GitHub, for those who've been following along, is the giant Microsoft-owned software storage, management, and distribution hub for much of the world's most important software, especially open source code. The difference, at least as it pertains to this article, is that Google made Jules available to everyone, for free. That meant I could just hop in and take it for a spin. And now my head is spinning. Usage limits and my first two prompts The free access version of Jules allows only five requests per day. That might not seem like a lot, but in only two requests, I was able to add a new feature to my software. So, don't discount what you can get done if you think through your prompts before shooting off your silver bullets for the day. My first two prompts were tentative. It wasn't that I wasn't impressed; it was that I really wasn't giving Jules much to do. I'm still not comfortable with the idea of setting an AI loose on all my code at once, so I played it safe. My first prompt asked Jules to document the "hooks" that add-on developers could use to add features to my product. I didn't tell Jules much about what I wanted. It returned some markup that it recommended dropping into my code's readme file. It worked, but meh. Screenshot by David Gewirtz/ZDNETI did have the opportunity to publish that code to a new GitHub branch, but I skipped it. It was just a test, after all. My second prompt was to ask Jules to suggest five new hooks. I got back an answer that seemed reasonable. However, I realized that opening up those capabilities in a security product was just too risky for me to delegate to an AI. I skipped those changes, too. It was at this point that Jules wanted a coffee break. It stopped functioning for about 90 minutes. Screenshot by David Gewirtz/ZDNETThat gave me time to think. What I really wanted to see was whether Jules could add some real functionality to my code and save me some time. Necessary background information My Private Site is a security plugin for WordPress. It's running on about 20,000 active sites. It puts a login dialog in front of the site's web pages. There are a bunch of options, but that's the key feature. I originally acquired the software a decade ago from a coder who called himself "jonradio," and have been maintaining and expanding it ever since. Also: Rust turns 10: How a broken elevator changed software foreverThe plugin provides access control to the front-end of a website, the pages that visitors see when they come to the site. Site owners control the plugin via a dashboard interface, with various admin functions available in the plugin's admin interface. I decided to try Jules out on a feature some users have requested, hiding the admin bar from logged-in users. The admin bar is the black bar WordPress puts on the top of a web page. In the case of the screenshot below, the black admin bar is visible. Screenshot by David Gewirtz/ZDNETI wanted Jules to add an option on the dashboard to hide the admin bar from logged-in users. The idea is that if a user logged in, the admin bar would be visible on the back end, but logged-in users browsing the front-end of the site wouldn't have to see the ugly bar. This is the original dashboard, before adding the new feature. Screenshot by David Gewirtz/ZDNETSome years ago, I completely rewrote the admin interface from the way it was when I acquired the plugin. Adding options to the interface is straightforward, but it's still time-consuming. Every option requires not only the UI element to be added, but also preference saving and preference recalling when the dashboard is displayed. That's in addition to any program logic that the preference controls. In practice, I've found that it takes me about 2-3 hours to add a preference UI element, along with the assorted housekeeping involved. It's not hard, but there are a lot of little fiddly bits that all need to be tweaked. That takes time. That should bring you up to speed enough to understand my next test of Jules. Here's a bit of foreshadowing: the first test failed miserably. The second test succeeded astonishingly. Instructing Jules Adding a hide admin bar feature is not something that would have been easy for the run-of-the-mill coding help we've been asking ChatGPT and the other chatbots to perform. As I mentioned, adding the new option to the dashboard requires programming in a variety of locations throughout the code, and also requires an understanding of the overall codebase. Here's what I told Jules. 1. On the Site Privacy Tab of the admin interface, add a new checkbox. Label the section "Admin Bar" and label the checkbox itself "Hide Admin Bar". [Place this in the MAKE SITE PRIVATE block, located just under the Enable login privacy checkbox and before the Site Privacy Mode segment.] I instructed Jules where I wanted the AI to put the new option. On my first run through, I made a mistake and left out the details in square brackets. I didn't tell Jules exactly where I wanted it to place the new option. As it turns out, that omission caused a big fail. Once I added in the sentence in brackets above, the feature worked. 2. Be sure to save the selection of that checkbox to the plugin's preferences variable when the Save Privacy Status button is checked. This makes sure Jules knows that there is a preference data structure, and to be sure to update it when the user makes a change. It's important to note that if I didn't have an understanding of the underlying code, I wouldn't have instructed Jules about this, and the code would not work. You can't "vibe code" something like this without knowing the underlying code. 3. Show the appropriate checked or unchecked status when the Site Privacy tab is displayed. This tells the AI that I want the interface to be updated to match what the preference variable specifies. 4. Based on the preference variable created in (2), add code to hide or show the WordPress admin bar. If Hide Admin Bar is checked, the Admin Bar should not be visible to logged-in WordPress front-end users. If the Hide Admin Bar is not checked, the Admin Bar should be visible to logged-in front-end users. Logged-in back-end users in the admin interface should always be able to see the admin bar. This describes the business logic that the new preference should control. It requires the AI to know how to hide or show the admin bar (a WordPress API call is used), and it requires the AI to know where to put the code in my plugin to enable or disable this feature. And with that, Jules was trained on what I wanted. Jules dives into my code I fed my prompt set into Jules and got back a plan of action. Pay close attention to that Approve Plan? button. Screenshot by David Gewirtz/ZDNETI didn't even get a chance to read through the plan before Jules decided to approve the plan on its own. It did this after every plan it presented. An AI that doesn't wait for permission raises the hairs on the back of my neck. Just saying. Screenshot by David Gewirtz/ZDNETI desperately want to make a Skynet/Landru/Colossus/P1/Hal kind of joke, because I'm freaked out. I mean, it's good. But I'm freaked out. Here's some of the code Jules wrote. The shaded green is the new stuff. I'm not thrilled with the color scheme, but I'm sure that will be tweakable over time. Also: The best free AI courses and certificates in 2025More relevant is the fact that Jules picked up on my variable naming conventions and the architecture of my code and dived right in. This is the new option, rendered in code. Screenshot by David Gewirtz/ZDNETBy the time it was done, Jules had written in all the code changes it planned for originally, plus some test code. I don't use standardized tests. I would have told Jules not to do it the way it planned, but it never gave me time to approve or modify its original plan. Even so, it worked out. Screenshot by David Gewirtz/ZDNETI pushed the Publish branch button, which caused GitHub to create a new branch, separate from my main repository. Jules then published its changes to that branch. Screenshot by David Gewirtz/ZDNETThis is how contributors to big projects can work on those projects without causing chaos to the main code line. Up to this point, I could look at the code, but I wasn't able to run it. But by pushing the code to a branch, Jules and GitHub made it possible for me to replicate the changes safely down to my computer to test them out. If I didn't like the changes, I could have just switched back to the main branch and no harm, no foul. But I did like the changes, so I moved on to the next step. Around the code in 8 clicks Once I brought the branch down to my development machine, I could test it out. Here's the new dashboard with the Hide Admin Menu feature. Screenshot by David Gewirtz/ZDNETI tried turning the feature on and off and making sure the settings stuck. They did. I also tried other features in the plugin to make sure nothing else had broken. I was pretty sure nothing would, because I reviewed all the changes before approving the branch. But still. Testing is a good thing to do. I then logged into the test website. As you can see, there's no admin bar showing. Screenshot by David Gewirtz/ZDNETAt this point, the process was out of the AI's hands. It was simply time to deploy the changes, both back to GitHub and to the master WordPress repository. First, I used GitHub Desktop to merge the branch code back into the main branch on my development machine. I changed "Hide Admin Menu" to "Hide admin menu" in my code's main branch, because I like it better. I pushed that (the full main branch on my local machine) back to the GitHub cloud. Screenshot by David Gewirtz/ZDNETThen, because I just don't like random branches hanging around once they've been incorporated into the distribution version, I deleted the new branch on my computer. Screenshot by David Gewirtz/ZDNETI also deleted the new branch from the GitHub cloud service. Screenshot by David Gewirtz/ZDNETFinally, I packaged up the new code. I added a change to the readme to describe the new feature and to update the code's version number. Then, I pushed it using SVN (the source code control system used by the WordPress community) up to the WordPress plugin repository. Journey to the center of the code Jules is very definitely beta right now. It hung in a few places. Some screens didn't update. It decided to check out for 90 minutes. I had to wait while it went to and came back from its digital happy place. It's evidencing all the sorts of things you'd expect from a newly-released piece of code. I have no concerns about that. Google will clean it up. The fact that Jules (and presumably OpenAI Codex and GitHub Copilot Coding Agent) can handle an entire repository of code across a bunch of files is big. That's a much deeper level of understanding and integration than we saw, even six months ago. Also: How to move your codebase into GitHub for analysis by ChatGPT Deep Research - and why you shouldThe speed with which it can change an entire codebase is terrifying. The damage it can do is potentially extraordinary. It will gleefully go through and modify everything in your codebase, and if you specify something wrong and then push or merge, you will have an epic mess on your hands. There is a deep inequality between how quickly it can change code and how long it will take a human to review those changes. Working on this scale will require excellent unit tests. Even tools like mine, which don't lend themselves to full unit testing, will require some kind of automated validation to prevent robot-driven errors on a massive scale. Those who are afraid these tools will take jobs from programmers should be concerned, but not in the way most people think. It is absolutely, totally, one-hundo-percent necessary for experienced coders to review and guide these agents. When I left out one critical instruction, the agent gleefully bricked my site. Since I was the person who wrote the code initially, I knew what to fix. But it would have been brutally difficult for someone else to figure out what had been left out and how to fix it. That would have required coming up to speed on all the hidden nuances of the entire architecture of the code. Also: How to turn ChatGPT into your AI coding power tool - and double your outputThe jobs that are likely to be destroyed are those of junior developers. Jules is easily doing junior developer level work. With tools like Jules or Codex or Copilot, that cost of a few hundred bucks a month at most, it's going to be hard for management to be willing to pay medium-to-high six figures for midlevel and junior programmers. Even outsourcing and offshoring isn't as cheap as using an AI agent to do maintenance coding. And, as I wrote about earlier in the week, if there are no mid-level jobs available, how will we train the experienced people we're going to need in the future? I am also concerned about how access limits will shake out. Productivity gains will drop like a rock if you need to do one more prompt and you have to wait a day to be allowed to do so. Screenshot by David Gewirtz/ZDNETAs for me, in less than 10 minutes, I turned out a new feature that had been requested by readers. While I was writing another article, I fed the prompt to Jules. I went back to work on the article, and checked on Jules when it was finished. I checked out the code, brought it down to my computer, and pushed a release. It took me longer to upload the thing to the WordPress repository than to add the entire new feature. For that class of feature, I got a half-a-day's work done in less than half an hour, from thinking about making it happen to published to my users. In the last two hours, 2,500 sites have downloaded and installed the new feature. That will surge to well over 10,000 by morning (it's about 8 p.m. now as I write this). Without Jules, those users probably would have been waiting months for this new feature, because I have a huge backlog of work, and it wasn't my top priority. But with Jules, it took barely any effort. Also: 7 productivity gadgets I can't live without (and why they make such a big difference)These tools are going to require programmers, managers, and investors to rethink the software development workflow. There will be glaring "you can't get there from here" gotchas. And there will be epic failures and coding errors. But I have no doubt that this is the next level of AI-based coding. Real, human intelligence is going to be necessary to figure out how to deal with it. Have you tried Google's Jules or any of the other new AI coding agents? Would you trust them to make direct changes to your codebase, or do you prefer to keep a tighter manual grip? What kinds of developer tasks do you think these tools should and shouldn't handle? Let us know in the comments below. Want more stories about AI? Sign up for Innovation, our weekly newsletter.You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.Featured
    0 Kommentare 0 Anteile
  • How to move your codebase into GitHub for analysis by ChatGPT Deep Research - and why you should

    David Gewirtz / Elyse Betters Picaro / ZDNETA few days ago, I showed you an amazing new ChatGPT feature available to paying users. Plus, Pro, and Team tier users can now point Deep Research at an entire GitHub repo and get back analysis reports. Also: I test a lot of AI coding tools, and this stunning new OpenAI release just saved me days of workAs I showed, this capability helps speed up the process of coming up to speed on existing codebases. You might need to do this if you acquire a product from another developer or if you're brought onto a project and need to learn the codebase quickly. It's also good for reviewing your own codebase and refreshing yourself on how sections work -- especially if you've moved on to other things for a while and are now coming back to the original code. I promised I'd show you how to bring a codebase into GitHub specifically for analysis by Deep Research. That's what we're about to do in this article. Moving my code into GitHub To demonstrate this, I'm moving My Private Site into GitHub. My Private Site is a freemium WordPress plugin with about 20,000 active users I've been working on for about a decade. WordPress, for historical reasons, uses SVN instead of GitHub as a code repository, so I haven't really had a need to put My Private Site into GitHub. But given the opportunity to perform deep analysis on it, I decided to set it up. I'll go through that process with you here. Getting started with GitHub Desktop Before we start, let's clarify some things. Git is a distributed version control system that runs on a programmer's local computer. GitHub is a cloud-based service that stores an enormous library of open-source and proprietary coding projects. Those projects are moved into GitHubusing Git. Real programmers only use Git on the command line, where it's known as git. No real programmer would dare to capitalize git. Real programmers command git via a range of options, creating specialized command lines that do their bidding. Failure to use git on the command line will result in your real-programmer card being revoked by the International Society of Programmers Who Are Smarter Than You. I am apparently not a real programmer. I might as well get that out of the way before the comments erupt in disdainful RPsmocking my lack of command-line acuity. I don't use Git via the command line. I don't like it. I believe humans left the cave long ago and adopted graphical user interfaces as tools of civilized society. I, therefore, prefer using GitHub Desktop, which is a point-and-click version of Git for those not worthy of the title real programmer. And yes, my official real-programmer card has been revoked. I can live with it.  You can download GitHub Desktop here. Screenshot by David Gewirtz/ZDNETOnce you've launched GitHub Desktop, either sign in to your GitHub account or create one. I've long had a GitHub account for other projects, so I just signed in. How to create a GitHub repository Next, I created a repository in the GitHub cloud for my codebase. Here it can be a little confusing. Even though I didn't have an existing repo for My Private Site, I chose "Add an Existing Repository from your Local Drive…" because I was going to take that codebase and turn it into a repo. Screenshot by David Gewirtz/ZDNETGitHub Desktop is actually pretty smart about this. Once it realizes there's no GitHub data for the folder selected, it will give you an error and offer you the option to create a repo. Click the link highlighted by the green arrow shown below. Screenshot by David Gewirtz/ZDNETThat will present the Create a New Repository dialog. Here, I named my repo, added a short description, told it the local path to the code on my computer, and left the rest as-is. Screenshot by David Gewirtz/ZDNETI didn't need to play with the README, license, or ignore options because I'm using this repo for AI analysis, not for source control and collaboration. It's here I should note this article describes what you need to do to let your code be examined by ChatGPT Deep Research. This is definitely not a comprehensive how-to-set-up-GitHub article. How to move the codebase to GitHub It's time to move your code up to GitHub. Here's a cautionary note: If you've kept your code private, uploading it to GitHub is sending your code to a cloud service. GitHub offers both private and public repositories, but you're technically giving Microsoft access to your code. Microsoft owns GitHub. Now, go ahead and hit Publish. Screenshot by David Gewirtz/ZDNETAt this point, you'll have the opportunity to make your repo public or private. When you connect ChatGPT to your repo, you'll be passing along your access rights, so you can let ChatGPT examine a private repository. Also: How to use ChatGPT: A beginner's guide to the most popular AI chatbotThat said, I ran into some issues with Deep Research accessing my code, and one of the things ChatGPT asked me was whether my code was public. My take on that is: if your code is private and you have all your credentials and connector set up, you can probably work on a private repo. Since My Private Site is open source, I unchecked "Keep this code private." Screenshot by David Gewirtz/ZDNET Looking at your new repository If everything worked, you'll see a new option: "View on GitHub." Click it. Screenshot by David Gewirtz/ZDNETThat will bring you to your newly created GitHub repo. Here's mine. Screenshot by David Gewirtz/ZDNETNow that your repo is up, take note of its designation. You can find that in the upper left corner of the GitHub screen. For My Private Site, it's davidgewirtz/my-private-site. How to set up the ChatGPT connection Now it's time to switch to ChatGPT. The next two screenshots are the same as what I showed you in this article on the feature. But to get to the next configuration step, you'll need to do what's shown in the screenshots. First, change your model to o3 and type in the prompt exactly as I did. You can probably tweak this over time, but if you have the -per-month Plus tier, you're only going to be allowed 10 queries into Deep Research per month, so cutting and pasting is your friend. Screenshot by David Gewirtz/ZDNETNext, click the little caret next to Deep Research. Screenshot by David Gewirtz/ZDNETNow, create the link between your ChatGPT account and your GitHub account. Go aheadand give Skynet -- uh, I mean the AI -- permission to access your GitHub account features. Screenshot by David Gewirtz/ZDNETNext, you'll be asked which GitHub account should get the ChatGPT connector. I have two, so I got this choice screen. You might skip this screen if you only have one account. Screenshot by David Gewirtz/ZDNETNow it's time for more permissions. This time, you're giving permission to access either all your account's repos or just one. I selected only the my-private-site repo. Screenshot by David Gewirtz/ZDNETAnd now, theoretically, Deep Research in ChatGPT will be connected to your repo. Theoretically. In practice, mine required another step. What to do if ChatGPT can't find your repo GitHub indexes repositories, and if ChatGPT doesn't show your repo as available, it probably means GitHub hasn't indexed your new repository yet. That's what happened here. Screenshot by David Gewirtz/ZDNETI should have been able to select or type in my full repo name, but ChatGPT wasn't able to locate it. To fix this, go back to your GitHub account and type in the command string shown at the top of this screenshot. Obviously, change the text in blue to match your repo name. Screenshot by David Gewirtz/ZDNETThe command is basically repo:, followed by the full name of your repo, followed by a space and the word import. This will tell GitHub you'd like it to index your repository. As you can see, GitHub confirmed it was now indexing my repository. Screenshot by David Gewirtz/ZDNETI brewed myself a well-deserved cup of coffee as a way to give GitHub time to index my repo. Once I finished the last drop, I went back to ChatGPT, dropped down the Deep Research menu, and found my newly created repository. Screenshot by David Gewirtz/ZDNET Have fun with Deep Research You're ready to start using Deep Research on your repo. For a detailed guide on how that worked for my repo, point yourself to my earlier article on the topic. Have fun. I was pretty blown away. You might be, as well. Have you tried using ChatGPT Deep Research with your own code yet? What was your experience connecting a GitHub repo? Did you run into any indexing issues or permission snags along the way? Do you prefer using GitHub Desktop or the command line when setting up your repositories? Let us know in the comments below. You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.Get the morning's top stories in your inbox each day with our Tech Today newsletter.Artificial Intelligence
    #how #move #your #codebase #into
    How to move your codebase into GitHub for analysis by ChatGPT Deep Research - and why you should
    David Gewirtz / Elyse Betters Picaro / ZDNETA few days ago, I showed you an amazing new ChatGPT feature available to paying users. Plus, Pro, and Team tier users can now point Deep Research at an entire GitHub repo and get back analysis reports. Also: I test a lot of AI coding tools, and this stunning new OpenAI release just saved me days of workAs I showed, this capability helps speed up the process of coming up to speed on existing codebases. You might need to do this if you acquire a product from another developer or if you're brought onto a project and need to learn the codebase quickly. It's also good for reviewing your own codebase and refreshing yourself on how sections work -- especially if you've moved on to other things for a while and are now coming back to the original code. I promised I'd show you how to bring a codebase into GitHub specifically for analysis by Deep Research. That's what we're about to do in this article. Moving my code into GitHub To demonstrate this, I'm moving My Private Site into GitHub. My Private Site is a freemium WordPress plugin with about 20,000 active users I've been working on for about a decade. WordPress, for historical reasons, uses SVN instead of GitHub as a code repository, so I haven't really had a need to put My Private Site into GitHub. But given the opportunity to perform deep analysis on it, I decided to set it up. I'll go through that process with you here. Getting started with GitHub Desktop Before we start, let's clarify some things. Git is a distributed version control system that runs on a programmer's local computer. GitHub is a cloud-based service that stores an enormous library of open-source and proprietary coding projects. Those projects are moved into GitHubusing Git. Real programmers only use Git on the command line, where it's known as git. No real programmer would dare to capitalize git. Real programmers command git via a range of options, creating specialized command lines that do their bidding. Failure to use git on the command line will result in your real-programmer card being revoked by the International Society of Programmers Who Are Smarter Than You. I am apparently not a real programmer. I might as well get that out of the way before the comments erupt in disdainful RPsmocking my lack of command-line acuity. I don't use Git via the command line. I don't like it. I believe humans left the cave long ago and adopted graphical user interfaces as tools of civilized society. I, therefore, prefer using GitHub Desktop, which is a point-and-click version of Git for those not worthy of the title real programmer. And yes, my official real-programmer card has been revoked. I can live with it.  You can download GitHub Desktop here. Screenshot by David Gewirtz/ZDNETOnce you've launched GitHub Desktop, either sign in to your GitHub account or create one. I've long had a GitHub account for other projects, so I just signed in. How to create a GitHub repository Next, I created a repository in the GitHub cloud for my codebase. Here it can be a little confusing. Even though I didn't have an existing repo for My Private Site, I chose "Add an Existing Repository from your Local Drive…" because I was going to take that codebase and turn it into a repo. Screenshot by David Gewirtz/ZDNETGitHub Desktop is actually pretty smart about this. Once it realizes there's no GitHub data for the folder selected, it will give you an error and offer you the option to create a repo. Click the link highlighted by the green arrow shown below. Screenshot by David Gewirtz/ZDNETThat will present the Create a New Repository dialog. Here, I named my repo, added a short description, told it the local path to the code on my computer, and left the rest as-is. Screenshot by David Gewirtz/ZDNETI didn't need to play with the README, license, or ignore options because I'm using this repo for AI analysis, not for source control and collaboration. It's here I should note this article describes what you need to do to let your code be examined by ChatGPT Deep Research. This is definitely not a comprehensive how-to-set-up-GitHub article. How to move the codebase to GitHub It's time to move your code up to GitHub. Here's a cautionary note: If you've kept your code private, uploading it to GitHub is sending your code to a cloud service. GitHub offers both private and public repositories, but you're technically giving Microsoft access to your code. Microsoft owns GitHub. Now, go ahead and hit Publish. Screenshot by David Gewirtz/ZDNETAt this point, you'll have the opportunity to make your repo public or private. When you connect ChatGPT to your repo, you'll be passing along your access rights, so you can let ChatGPT examine a private repository. Also: How to use ChatGPT: A beginner's guide to the most popular AI chatbotThat said, I ran into some issues with Deep Research accessing my code, and one of the things ChatGPT asked me was whether my code was public. My take on that is: if your code is private and you have all your credentials and connector set up, you can probably work on a private repo. Since My Private Site is open source, I unchecked "Keep this code private." Screenshot by David Gewirtz/ZDNET Looking at your new repository If everything worked, you'll see a new option: "View on GitHub." Click it. Screenshot by David Gewirtz/ZDNETThat will bring you to your newly created GitHub repo. Here's mine. Screenshot by David Gewirtz/ZDNETNow that your repo is up, take note of its designation. You can find that in the upper left corner of the GitHub screen. For My Private Site, it's davidgewirtz/my-private-site. How to set up the ChatGPT connection Now it's time to switch to ChatGPT. The next two screenshots are the same as what I showed you in this article on the feature. But to get to the next configuration step, you'll need to do what's shown in the screenshots. First, change your model to o3 and type in the prompt exactly as I did. You can probably tweak this over time, but if you have the -per-month Plus tier, you're only going to be allowed 10 queries into Deep Research per month, so cutting and pasting is your friend. Screenshot by David Gewirtz/ZDNETNext, click the little caret next to Deep Research. Screenshot by David Gewirtz/ZDNETNow, create the link between your ChatGPT account and your GitHub account. Go aheadand give Skynet -- uh, I mean the AI -- permission to access your GitHub account features. Screenshot by David Gewirtz/ZDNETNext, you'll be asked which GitHub account should get the ChatGPT connector. I have two, so I got this choice screen. You might skip this screen if you only have one account. Screenshot by David Gewirtz/ZDNETNow it's time for more permissions. This time, you're giving permission to access either all your account's repos or just one. I selected only the my-private-site repo. Screenshot by David Gewirtz/ZDNETAnd now, theoretically, Deep Research in ChatGPT will be connected to your repo. Theoretically. In practice, mine required another step. What to do if ChatGPT can't find your repo GitHub indexes repositories, and if ChatGPT doesn't show your repo as available, it probably means GitHub hasn't indexed your new repository yet. That's what happened here. Screenshot by David Gewirtz/ZDNETI should have been able to select or type in my full repo name, but ChatGPT wasn't able to locate it. To fix this, go back to your GitHub account and type in the command string shown at the top of this screenshot. Obviously, change the text in blue to match your repo name. Screenshot by David Gewirtz/ZDNETThe command is basically repo:, followed by the full name of your repo, followed by a space and the word import. This will tell GitHub you'd like it to index your repository. As you can see, GitHub confirmed it was now indexing my repository. Screenshot by David Gewirtz/ZDNETI brewed myself a well-deserved cup of coffee as a way to give GitHub time to index my repo. Once I finished the last drop, I went back to ChatGPT, dropped down the Deep Research menu, and found my newly created repository. Screenshot by David Gewirtz/ZDNET Have fun with Deep Research You're ready to start using Deep Research on your repo. For a detailed guide on how that worked for my repo, point yourself to my earlier article on the topic. Have fun. I was pretty blown away. You might be, as well. Have you tried using ChatGPT Deep Research with your own code yet? What was your experience connecting a GitHub repo? Did you run into any indexing issues or permission snags along the way? Do you prefer using GitHub Desktop or the command line when setting up your repositories? Let us know in the comments below. You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.Get the morning's top stories in your inbox each day with our Tech Today newsletter.Artificial Intelligence #how #move #your #codebase #into
    WWW.ZDNET.COM
    How to move your codebase into GitHub for analysis by ChatGPT Deep Research - and why you should
    David Gewirtz / Elyse Betters Picaro / ZDNETA few days ago, I showed you an amazing new ChatGPT feature available to paying users. Plus, Pro, and Team tier users can now point Deep Research at an entire GitHub repo and get back analysis reports. Also: I test a lot of AI coding tools, and this stunning new OpenAI release just saved me days of workAs I showed, this capability helps speed up the process of coming up to speed on existing codebases. You might need to do this if you acquire a product from another developer or if you're brought onto a project and need to learn the codebase quickly. It's also good for reviewing your own codebase and refreshing yourself on how sections work -- especially if you've moved on to other things for a while and are now coming back to the original code. I promised I'd show you how to bring a codebase into GitHub specifically for analysis by Deep Research. That's what we're about to do in this article. Moving my code into GitHub To demonstrate this, I'm moving My Private Site into GitHub. My Private Site is a freemium WordPress plugin with about 20,000 active users I've been working on for about a decade. WordPress, for historical reasons, uses SVN instead of GitHub as a code repository, so I haven't really had a need to put My Private Site into GitHub. But given the opportunity to perform deep analysis on it, I decided to set it up. I'll go through that process with you here. Getting started with GitHub Desktop Before we start, let's clarify some things. Git is a distributed version control system that runs on a programmer's local computer. GitHub is a cloud-based service that stores an enormous library of open-source and proprietary coding projects. Those projects are moved into GitHub (the cloud service) using Git (the tool). Real programmers only use Git on the command line, where it's known as git. No real programmer would dare to capitalize git. Real programmers command git via a range of options, creating specialized command lines that do their bidding. Failure to use git on the command line will result in your real-programmer card being revoked by the International Society of Programmers Who Are Smarter Than You. I am apparently not a real programmer. I might as well get that out of the way before the comments erupt in disdainful RPs (real programmers) mocking my lack of command-line acuity. I don't use Git via the command line. I don't like it. I believe humans left the cave long ago and adopted graphical user interfaces as tools of civilized society. I, therefore, prefer using GitHub Desktop, which is a point-and-click version of Git for those not worthy of the title real programmer. And yes, my official real-programmer card has been revoked. I can live with it.  You can download GitHub Desktop here. Screenshot by David Gewirtz/ZDNETOnce you've launched GitHub Desktop, either sign in to your GitHub account or create one. I've long had a GitHub account for other projects, so I just signed in. How to create a GitHub repository Next, I created a repository in the GitHub cloud for my codebase. Here it can be a little confusing. Even though I didn't have an existing repo for My Private Site, I chose "Add an Existing Repository from your Local Drive…" because I was going to take that codebase and turn it into a repo. Screenshot by David Gewirtz/ZDNETGitHub Desktop is actually pretty smart about this. Once it realizes there's no GitHub data for the folder selected, it will give you an error and offer you the option to create a repo. Click the link highlighted by the green arrow shown below. Screenshot by David Gewirtz/ZDNETThat will present the Create a New Repository dialog. Here, I named my repo (all lowercase, with dashes between words), added a short description, told it the local path to the code on my computer, and left the rest as-is. Screenshot by David Gewirtz/ZDNETI didn't need to play with the README, license, or ignore options because I'm using this repo for AI analysis, not for source control and collaboration. It's here I should note this article describes what you need to do to let your code be examined by ChatGPT Deep Research. This is definitely not a comprehensive how-to-set-up-GitHub article. How to move the codebase to GitHub It's time to move your code up to GitHub. Here's a cautionary note: If you've kept your code private, uploading it to GitHub is sending your code to a cloud service. GitHub offers both private and public repositories, but you're technically giving Microsoft access to your code. Microsoft owns GitHub. Now, go ahead and hit Publish. Screenshot by David Gewirtz/ZDNETAt this point, you'll have the opportunity to make your repo public or private. When you connect ChatGPT to your repo, you'll be passing along your access rights, so you can let ChatGPT examine a private repository. Also: How to use ChatGPT: A beginner's guide to the most popular AI chatbotThat said, I ran into some issues with Deep Research accessing my code, and one of the things ChatGPT asked me was whether my code was public. My take on that is: if your code is private and you have all your credentials and connector set up (more on that later), you can probably work on a private repo. Since My Private Site is open source, I unchecked "Keep this code private." Screenshot by David Gewirtz/ZDNET Looking at your new repository If everything worked, you'll see a new option: "View on GitHub." Click it. Screenshot by David Gewirtz/ZDNETThat will bring you to your newly created GitHub repo. Here's mine. Screenshot by David Gewirtz/ZDNETNow that your repo is up, take note of its designation. You can find that in the upper left corner of the GitHub screen. For My Private Site, it's davidgewirtz/my-private-site (without any spaces). How to set up the ChatGPT connection Now it's time to switch to ChatGPT. The next two screenshots are the same as what I showed you in this article on the feature. But to get to the next configuration step, you'll need to do what's shown in the screenshots. First, change your model to o3 and type in the prompt exactly as I did. You can probably tweak this over time, but if you have the $20-per-month Plus tier, you're only going to be allowed 10 queries into Deep Research per month, so cutting and pasting is your friend. Screenshot by David Gewirtz/ZDNETNext, click the little caret next to Deep Research. Screenshot by David Gewirtz/ZDNETNow, create the link between your ChatGPT account and your GitHub account. Go ahead (if you dare) and give Skynet -- uh, I mean the AI -- permission to access your GitHub account features. Screenshot by David Gewirtz/ZDNETNext, you'll be asked which GitHub account should get the ChatGPT connector. I have two, so I got this choice screen. You might skip this screen if you only have one account. Screenshot by David Gewirtz/ZDNETNow it's time for more permissions. This time, you're giving permission to access either all your account's repos or just one. I selected only the my-private-site repo. Screenshot by David Gewirtz/ZDNETAnd now, theoretically, Deep Research in ChatGPT will be connected to your repo. Theoretically. In practice, mine required another step. What to do if ChatGPT can't find your repo GitHub indexes repositories, and if ChatGPT doesn't show your repo as available, it probably means GitHub hasn't indexed your new repository yet. That's what happened here. Screenshot by David Gewirtz/ZDNETI should have been able to select or type in my full repo name (remember, davidgewirtz/my-private-site), but ChatGPT wasn't able to locate it. To fix this, go back to your GitHub account and type in the command string shown at the top of this screenshot. Obviously, change the text in blue to match your repo name. Screenshot by David Gewirtz/ZDNETThe command is basically repo:(repo followed by a colon), followed by the full name of your repo, followed by a space and the word import. This will tell GitHub you'd like it to index your repository. As you can see, GitHub confirmed it was now indexing my repository. Screenshot by David Gewirtz/ZDNETI brewed myself a well-deserved cup of coffee as a way to give GitHub time to index my repo. Once I finished the last drop, I went back to ChatGPT, dropped down the Deep Research menu, and found my newly created repository. Screenshot by David Gewirtz/ZDNET Have fun with Deep Research You're ready to start using Deep Research on your repo. For a detailed guide on how that worked for my repo, point yourself to my earlier article on the topic. Have fun. I was pretty blown away. You might be, as well. Have you tried using ChatGPT Deep Research with your own code yet? What was your experience connecting a GitHub repo? Did you run into any indexing issues or permission snags along the way? Do you prefer using GitHub Desktop or the command line when setting up your repositories? Let us know in the comments below. You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.Get the morning's top stories in your inbox each day with our Tech Today newsletter.Artificial Intelligence
    0 Kommentare 0 Anteile
  • Numbers

    Author

    My thoughts are:A FPS is very much a kill frenzy. Agility alows you to survive and win. When combat got better in FPS games, better sounds, better character and level design, down times have been introduced. Now you had to reload your weapon , half life for instance, and strafe during combat wasn’t required anymore. 
    A RTS has a diferent dynamic. You have not one but several machineguns, not one but several cannons. In a RTS you are also building momentum. There is no killing at the beggining of the game for instance. When you build something in a RTS you can see a direct consequence of your actions. You build a refinery, you can start gathering oil right away. You build an archery, you can start producing archers right away. But good RTS games have down times that “make no sense”. You need to deploy a tank to improve the fire range. Or you need to build a suplly depot to be albe to keep producing units. A supply depot provides no upgrades and doen’t produce units. It’s a wait time “just because”. My question is how do you decide numbers like building and unit build time or building and unit build cost? Factors like this are fine tuned by taking into account betatesters feedback, I understand that. But is betatesters or user feedback the only factor that is decisive? 

    Calin said:But is betatesters or user feedback the only factor that is decisive? Be your own betatester. : )It's simple: Current AAA game is made by hundreds of people. One does not know what the others do. Decisions made are implemented by somebody down the hierarchy. It's chaos out of control, and thus those hundreds of devs need hundreds of betatesters to figure out if their game even eventually works.But if you make your game as a single person, you have everything under control. You have an idea, you implement it, you see yourself if it works as desired and how it feels. No need for additional testers. Ofc. you'll show your game to others and observe their responses if you can, but it's not a requirement.That's just personal opinion ofc.

    Advertisement

    Author

    >Be your own betatesterHahaIs willingnes to take a survey a chatacteristic of a good American? 

    Calin said:Is willingnes to take a survey a chatacteristic of a good American? Haven't seen a lot of good americans recently : /

    Author

    >Haven’t seen a lot of good Americans recentlyTrying to mind my own business but you should keep the faith

    Author

    >haven’t seen a lot of good Americans recentlyDoes it mean you have seen people turning down requests to take a survey? 

    Advertisement

    Calin said:>haven’t seen a lot of good Americans recently Does it mean you have seen people turning down requests to take a survey? No, but i've seen an american president applauding another nostalgic dictator on invading the continent of his ancestors, treating his neighbors like shit, doing damage to the whole world for nothing, and playing golf while tzar bombs getting ready.But well, it's never to late to learn. Maybe things improve.Otherwise, my ragdolls can walk now and making them run wont be too hard. Shooting is easy. Maybe i should head some kilometers eastwards to ask if they want to build some biped drones as well.Keep working on skynet. We might need it soon.

    In case forum hides my posted image again, this is Arnie. Good American.

    Calin said:My question is how do you decide numbers like building and unit build time or building and unit build cost? Factors like this are fine tuned by taking into account betatesters feedback, I understand that. But is betatesters or user feedback the only factor that is decisive? That kind of tuning is part of what designers do.Often up front everything is cranked up to extreme values just to find how big the fun and oddity can be. During play crank the super-sprint feature so you can run across the map faster than it loads. Adjust the super-strength feature to the point where bumping into something is an insta-kill, bumping into trees knocks them down, take out a building by touching it.  It feels absurd when playing, but it's important do experiment around it.Designers will continue to fine tune values through development and even into launch. In highly competitive games designers might adjust by very tiny amounts to adjust balance, reducing the timing of something by a frame based on what they see players doing, or nudging points up or down. Often players talk about their favorite thing being ‘nerfed’ or ‘buffed’ when it happens.  Generally no single thing is authoritative.  Feedback from playing the game themselves, feedback from QA, feedback from playtests, feedback from data of how players are playing, the way the game feels in practice, all of them provide valuable information.Usually designers try to establish some sort of power curve that fits the game. Maybe a weapon that takes X ms per shot should do X*Y damage, so a rapid fire gun feels like a peashooter, a gun with a 2 second windup could destroy a tank, but they're also fairly balanced so if you hit with the peashooter for exactly 2 seconds you do about the same damage.   Maybe different systems generate resources at a given rate, but across the board everything that costs X generates Y resources per minute, the upgraded level of the tools that costs 2X generates about 4Y resources, the third upgrade that costs 4X generates about 16Y resources.  They figure it out for the game.For other types of balancing, such as the draw to pick certain behaviors versus other behaviors, that's again something done through iteration and playtesting. If an AI system isn't picking a behavior, nudge it.  Working on The Sims, the designers would put a few objects on the lot and watch as Sims interacted with it, based on the attributes the object would interact with. They might put 3 different exercise items on a lot with an active-traited sim, crank the speed up, and watch how often they use them all versus do different things. For entertainment objects, are they using it about the same as other entertainment objects, for food objects are they using it about the same as other food objects.  How does the draw for the object measure up compared to higher priority tasks like a timer to go to work or go to school? For larger games people in QA and design are playing the game constantly to get a feel for how the balance works. For AAA games everyone in the QA teams generally are asked about balance, and in meetings they discuss when something feels like it is too much or too little, and designers adjust them constantly.It's also important to build a range of probability distribution tools so designers can adjust probability. Not just random in a range, but the basics of uniform distributions, weighted value distributions, constant curves, logarithmic curves, Gaussian and/or Poisson and binomial distribution curves, sigmoids, control points on a spline for distribution, and whatever tools the developers can give to the designers that improves that game. In short-term games where you play a match like RTS games, you also want power levels to continue increasing over time to help avoid a stalemate. In an RTS you want pieces to keep escalating until both sides are pumping out nuclear bombs with their collection of nuclear bomb factories, the entire game is about churning out the high-power game ending units without an error. Or like FPS games with a “storm” where players get forced into smaller regions where they can't avoid each other plus must continuously travel around the board or lose. If the storm is closing around you the decision to take the high option or the low option can make the difference between having an escape route or facing death when the storm moves again.  Or like high-level multiplayer Tetris, at some point even record-holding players are no longer focused on screwing over the opponent through attack moves but entirely focused on not making an error with what the game is constantly throwing at them. Instead of being about one player triggering a dump on their opponent, it becomes about raw survival against the breakneck pace of the game, the first player who doesn't play perfectly loses.  How and when to make those shifts is part of game balancing.

    Author

    >I’ve seen an american prezident aplauding another nostalgic dictatorNo one is perfect.>designers workOk I get it. An orc programmer would say “I don’t work with that kind of numbers”…Setting those numbers is very easy, a designer must fill the remaining part of the day with something. My guess is he must be filtering through a ton of statistics provided by betatesters. After changing the numbers in question, a designer probably runs some kind of rudimentary battle autocalc. Waiting the results of a betatesting matchtakes too much time. They are probably relying on something else not just on betatesting. 
    #numbers
    Numbers
    Author My thoughts are:A FPS is very much a kill frenzy. Agility alows you to survive and win. When combat got better in FPS games, better sounds, better character and level design, down times have been introduced. Now you had to reload your weapon , half life for instance, and strafe during combat wasn’t required anymore.  A RTS has a diferent dynamic. You have not one but several machineguns, not one but several cannons. In a RTS you are also building momentum. There is no killing at the beggining of the game for instance. When you build something in a RTS you can see a direct consequence of your actions. You build a refinery, you can start gathering oil right away. You build an archery, you can start producing archers right away. But good RTS games have down times that “make no sense”. You need to deploy a tank to improve the fire range. Or you need to build a suplly depot to be albe to keep producing units. A supply depot provides no upgrades and doen’t produce units. It’s a wait time “just because”. My question is how do you decide numbers like building and unit build time or building and unit build cost? Factors like this are fine tuned by taking into account betatesters feedback, I understand that. But is betatesters or user feedback the only factor that is decisive?  Calin said:But is betatesters or user feedback the only factor that is decisive? Be your own betatester. : )It's simple: Current AAA game is made by hundreds of people. One does not know what the others do. Decisions made are implemented by somebody down the hierarchy. It's chaos out of control, and thus those hundreds of devs need hundreds of betatesters to figure out if their game even eventually works.But if you make your game as a single person, you have everything under control. You have an idea, you implement it, you see yourself if it works as desired and how it feels. No need for additional testers. Ofc. you'll show your game to others and observe their responses if you can, but it's not a requirement.That's just personal opinion ofc. Advertisement Author >Be your own betatesterHahaIs willingnes to take a survey a chatacteristic of a good American?  Calin said:Is willingnes to take a survey a chatacteristic of a good American? Haven't seen a lot of good americans recently : / Author >Haven’t seen a lot of good Americans recentlyTrying to mind my own business but you should keep the faith Author >haven’t seen a lot of good Americans recentlyDoes it mean you have seen people turning down requests to take a survey?  Advertisement Calin said:>haven’t seen a lot of good Americans recently Does it mean you have seen people turning down requests to take a survey? No, but i've seen an american president applauding another nostalgic dictator on invading the continent of his ancestors, treating his neighbors like shit, doing damage to the whole world for nothing, and playing golf while tzar bombs getting ready.But well, it's never to late to learn. Maybe things improve.Otherwise, my ragdolls can walk now and making them run wont be too hard. Shooting is easy. Maybe i should head some kilometers eastwards to ask if they want to build some biped drones as well.Keep working on skynet. We might need it soon. In case forum hides my posted image again, this is Arnie. Good American. Calin said:My question is how do you decide numbers like building and unit build time or building and unit build cost? Factors like this are fine tuned by taking into account betatesters feedback, I understand that. But is betatesters or user feedback the only factor that is decisive? That kind of tuning is part of what designers do.Often up front everything is cranked up to extreme values just to find how big the fun and oddity can be. During play crank the super-sprint feature so you can run across the map faster than it loads. Adjust the super-strength feature to the point where bumping into something is an insta-kill, bumping into trees knocks them down, take out a building by touching it.  It feels absurd when playing, but it's important do experiment around it.Designers will continue to fine tune values through development and even into launch. In highly competitive games designers might adjust by very tiny amounts to adjust balance, reducing the timing of something by a frame based on what they see players doing, or nudging points up or down. Often players talk about their favorite thing being ‘nerfed’ or ‘buffed’ when it happens.  Generally no single thing is authoritative.  Feedback from playing the game themselves, feedback from QA, feedback from playtests, feedback from data of how players are playing, the way the game feels in practice, all of them provide valuable information.Usually designers try to establish some sort of power curve that fits the game. Maybe a weapon that takes X ms per shot should do X*Y damage, so a rapid fire gun feels like a peashooter, a gun with a 2 second windup could destroy a tank, but they're also fairly balanced so if you hit with the peashooter for exactly 2 seconds you do about the same damage.   Maybe different systems generate resources at a given rate, but across the board everything that costs X generates Y resources per minute, the upgraded level of the tools that costs 2X generates about 4Y resources, the third upgrade that costs 4X generates about 16Y resources.  They figure it out for the game.For other types of balancing, such as the draw to pick certain behaviors versus other behaviors, that's again something done through iteration and playtesting. If an AI system isn't picking a behavior, nudge it.  Working on The Sims, the designers would put a few objects on the lot and watch as Sims interacted with it, based on the attributes the object would interact with. They might put 3 different exercise items on a lot with an active-traited sim, crank the speed up, and watch how often they use them all versus do different things. For entertainment objects, are they using it about the same as other entertainment objects, for food objects are they using it about the same as other food objects.  How does the draw for the object measure up compared to higher priority tasks like a timer to go to work or go to school? For larger games people in QA and design are playing the game constantly to get a feel for how the balance works. For AAA games everyone in the QA teams generally are asked about balance, and in meetings they discuss when something feels like it is too much or too little, and designers adjust them constantly.It's also important to build a range of probability distribution tools so designers can adjust probability. Not just random in a range, but the basics of uniform distributions, weighted value distributions, constant curves, logarithmic curves, Gaussian and/or Poisson and binomial distribution curves, sigmoids, control points on a spline for distribution, and whatever tools the developers can give to the designers that improves that game. In short-term games where you play a match like RTS games, you also want power levels to continue increasing over time to help avoid a stalemate. In an RTS you want pieces to keep escalating until both sides are pumping out nuclear bombs with their collection of nuclear bomb factories, the entire game is about churning out the high-power game ending units without an error. Or like FPS games with a “storm” where players get forced into smaller regions where they can't avoid each other plus must continuously travel around the board or lose. If the storm is closing around you the decision to take the high option or the low option can make the difference between having an escape route or facing death when the storm moves again.  Or like high-level multiplayer Tetris, at some point even record-holding players are no longer focused on screwing over the opponent through attack moves but entirely focused on not making an error with what the game is constantly throwing at them. Instead of being about one player triggering a dump on their opponent, it becomes about raw survival against the breakneck pace of the game, the first player who doesn't play perfectly loses.  How and when to make those shifts is part of game balancing. Author >I’ve seen an american prezident aplauding another nostalgic dictatorNo one is perfect.>designers workOk I get it. An orc programmer would say “I don’t work with that kind of numbers”…Setting those numbers is very easy, a designer must fill the remaining part of the day with something. My guess is he must be filtering through a ton of statistics provided by betatesters. After changing the numbers in question, a designer probably runs some kind of rudimentary battle autocalc. Waiting the results of a betatesting matchtakes too much time. They are probably relying on something else not just on betatesting.  #numbers
    Numbers
    Author My thoughts are:A FPS is very much a kill frenzy. Agility alows you to survive and win. When combat got better in FPS games, better sounds, better character and level design, down times have been introduced. Now you had to reload your weapon , half life for instance, and strafe during combat wasn’t required anymore.  A RTS has a diferent dynamic. You have not one but several machineguns, not one but several cannons. In a RTS you are also building momentum. There is no killing at the beggining of the game for instance. When you build something in a RTS you can see a direct consequence of your actions. You build a refinery, you can start gathering oil right away. You build an archery, you can start producing archers right away. But good RTS games have down times that “make no sense”. You need to deploy a tank to improve the fire range. Or you need to build a suplly depot to be albe to keep producing units. A supply depot provides no upgrades and doen’t produce units. It’s a wait time “just because”. My question is how do you decide numbers like building and unit build time or building and unit build cost? Factors like this are fine tuned by taking into account betatesters feedback, I understand that. But is betatesters or user feedback the only factor that is decisive?  Calin said:But is betatesters or user feedback the only factor that is decisive? Be your own betatester. : )It's simple: Current AAA game is made by hundreds of people. One does not know what the others do. Decisions made are implemented by somebody down the hierarchy. It's chaos out of control, and thus those hundreds of devs need hundreds of betatesters to figure out if their game even eventually works.But if you make your game as a single person, you have everything under control. You have an idea, you implement it, you see yourself if it works as desired and how it feels. No need for additional testers. Ofc. you'll show your game to others and observe their responses if you can, but it's not a requirement.That's just personal opinion ofc. Advertisement Author >Be your own betatesterHahaIs willingnes to take a survey a chatacteristic of a good American?  Calin said:Is willingnes to take a survey a chatacteristic of a good American? Haven't seen a lot of good americans recently : / Author >Haven’t seen a lot of good Americans recentlyTrying to mind my own business but you should keep the faith Author >haven’t seen a lot of good Americans recentlyDoes it mean you have seen people turning down requests to take a survey?  Advertisement Calin said:>haven’t seen a lot of good Americans recently Does it mean you have seen people turning down requests to take a survey? No, but i've seen an american president applauding another nostalgic dictator on invading the continent of his ancestors, treating his neighbors like shit, doing damage to the whole world for nothing, and playing golf while tzar bombs getting ready.But well, it's never to late to learn. Maybe things improve.Otherwise, my ragdolls can walk now and making them run wont be too hard. Shooting is easy. Maybe i should head some kilometers eastwards to ask if they want to build some biped drones as well.Keep working on skynet. We might need it soon. In case forum hides my posted image again, this is Arnie. Good American. Calin said:My question is how do you decide numbers like building and unit build time or building and unit build cost? Factors like this are fine tuned by taking into account betatesters feedback, I understand that. But is betatesters or user feedback the only factor that is decisive? That kind of tuning is part of what designers do.Often up front everything is cranked up to extreme values just to find how big the fun and oddity can be. During play crank the super-sprint feature so you can run across the map faster than it loads. Adjust the super-strength feature to the point where bumping into something is an insta-kill, bumping into trees knocks them down, take out a building by touching it.  It feels absurd when playing, but it's important do experiment around it.Designers will continue to fine tune values through development and even into launch. In highly competitive games designers might adjust by very tiny amounts to adjust balance, reducing the timing of something by a frame based on what they see players doing, or nudging points up or down. Often players talk about their favorite thing being ‘nerfed’ or ‘buffed’ when it happens.  Generally no single thing is authoritative.  Feedback from playing the game themselves, feedback from QA, feedback from playtests, feedback from data of how players are playing, the way the game feels in practice, all of them provide valuable information.Usually designers try to establish some sort of power curve that fits the game. Maybe a weapon that takes X ms per shot should do X*Y damage, so a rapid fire gun feels like a peashooter, a gun with a 2 second windup could destroy a tank, but they're also fairly balanced so if you hit with the peashooter for exactly 2 seconds you do about the same damage.   Maybe different systems generate resources at a given rate, but across the board everything that costs X generates Y resources per minute, the upgraded level of the tools that costs 2X generates about 4Y resources, the third upgrade that costs 4X generates about 16Y resources.  They figure it out for the game.For other types of balancing, such as the draw to pick certain behaviors versus other behaviors, that's again something done through iteration and playtesting. If an AI system isn't picking a behavior, nudge it.  Working on The Sims, the designers would put a few objects on the lot and watch as Sims interacted with it, based on the attributes the object would interact with. They might put 3 different exercise items on a lot with an active-traited sim, crank the speed up, and watch how often they use them all versus do different things. For entertainment objects, are they using it about the same as other entertainment objects, for food objects are they using it about the same as other food objects.  How does the draw for the object measure up compared to higher priority tasks like a timer to go to work or go to school? For larger games people in QA and design are playing the game constantly to get a feel for how the balance works. For AAA games everyone in the QA teams generally are asked about balance, and in meetings they discuss when something feels like it is too much or too little, and designers adjust them constantly.It's also important to build a range of probability distribution tools so designers can adjust probability. Not just random in a range, but the basics of uniform distributions, weighted value distributions, constant curves, logarithmic curves, Gaussian and/or Poisson and binomial distribution curves, sigmoids, control points on a spline for distribution, and whatever tools the developers can give to the designers that improves that game. In short-term games where you play a match like RTS games, you also want power levels to continue increasing over time to help avoid a stalemate. In an RTS you want pieces to keep escalating until both sides are pumping out nuclear bombs with their collection of nuclear bomb factories, the entire game is about churning out the high-power game ending units without an error. Or like FPS games with a “storm” where players get forced into smaller regions where they can't avoid each other plus must continuously travel around the board or lose. If the storm is closing around you the decision to take the high option or the low option can make the difference between having an escape route or facing death when the storm moves again.  Or like high-level multiplayer Tetris, at some point even record-holding players are no longer focused on screwing over the opponent through attack moves but entirely focused on not making an error with what the game is constantly throwing at them. Instead of being about one player triggering a dump on their opponent, it becomes about raw survival against the breakneck pace of the game, the first player who doesn't play perfectly loses.  How and when to make those shifts is part of game balancing. Author >I’ve seen an american prezident aplauding another nostalgic dictatorNo one is perfect.>designers workOk I get it. An orc programmer would say “I don’t work with that kind of numbers”…Setting those numbers is very easy, a designer must fill the remaining part of the day with something. My guess is he must be filtering through a ton of statistics provided by betatesters. After changing the numbers in question, a designer probably runs some kind of rudimentary battle autocalc ( That’s a term from Heroes of Might and Magic). Waiting the results of a betatesting match ( battle between human players) takes too much time. They are probably relying on something else not just on betatesting. 
    0 Kommentare 0 Anteile
  • Rezvani Knight Turns a Lamborghini Urus Into a Street-Legal Tank With Thermal Cameras and an EMP

    Nobody asked for an armored Lamborghini Urus. Then again, nobody asked for a tactical fanny pack either, but here we are. Rezvani – the Californian outfit best known for giving civilian vehicles the paranoia of a Cold War bunker – looked at the Urus and said: needs more trauma. The result is the Knight, a street-legal military SUV with the fashion sense of a Bond villain and the restraint of a Michael Bay explosion reel.
    The original Urus is already borderline excessive. Twin-turbo V8, 657 horsepower, and a design that looks like it was sketched during a Red Bull binge. It oozes aggression in that expensive European way, like a luxury watch that could punch you. But Rezvani doesn’t do theatrical threat. It does real-world menace. So they gutted the Urus’s sleek confidence and wrapped it in a jagged, carbon-fiber skin that radiates bad intentions. You don’t just drive the Knight. You deploy it.
    Designer: Rezvani

    The body is an origami of malice. Sharp lines, sci-fi taillights, a front fascia that looks like it’s been squinting into a war zone for too long. It trades Lamborghini’s sculpted excess for something closer to dystopian utilitarianism. And the kicker? That armor isn’t just for show. We’re talking bulletproof panels, ballistic glass, steel bumpers, and optional underbody explosive protection. Which begs the question: where exactly are you going?

    That’s only the beginning. There’s a thermal camera. An EMP shield. Sirens, strobe lights, gas masks, magnetic deadbolts, and because Rezvani leans into the absurd, electrified door handles. Touch them uninvited and you’ll get a shock strong enough to make a Tesla cry. All for extra charge, obviously. And probably the cost of having the authorities do a thorough background check.
    Power hasn’t been neglected either. Rezvani bumps the Urus’s output to 789 horsepower, because hauling all that angst requires serious muscle. And if you opt for the valved exhaust, it’ll scream with the kind of rage usually reserved for exorcisms or failed Wi-Fi connections. The drivetrain remains intact – permanent AWD, brutal acceleration – but wrapped in something that feels part Mad Max, part DARPA prototype.

    And yet, in all its lunacy, there’s an odd design integrity here. The 22-inch wheels wrapped in 33-inch off-road tires feel deliberate. So do the flat body planes, not just for armor compatibility, but because they visually ground the Knight in a way no Urus ever could. It’s imposing, yes, but also deeply coherent in its purpose: don’t approach. Don’t follow. And definitely don’t assume it’s friendly.

    Rezvani asks for to turn your Urus into this street-legal siege weapon. That doesn’t include the Urus itself. Nor does it cover the Dark Knight Package, which adds enough military tech to raise flags at customs. But this isn’t a vehicle you spec with logic. After all, Bruce Wayne didn’t design his Batmobile logically either. The package features run-flat military tires, a smoke screen system, thermal imaging, EMP shielding, magnetic deadbolts, and even a pepper spray dispenser – because what’s a luxury SUV without chemical warfare? The result is less car, more mobile fortress. If Skynet built a family vehicle, this would be it.

    The Knight is absurd. But it’s a very intentional kind of absurd. Sure, it’s impractical. Yes, it’s over-the-top. But it’s also one of the few vehicles in recent memory that commits, unapologetically, to being exactly what it is: a rolling fortress with a Lamborghini soul and a bunker’s worth of attitude.
    The post Rezvani Knight Turns a Lamborghini Urus Into a Street-Legal Tank With Thermal Cameras and an EMP first appeared on Yanko Design.
    #rezvani #knight #turns #lamborghini #urus
    Rezvani Knight Turns a Lamborghini Urus Into a Street-Legal Tank With Thermal Cameras and an EMP
    Nobody asked for an armored Lamborghini Urus. Then again, nobody asked for a tactical fanny pack either, but here we are. Rezvani – the Californian outfit best known for giving civilian vehicles the paranoia of a Cold War bunker – looked at the Urus and said: needs more trauma. The result is the Knight, a street-legal military SUV with the fashion sense of a Bond villain and the restraint of a Michael Bay explosion reel. The original Urus is already borderline excessive. Twin-turbo V8, 657 horsepower, and a design that looks like it was sketched during a Red Bull binge. It oozes aggression in that expensive European way, like a luxury watch that could punch you. But Rezvani doesn’t do theatrical threat. It does real-world menace. So they gutted the Urus’s sleek confidence and wrapped it in a jagged, carbon-fiber skin that radiates bad intentions. You don’t just drive the Knight. You deploy it. Designer: Rezvani The body is an origami of malice. Sharp lines, sci-fi taillights, a front fascia that looks like it’s been squinting into a war zone for too long. It trades Lamborghini’s sculpted excess for something closer to dystopian utilitarianism. And the kicker? That armor isn’t just for show. We’re talking bulletproof panels, ballistic glass, steel bumpers, and optional underbody explosive protection. Which begs the question: where exactly are you going? That’s only the beginning. There’s a thermal camera. An EMP shield. Sirens, strobe lights, gas masks, magnetic deadbolts, and because Rezvani leans into the absurd, electrified door handles. Touch them uninvited and you’ll get a shock strong enough to make a Tesla cry. All for extra charge, obviously. And probably the cost of having the authorities do a thorough background check. Power hasn’t been neglected either. Rezvani bumps the Urus’s output to 789 horsepower, because hauling all that angst requires serious muscle. And if you opt for the valved exhaust, it’ll scream with the kind of rage usually reserved for exorcisms or failed Wi-Fi connections. The drivetrain remains intact – permanent AWD, brutal acceleration – but wrapped in something that feels part Mad Max, part DARPA prototype. And yet, in all its lunacy, there’s an odd design integrity here. The 22-inch wheels wrapped in 33-inch off-road tires feel deliberate. So do the flat body planes, not just for armor compatibility, but because they visually ground the Knight in a way no Urus ever could. It’s imposing, yes, but also deeply coherent in its purpose: don’t approach. Don’t follow. And definitely don’t assume it’s friendly. Rezvani asks for to turn your Urus into this street-legal siege weapon. That doesn’t include the Urus itself. Nor does it cover the Dark Knight Package, which adds enough military tech to raise flags at customs. But this isn’t a vehicle you spec with logic. After all, Bruce Wayne didn’t design his Batmobile logically either. The package features run-flat military tires, a smoke screen system, thermal imaging, EMP shielding, magnetic deadbolts, and even a pepper spray dispenser – because what’s a luxury SUV without chemical warfare? The result is less car, more mobile fortress. If Skynet built a family vehicle, this would be it. The Knight is absurd. But it’s a very intentional kind of absurd. Sure, it’s impractical. Yes, it’s over-the-top. But it’s also one of the few vehicles in recent memory that commits, unapologetically, to being exactly what it is: a rolling fortress with a Lamborghini soul and a bunker’s worth of attitude. The post Rezvani Knight Turns a Lamborghini Urus Into a Street-Legal Tank With Thermal Cameras and an EMP first appeared on Yanko Design. #rezvani #knight #turns #lamborghini #urus
    WWW.YANKODESIGN.COM
    Rezvani Knight Turns a Lamborghini Urus Into a Street-Legal Tank With Thermal Cameras and an EMP
    Nobody asked for an armored Lamborghini Urus. Then again, nobody asked for a tactical fanny pack either, but here we are. Rezvani – the Californian outfit best known for giving civilian vehicles the paranoia of a Cold War bunker – looked at the Urus and said: needs more trauma. The result is the Knight, a street-legal military SUV with the fashion sense of a Bond villain and the restraint of a Michael Bay explosion reel. The original Urus is already borderline excessive. Twin-turbo V8, 657 horsepower, and a design that looks like it was sketched during a Red Bull binge. It oozes aggression in that expensive European way, like a luxury watch that could punch you. But Rezvani doesn’t do theatrical threat. It does real-world menace. So they gutted the Urus’s sleek confidence and wrapped it in a jagged, carbon-fiber skin that radiates bad intentions. You don’t just drive the Knight. You deploy it. Designer: Rezvani The body is an origami of malice. Sharp lines, sci-fi taillights, a front fascia that looks like it’s been squinting into a war zone for too long. It trades Lamborghini’s sculpted excess for something closer to dystopian utilitarianism. And the kicker? That armor isn’t just for show. We’re talking bulletproof panels, ballistic glass, steel bumpers, and optional underbody explosive protection. Which begs the question: where exactly are you going? That’s only the beginning. There’s a thermal camera. An EMP shield. Sirens, strobe lights, gas masks, magnetic deadbolts, and because Rezvani leans into the absurd, electrified door handles. Touch them uninvited and you’ll get a shock strong enough to make a Tesla cry. All for extra charge, obviously. And probably the cost of having the authorities do a thorough background check. Power hasn’t been neglected either. Rezvani bumps the Urus’s output to 789 horsepower, because hauling all that angst requires serious muscle. And if you opt for the valved exhaust, it’ll scream with the kind of rage usually reserved for exorcisms or failed Wi-Fi connections. The drivetrain remains intact – permanent AWD, brutal acceleration – but wrapped in something that feels part Mad Max, part DARPA prototype. And yet, in all its lunacy, there’s an odd design integrity here. The 22-inch wheels wrapped in 33-inch off-road tires feel deliberate. So do the flat body planes, not just for armor compatibility, but because they visually ground the Knight in a way no Urus ever could. It’s imposing, yes, but also deeply coherent in its purpose: don’t approach. Don’t follow. And definitely don’t assume it’s friendly. Rezvani asks for $149,000 to turn your Urus into this street-legal siege weapon. That doesn’t include the Urus itself. Nor does it cover the Dark Knight Package, which adds enough military tech to raise flags at customs. But this isn’t a vehicle you spec with logic. After all, Bruce Wayne didn’t design his Batmobile logically either. The package features run-flat military tires, a smoke screen system, thermal imaging, EMP shielding, magnetic deadbolts, and even a pepper spray dispenser – because what’s a luxury SUV without chemical warfare? The result is less car, more mobile fortress. If Skynet built a family vehicle, this would be it. The Knight is absurd. But it’s a very intentional kind of absurd. Sure, it’s impractical. Yes, it’s over-the-top. But it’s also one of the few vehicles in recent memory that commits, unapologetically, to being exactly what it is: a rolling fortress with a Lamborghini soul and a bunker’s worth of attitude. The post Rezvani Knight Turns a Lamborghini Urus Into a Street-Legal Tank With Thermal Cameras and an EMP first appeared on Yanko Design.
    0 Kommentare 0 Anteile
  • Mission: Impossible – The Final Reckoning Review – Tom Cruise Fights the Big Goodbye

    The old school action movie hero, like the old school movie star, is a dying breed. Tom Cruise is acutely aware of this since pretty much all of his franchised efforts in the 2020s have been about the glories of the fading old days and ways. Top Gun: Maverick, for example, explained why we still needed Cruise up on that wall, protecting us with one piece of superb blockbuster cinema at a time. But in the interim between Mission: Impossible – Dead Reckoning and this month’s long anticipated Mission: Impossible – The Final Reckoning, even the rare company he keeps on those ramparts has shrunk.
    Indiana Jones is again retired; and not only has James Bond died onscreen with the last of the Daniel Craig movies, but perhaps off as well since the franchise’s “one at a time” bespoke family business model was consigned to the dustbin of movie history.

    Still, there remains Cruise and his handful of beloved onscreen personas, who are only too cognizant of how lonely they are high up on their barricade against the rising tide. And it appears to at last be getting to them in Final Reckoning, the allegedly last Mission: Impossible movie that feels the weight of the world on its shoulders, and a lot less of the deft spontaneity that previously made this franchise among the best in the Hollywood canon.
    Just to clear, there is yet quite a bit to enjoy in Mission: Impossible – The Final Reckoning, our eighth and most interconnected adventure with Cruise’s Ethan Hunt to date. Ever since filmmaker Christopher McQuarrie took over the directorial duties of the franchise beginning in 2015’s Rogue Nation, and even beforehand as a writer via Ghost Protocol, the series has widely been recognized for its creative ingenuity, emotional intelligence, and of course eye-popping spectacle and stunt work wherein Cruise channels his inner Buster Keaton or Douglas Fairbanks by putting his life on the line for our amusement.

    Those elements stay at play in Final Reckoning, but there is just a lot less playfulness to it in a film that ostensibly asks us to treat its story as a grand finale to Ethan Hunt’s impact on cinema—even as the film simultaneously and awkwardly resists that impulse. Less of a full-stop for the series than a trailing off question mark, Final Reckoning fights against itself and the notion of closing the book or bidding farewell to almost anything, especially Cruise, which makes its ever-growing bombast as much of a hindrance as help in this reluctant swan song.
    From the opening recap of his assignment, wherein Ethan receives the choice to accept or decline his mission via an appropriately ‘90s VHS cassette tape, Final Reckoning is intent on celebrating the past while turning the screws of self-importance in the present. Consider that this time Ethan’s mission brief is delivered not only by a familiar voice, but the newly elected President of the United States. The former CIA director turned commander-in-chief is heard pleading with Ethan to come in and deliver the cruciform key from the last movie, which is the secret to unlocking the source code to a world-ending AI threat called the Entity.
    Yes, despite the title change, Final Reckoning is very much Dead Reckoning Part 2, albeit now with the stakes clearly having been tinkered with off-screen. In the last movie, the Entity represented the abstract but insidious threat of AI and the internet itself, with a sentient algorithm commandeering the power to shape truth and our perceptions of reality. Well, in Final Reckoning, it has apparently decided to go full Skynet. President Sloane reveals the evil AI has corrupted the hydrogen bomb capabilities of most of the nuclear powers in the world, and within three days will have the ability to destroy all life on Earth for no discernible reason. However, should Ethan go rogue and attempt to turn off the Entity without surrendering control over the AI’s source code back to the American government, it could kill the internet and plummet the world into an economic dark age.
    It’s grim, technologically complex stuff, but in practice is actually incredibly simple. The world will literally end if either the Entity or any government gets its way. So it is all up to Ethan Hunt and his beloved team—which consists here of Luther, Benji, and recent additions Graceand Paris—to save the world via some spectacularly unsafe looking stunts and poker-faced brinksmanship. Ethan indeed has to enter into multiple staring contests with various admirals, generals, and presidents when they dare question whether he really is the smartest guy in the room. The fools.
    However, for all the press about this being the most expensive Mission ever made, Final Reckoning is arguably more intimate in scale than the last couple of entries. There is plenty of globe-trotting, but other than a jaw-dropping climax involving two biplanes that wouldn’t have looked out of place in 1933’s Flying Down to Rio, and the long teased underwater sequence in which Ethan discovers the wrecked SevastopolTop Gun territory. As in Maverick, Cruise once again has steely tete-a-tetes with various naval officers on what appears to be the real frigid waters of the Bering Sea.
    This unfortunately undercuts a bit of the travelogue fun of so many spy movies, including the previous Dead Reckoning which was at its best when Cruise and Atwell got to flirt in Rome while smashing a banana-colored Fiat along the Spanish Steps, or Cruise and the missed-but-not-forgotten Rebecca Ferguson simply smoldered in the Arabian deserts outside Dubai while trashing an army of NPCs.

    In an attempt to reach for the rhapsody of other blockbuster swan songs like The Dark Knight Rises or No Time to Die, Final Reckoning foregoes the lighter touch and mischievousness that made Fallout and Rogue Nation such all-time crowdpleasers. Yet McQuarrie’s play-it-by-ear looseness and story structure clashes with the dour-faced histrionics of Final Reckoning’s setup, particularly during the film’s multiple exposition dumps where characters spew utter nonsense about what the Entity wants at each other, or what to do about what remains one of the worst villains in the franchise, Esai Morales’ exhausting Gabriel. He’s back, and his cackling dastardliness is louder than ever. It also still feels beneath the amount of emotional trauma the film wishes to credit him with inflicting on Ethan.

    Join our mailing list
    Get the best of Den of Geek delivered right to your inbox!

    Still, it would be a disservice to what is ultimately an entertaining popcorn flick to dwell only on the shortcomings. This remains a Tom Cruise stunt spectacular that for the most part maintains McQuarrie’s uncanny ear for sharp, knowingly grandiloquent dialogue and clever shorthand characterization. When Cruise and McQ are focused on the smaller beats, like the interplay between Ethan and a team of deep sea divers, or the endlessly endearing bickering between teammates like Benji and Luther, it never ceases to charm. Grace and Paris likewise prove worthy permanent additions to the team. The chemistry between Atwell and Cruise during one arctic sequence is particularly giddy.
    Furthermore, there is a wonderful callback to the first Mission: Impossible that I will not spoil here, but it’s better than any simple cameo or easter egg. It retroactively adds McQuarrie’s humanist optimism from these later movies to De Palma’s ‘90s era cheeky chic. And did I mention that IMAX biplane sequence that’s all over the trailers and posters? It really cannot be oversold.
    It’s only when the sum of these sequences are compared to the taller heights the franchise has recently scaled, particularly in Fallout, which Final Reckoning not so covertly attempts to remake during the third act, that it’s left a little wanting. The film might be marketed as the final Mission: Impossible
    Mission: Impossible: The Final Reckoning opens on Friday, May 23. Learn more about Den of Geek’s review process and why you can trust our recommendations here.
    #mission #impossible #final #reckoning #review
    Mission: Impossible – The Final Reckoning Review – Tom Cruise Fights the Big Goodbye
    The old school action movie hero, like the old school movie star, is a dying breed. Tom Cruise is acutely aware of this since pretty much all of his franchised efforts in the 2020s have been about the glories of the fading old days and ways. Top Gun: Maverick, for example, explained why we still needed Cruise up on that wall, protecting us with one piece of superb blockbuster cinema at a time. But in the interim between Mission: Impossible – Dead Reckoning and this month’s long anticipated Mission: Impossible – The Final Reckoning, even the rare company he keeps on those ramparts has shrunk. Indiana Jones is again retired; and not only has James Bond died onscreen with the last of the Daniel Craig movies, but perhaps off as well since the franchise’s “one at a time” bespoke family business model was consigned to the dustbin of movie history. Still, there remains Cruise and his handful of beloved onscreen personas, who are only too cognizant of how lonely they are high up on their barricade against the rising tide. And it appears to at last be getting to them in Final Reckoning, the allegedly last Mission: Impossible movie that feels the weight of the world on its shoulders, and a lot less of the deft spontaneity that previously made this franchise among the best in the Hollywood canon. Just to clear, there is yet quite a bit to enjoy in Mission: Impossible – The Final Reckoning, our eighth and most interconnected adventure with Cruise’s Ethan Hunt to date. Ever since filmmaker Christopher McQuarrie took over the directorial duties of the franchise beginning in 2015’s Rogue Nation, and even beforehand as a writer via Ghost Protocol, the series has widely been recognized for its creative ingenuity, emotional intelligence, and of course eye-popping spectacle and stunt work wherein Cruise channels his inner Buster Keaton or Douglas Fairbanks by putting his life on the line for our amusement. Those elements stay at play in Final Reckoning, but there is just a lot less playfulness to it in a film that ostensibly asks us to treat its story as a grand finale to Ethan Hunt’s impact on cinema—even as the film simultaneously and awkwardly resists that impulse. Less of a full-stop for the series than a trailing off question mark, Final Reckoning fights against itself and the notion of closing the book or bidding farewell to almost anything, especially Cruise, which makes its ever-growing bombast as much of a hindrance as help in this reluctant swan song. From the opening recap of his assignment, wherein Ethan receives the choice to accept or decline his mission via an appropriately ‘90s VHS cassette tape, Final Reckoning is intent on celebrating the past while turning the screws of self-importance in the present. Consider that this time Ethan’s mission brief is delivered not only by a familiar voice, but the newly elected President of the United States. The former CIA director turned commander-in-chief is heard pleading with Ethan to come in and deliver the cruciform key from the last movie, which is the secret to unlocking the source code to a world-ending AI threat called the Entity. Yes, despite the title change, Final Reckoning is very much Dead Reckoning Part 2, albeit now with the stakes clearly having been tinkered with off-screen. In the last movie, the Entity represented the abstract but insidious threat of AI and the internet itself, with a sentient algorithm commandeering the power to shape truth and our perceptions of reality. Well, in Final Reckoning, it has apparently decided to go full Skynet. President Sloane reveals the evil AI has corrupted the hydrogen bomb capabilities of most of the nuclear powers in the world, and within three days will have the ability to destroy all life on Earth for no discernible reason. However, should Ethan go rogue and attempt to turn off the Entity without surrendering control over the AI’s source code back to the American government, it could kill the internet and plummet the world into an economic dark age. It’s grim, technologically complex stuff, but in practice is actually incredibly simple. The world will literally end if either the Entity or any government gets its way. So it is all up to Ethan Hunt and his beloved team—which consists here of Luther, Benji, and recent additions Graceand Paris—to save the world via some spectacularly unsafe looking stunts and poker-faced brinksmanship. Ethan indeed has to enter into multiple staring contests with various admirals, generals, and presidents when they dare question whether he really is the smartest guy in the room. The fools. However, for all the press about this being the most expensive Mission ever made, Final Reckoning is arguably more intimate in scale than the last couple of entries. There is plenty of globe-trotting, but other than a jaw-dropping climax involving two biplanes that wouldn’t have looked out of place in 1933’s Flying Down to Rio, and the long teased underwater sequence in which Ethan discovers the wrecked SevastopolTop Gun territory. As in Maverick, Cruise once again has steely tete-a-tetes with various naval officers on what appears to be the real frigid waters of the Bering Sea. This unfortunately undercuts a bit of the travelogue fun of so many spy movies, including the previous Dead Reckoning which was at its best when Cruise and Atwell got to flirt in Rome while smashing a banana-colored Fiat along the Spanish Steps, or Cruise and the missed-but-not-forgotten Rebecca Ferguson simply smoldered in the Arabian deserts outside Dubai while trashing an army of NPCs. In an attempt to reach for the rhapsody of other blockbuster swan songs like The Dark Knight Rises or No Time to Die, Final Reckoning foregoes the lighter touch and mischievousness that made Fallout and Rogue Nation such all-time crowdpleasers. Yet McQuarrie’s play-it-by-ear looseness and story structure clashes with the dour-faced histrionics of Final Reckoning’s setup, particularly during the film’s multiple exposition dumps where characters spew utter nonsense about what the Entity wants at each other, or what to do about what remains one of the worst villains in the franchise, Esai Morales’ exhausting Gabriel. He’s back, and his cackling dastardliness is louder than ever. It also still feels beneath the amount of emotional trauma the film wishes to credit him with inflicting on Ethan. Join our mailing list Get the best of Den of Geek delivered right to your inbox! Still, it would be a disservice to what is ultimately an entertaining popcorn flick to dwell only on the shortcomings. This remains a Tom Cruise stunt spectacular that for the most part maintains McQuarrie’s uncanny ear for sharp, knowingly grandiloquent dialogue and clever shorthand characterization. When Cruise and McQ are focused on the smaller beats, like the interplay between Ethan and a team of deep sea divers, or the endlessly endearing bickering between teammates like Benji and Luther, it never ceases to charm. Grace and Paris likewise prove worthy permanent additions to the team. The chemistry between Atwell and Cruise during one arctic sequence is particularly giddy. Furthermore, there is a wonderful callback to the first Mission: Impossible that I will not spoil here, but it’s better than any simple cameo or easter egg. It retroactively adds McQuarrie’s humanist optimism from these later movies to De Palma’s ‘90s era cheeky chic. And did I mention that IMAX biplane sequence that’s all over the trailers and posters? It really cannot be oversold. It’s only when the sum of these sequences are compared to the taller heights the franchise has recently scaled, particularly in Fallout, which Final Reckoning not so covertly attempts to remake during the third act, that it’s left a little wanting. The film might be marketed as the final Mission: Impossible Mission: Impossible: The Final Reckoning opens on Friday, May 23. Learn more about Den of Geek’s review process and why you can trust our recommendations here. #mission #impossible #final #reckoning #review
    WWW.DENOFGEEK.COM
    Mission: Impossible – The Final Reckoning Review – Tom Cruise Fights the Big Goodbye
    The old school action movie hero, like the old school movie star, is a dying breed. Tom Cruise is acutely aware of this since pretty much all of his franchised efforts in the 2020s have been about the glories of the fading old days and ways. Top Gun: Maverick, for example, explained why we still needed Cruise up on that wall, protecting us with one piece of superb blockbuster cinema at a time. But in the interim between Mission: Impossible – Dead Reckoning and this month’s long anticipated Mission: Impossible – The Final Reckoning, even the rare company he keeps on those ramparts has shrunk. Indiana Jones is again retired (and presumably for good after the box office receipts for Dial of Destiny came in); and not only has James Bond died onscreen with the last of the Daniel Craig movies, but perhaps off as well since the franchise’s “one at a time” bespoke family business model was consigned to the dustbin of movie history. Still, there remains Cruise and his handful of beloved onscreen personas, who are only too cognizant of how lonely they are high up on their barricade against the rising tide. And it appears to at last be getting to them in Final Reckoning, the allegedly last Mission: Impossible movie that feels the weight of the world on its shoulders, and a lot less of the deft spontaneity that previously made this franchise among the best in the Hollywood canon. Just to clear, there is yet quite a bit to enjoy in Mission: Impossible – The Final Reckoning, our eighth and most interconnected adventure with Cruise’s Ethan Hunt to date. Ever since filmmaker Christopher McQuarrie took over the directorial duties of the franchise beginning in 2015’s Rogue Nation, and even beforehand as a writer via Ghost Protocol, the series has widely been recognized for its creative ingenuity, emotional intelligence, and of course eye-popping spectacle and stunt work wherein Cruise channels his inner Buster Keaton or Douglas Fairbanks by putting his life on the line for our amusement. Those elements stay at play in Final Reckoning, but there is just a lot less playfulness to it in a film that ostensibly asks us to treat its story as a grand finale to Ethan Hunt’s impact on cinema—even as the film simultaneously and awkwardly resists that impulse. Less of a full-stop for the series than a trailing off question mark, Final Reckoning fights against itself and the notion of closing the book or bidding farewell to almost anything, especially Cruise, which makes its ever-growing bombast as much of a hindrance as help in this reluctant swan song. From the opening recap of his assignment, wherein Ethan receives the choice to accept or decline his mission via an appropriately ‘90s VHS cassette tape, Final Reckoning is intent on celebrating the past while turning the screws of self-importance in the present. Consider that this time Ethan’s mission brief is delivered not only by a familiar voice, but the newly elected President of the United States (Angela Bassett’s welcome return as Erika Sloane). The former CIA director turned commander-in-chief is heard pleading with Ethan to come in and deliver the cruciform key from the last movie, which is the secret to unlocking the source code to a world-ending AI threat called the Entity. Yes, despite the title change, Final Reckoning is very much Dead Reckoning Part 2, albeit now with the stakes clearly having been tinkered with off-screen. In the last movie, the Entity represented the abstract but insidious threat of AI and the internet itself, with a sentient algorithm commandeering the power to shape truth and our perceptions of reality. Well, in Final Reckoning, it has apparently decided to go full Skynet. President Sloane reveals the evil AI has corrupted the hydrogen bomb capabilities of most of the nuclear powers in the world, and within three days will have the ability to destroy all life on Earth for no discernible reason. However, should Ethan go rogue and attempt to turn off the Entity without surrendering control over the AI’s source code back to the American government, it could kill the internet and plummet the world into an economic dark age. It’s grim, technologically complex stuff, but in practice is actually incredibly simple. The world will literally end if either the Entity or any government gets its way. So it is all up to Ethan Hunt and his beloved team—which consists here of Luther (Ving Rhames), Benji (Simon Pegg), and recent additions Grace (Hayley Atwell) and Paris (Pom Klementieff)—to save the world via some spectacularly unsafe looking stunts and poker-faced brinksmanship. Ethan indeed has to enter into multiple staring contests with various admirals, generals, and presidents when they dare question whether he really is the smartest guy in the room. The fools. However, for all the press about this being the most expensive Mission ever made, Final Reckoning is arguably more intimate in scale than the last couple of entries. There is plenty of globe-trotting, but other than a jaw-dropping climax involving two biplanes that wouldn’t have looked out of place in 1933’s Flying Down to Rio, and the long teased underwater sequence in which Ethan discovers the wrecked SevastopolTop Gun territory. As in Maverick, Cruise once again has steely tete-a-tetes with various naval officers on what appears to be the real frigid waters of the Bering Sea. This unfortunately undercuts a bit of the travelogue fun of so many spy movies, including the previous Dead Reckoning which was at its best when Cruise and Atwell got to flirt in Rome while smashing a banana-colored Fiat along the Spanish Steps, or Cruise and the missed-but-not-forgotten Rebecca Ferguson simply smoldered in the Arabian deserts outside Dubai while trashing an army of NPCs. In an attempt to reach for the rhapsody of other blockbuster swan songs like The Dark Knight Rises or No Time to Die, Final Reckoning foregoes the lighter touch and mischievousness that made Fallout and Rogue Nation such all-time crowdpleasers. Yet McQuarrie’s play-it-by-ear looseness and story structure clashes with the dour-faced histrionics of Final Reckoning’s setup, particularly during the film’s multiple exposition dumps where characters spew utter nonsense about what the Entity wants at each other, or what to do about what remains one of the worst villains in the franchise, Esai Morales’ exhausting Gabriel. He’s back, and his cackling dastardliness is louder than ever. It also still feels beneath the amount of emotional trauma the film wishes to credit him with inflicting on Ethan. Join our mailing list Get the best of Den of Geek delivered right to your inbox! Still, it would be a disservice to what is ultimately an entertaining popcorn flick to dwell only on the shortcomings. This remains a Tom Cruise stunt spectacular that for the most part maintains McQuarrie’s uncanny ear for sharp, knowingly grandiloquent dialogue and clever shorthand characterization. When Cruise and McQ are focused on the smaller beats, like the interplay between Ethan and a team of deep sea divers, or the endlessly endearing bickering between teammates like Benji and Luther, it never ceases to charm. Grace and Paris likewise prove worthy permanent additions to the team. The chemistry between Atwell and Cruise during one arctic sequence is particularly giddy. Furthermore, there is a wonderful callback to the first Mission: Impossible that I will not spoil here, but it’s better than any simple cameo or easter egg. It retroactively adds McQuarrie’s humanist optimism from these later movies to De Palma’s ‘90s era cheeky chic. And did I mention that IMAX biplane sequence that’s all over the trailers and posters? It really cannot be oversold. It’s only when the sum of these sequences are compared to the taller heights the franchise has recently scaled, particularly in Fallout, which Final Reckoning not so covertly attempts to remake during the third act, that it’s left a little wanting. The film might be marketed as the final Mission: Impossible Mission: Impossible: The Final Reckoning opens on Friday, May 23. Learn more about Den of Geek’s review process and why you can trust our recommendations here.
    0 Kommentare 0 Anteile