• New Claude 4 AI model refactored code for 7 hours straight

    No sleep till Brooklyn


    Anthropic says Claude 4 beats Gemini on coding benchmarks; works autonomously for hours.

    Benj Edwards



    May 22, 2025 12:45 pm


    The Claude 4 logo, created by Anthropic.

    Credit:

    Anthropic



    On Thursday, Anthropic released Claude Opus 4 and Claude Sonnet 4, marking the company's return to larger model releases after primarily focusing on mid-range Sonnet variants since June of last year. The new models represent what the company calls its most capable coding models yet, with Opus 4 designed for complex, long-running tasks that can operate autonomously for hours.
    Alex Albert, Anthropic's head of Claude Relations, told Ars Technica that the company chose to revive the Opus line because of growing demand for agentic AI applications. "Across all the companies out there that are building things, there's a really large wave of these agentic applications springing up, and a very high demand and premium being placed on intelligence," Albert said. "I think Opus is going to fit that groove perfectly."
    Before we go further, a brief refresher on Claude's three AI model "size" names (first introduced in March 2024) is probably warranted. Haiku, Sonnet, and Opus offer a tradeoff between price, speed, and capability.
    Haiku models are the smallest, least expensive to run, and least capable in terms of what you might call "context depth" (considering conceptual relationships in the prompt) and encoded knowledge. Owing to their small parameter count, Haiku models retain fewer concrete facts and thus tend to confabulate more frequently (plausibly answering questions based on a lack of data) than larger models, but they are much faster at basic tasks. Sonnet is traditionally a mid-range model that strikes a balance between cost and capability, and Opus models have always been the largest and slowest to run. However, Opus models process context more deeply and are hypothetically better suited for deep logical tasks.

    A screenshot of the Claude web interface with Opus 4 and Sonnet 4 options shown.

    Credit:

    Anthropic

    There is no Claude 4 Haiku just yet, but the new Sonnet and Opus models can reportedly handle tasks that previous versions could not. In our interview with Albert, he described testing scenarios where Opus 4 worked coherently for up to 24 hours on tasks like playing Pokémon, while code-refactoring tasks in Claude Code ran for seven hours without interruption. Earlier Claude models typically lasted only one to two hours before losing coherence, Albert said, meaning that the models could only produce useful self-referencing outputs for that long before they began emitting too many errors.

    In particular, that marathon refactoring claim reportedly comes from Rakuten, a Japanese tech services conglomerate that "validated [Claude's] capabilities with a demanding open-source refactor running independently for 7 hours with sustained performance," Anthropic said in a news release.
    Whether you'd want to leave an AI model unsupervised for that long is another question entirely, because even the most capable AI models can introduce subtle bugs, go down unproductive rabbit holes, or make choices that seem logical to the model but miss important context that a human developer would catch. While many people now use Claude for easy-going vibe coding, as we covered in March, the human-powered (and ironically named) "vibe debugging" that often results from long AI coding sessions is also a very real thing. More on that below.
    To shore up some of those shortcomings, Anthropic built memory capabilities into both new Claude 4 models, allowing them to maintain external files for storing key information across long sessions. When developers provide access to local files, the models can create and update "memory files" to track progress and things they deem important over time. Albert compared this to how humans take notes during extended work sessions.
    Extended thinking meets tool use
    Both Claude 4 models introduce what Anthropic calls "extended thinking with tool use," a new beta feature allowing the models to alternate between simulated reasoning and using external tools like web search, similar to what OpenAI's o3 and o4-mini-high AI models currently do in ChatGPT. While Claude 3.7 Sonnet already had strong tool use capabilities, the new models can now interleave simulated reasoning and tool calling in a single response.
    "So now we can actually think, call a tool, process the results, think some more, call another tool, and repeat until it gets to a final answer," Albert explained to Ars. The models self-determine when they have reached a useful conclusion, a capability picked up through training rather than governed by explicit human programming.
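Anthropic hasn't released the internals of this loop, but the general shape of an interleaved think/act agent is well established. Here is a hedged Python sketch under assumed names (`run_agent`, `model_step`, and `web_search` are illustrative stand-ins, not Anthropic's API): on each turn the model either requests a tool call, whose result is appended to the transcript, or declares a final answer.

```python
def web_search(query):
    # Stand-in tool; a real agent would hit an actual search backend
    return f"results for: {query}"

TOOLS = {"web_search": web_search}

def run_agent(task, model_step, max_turns=10):
    """Alternate between model 'thinking' and tool execution until the
    model itself decides it has a final answer (or we hit a turn cap)."""
    transcript = [("task", task)]
    for _ in range(max_turns):
        action = model_step(transcript)      # model reasons over transcript
        if action["type"] == "final":        # model self-determines it's done
            return action["answer"]
        result = TOOLS[action["tool"]](action["input"])
        transcript.append(("tool_result", result))  # feed result back in
    return None                              # safety cap reached
```

The key design point Albert describes is that the stopping decision lives inside `model_step`: the loop imposes only a turn cap, not a fixed script of tool calls.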

    General Claude 4 benchmark results, provided by Anthropic.

    Credit:

    Anthropic

    In practice, we've anecdotally found parallel tool use capability very useful in AI assistants like OpenAI o3, since they don't have to rely on what is trained in their neural network to provide accurate answers. Instead, these more agentic models can iteratively search the web, parse the results, analyze images, and spin up coding tasks for analysis in ways that can avoid falling into a confabulation trap by relying solely on pure LLM outputs.

    “The world’s best coding model”
    Anthropic says Opus 4 leads industry benchmarks for coding tasks, achieving 72.5 percent on SWE-bench and 43.2 percent on Terminal-bench, calling it "the world's best coding model." According to Anthropic, companies using early versions report improvements. Cursor described it as "state-of-the-art for coding and a leap forward in complex codebase understanding," while Replit noted "improved precision and dramatic advancements for complex changes across multiple files."
    In fact, GitHub announced it will use Sonnet 4 as the base model for its new coding agent in GitHub Copilot, citing the model's performance in "agentic scenarios" in Anthropic's news release. Sonnet 4 scored 72.7 percent on SWE-bench while maintaining faster response times than Opus 4. The fact that GitHub is betting on Claude rather than a model from its parent company Microsoft (which has close ties to OpenAI) suggests Anthropic has built something genuinely competitive.

    Software engineering benchmark results, provided by Anthropic.

    Credit:

    Anthropic

    Anthropic says it has addressed a persistent issue with Claude 3.7 Sonnet in which users complained that the model would take unauthorized actions or provide excessive output. Albert said the company reduced this "reward hacking behavior" by approximately 80 percent in the new models through training adjustments. An 80 percent reduction in unwanted behavior sounds impressive, but that also suggests that 20 percent of the problem behavior remains—a big concern when we're talking about AI models that might be performing autonomous tasks for hours.
    When we asked about code accuracy, Albert said that human code review is still an important part of shipping any production code. "There's a human parallel, right? So this is just a problem we've had to deal with throughout the whole nature of software engineering. And this is why the code review process exists, so that you can catch these things. We don't anticipate that going away with models either," Albert said. "If anything, the human review will become more important, and more of your job as developer will be in this review than it will be in the generation part."

    Pricing and availability
    Both Claude 4 models maintain the same pricing structure as their predecessors: Opus 4 costs $15 per million tokens for input and $75 per million for output, while Sonnet 4 remains at $3 and $15. The models offer two response modes: traditional LLM output and simulated reasoning ("extended thinking") for complex problems. Given that some Claude Code sessions can apparently run for hours, those per-token costs will likely add up very quickly for users who let the models run wild.
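A back-of-envelope calculation shows how quickly that compounds at the published Opus 4 rates. The token counts below are invented for illustration; only the per-million prices come from Anthropic's pricing.

```python
# Published Opus 4 API rates (USD per million tokens)
OPUS_INPUT_PER_M = 15.0
OPUS_OUTPUT_PER_M = 75.0

def session_cost(input_tokens, output_tokens):
    """Rough cost of a session at Opus 4 rates."""
    return (input_tokens / 1e6) * OPUS_INPUT_PER_M + \
           (output_tokens / 1e6) * OPUS_OUTPUT_PER_M

# Hypothetical marathon session: an agent re-reading a large codebase
# might cycle 20M input tokens and emit 2M output tokens.
print(f"${session_cost(20_000_000, 2_000_000):.2f}")  # prints $450.00
```

Even a modest fraction of those hypothetical token counts would put a multi-hour autonomous session well into double-digit dollar territory.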
    Anthropic made both models available through its API, Amazon Bedrock, and Google Cloud Vertex AI. Sonnet 4 remains accessible to free users, while Opus 4 requires a paid subscription.
    The Claude 4 models also debut Claude Code (first introduced in February) as a generally available product after months of preview testing. Anthropic says the coding environment now integrates with VS Code and JetBrains IDEs, showing proposed edits directly in files. A new SDK allows developers to build custom agents using the same framework.

    A screenshot of "Claude Plays Pokemon," a custom application where Claude 4 attempts to beat the classic Game Boy game.

    Credit:

    Anthropic

    Even with Anthropic's future riding on the capability of these new models, when we asked about how they guide Claude's behavior by fine-tuning, Albert acknowledged that the inherent unpredictability of these systems presents ongoing challenges for both them and developers. "In the realm and the world of software for the past 40, 50 years, we've been running on deterministic systems, and now all of a sudden, it's non-deterministic, and that changes how we build," he said.
    "I empathize with a lot of people out there trying to use our APIs and language models generally because they have to almost shift their perspective on what it means for reliability, what it means for powering a core of your application in a non-deterministic way," Albert added. "These are general oddities that have kind of just been flipped, and it definitely makes things more difficult, but I think it opens up a lot of possibilities as well."

    Benj Edwards
    Senior AI Reporter


    Benj Edwards is Ars Technica's Senior AI Reporter and founder of the site's dedicated AI beat in 2022. He's also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.

  • Microsoft Open Sources Windows Subsystem for Linux

    Windows Subsystem for Linux (WSL) is now open source, Microsoft said Monday. The tool, which allows developers to run Linux distributions directly in Windows, is available for download, modification, and contribution. "We want Windows to be a great dev box," said Pavan Davuluri, corporate VP at Microsoft. "Having great WSL performance and capabilities" allows developers "to live in the Windows-native experience and take advantage of all they need in Linux."

    First launched in 2016 with an emulated Linux kernel, WSL switched to using the actual Linux kernel in 2019 with WSL 2, improving compatibility. The system has since gained support for GPUs, graphical applications, and systemd. Microsoft significantly refactored core Windows components to make WSL a standalone system before open sourcing it.

    Read more of this story at Slashdot.
  • VALHALLA: v1.0.0 Beta 2 Released!

    VALHALLA's second beta dropped on May 01, 2025. Grab it while it's hot!

    Posted by garsipal on May 6th, 2025

    May 01, 2025 - The second beta is out!

    Change log:

    Reworked the game HUD with animations and improved gameplay feedback
    Game version string update
    - Show game version in the server browser
    - Display warning if client and server versions mismatch before connecting
    - Show client game version in the main menu
    Added disconnection error alert box
    Major Invasion mode refinements for better pacing and balance
    - Fixed monster behaviors
    - Reworked appearance and attacks for certain monsters
    - Fixed hitbox issues
    Revamped offline match setup and voting interface
    New texture manipulation features
    - Transform textures using matrices (`vmatrix` / `texmatrix`)
    - Adjust texture hue (`vhue` / `texhue`)
    - Support for spritesheet animations
    Optional IP geolocation and custom flag support
    - Servers can enable GeoIP
    - Players can customize their profile with country or custom flags
    Smoother gameplay feel with updated animations and effects
    - Enhanced camera shake and visual effects
    - Improved bobbing animations
    - Added natural weapon sway
    New music tracks
    - Win/lose themes
    - New main menu track
    - Additional gameplay tracks
    More community maps
    New announcer lines and voice lines
    Improved tutorial level
    - Added new tasks
    - More environmental storytelling
    - Extra missions
    Interactive ragdolls
    Reworked projectile system
    - Projectiles can now be destroyed in both online and offline modes
    - Server recognizes who destroyed a projectile and credits the kill
    - Code refactored for better maintainability
    Added interface to customize controls
    Improved intermission (end-of-match) experience
    - Shorter delay before exit
    - Voting interface for next map and mode
    - Fixed infinite intermission bug in Invasion mode
    Weapon balancing and feedback improvements
    - Stronger recoil for more powerful weapons
    - New zoom effect
    - Tweaked weapon stats
    Fixed various bot behavior issues
    New 3D and 2D assets
    Tons of fixes: UI, editor, memory management, optimization and crashes
    Codebase refactoring and cleanup behind the scenes
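    The texture-matrix feature above boils down to applying a 2D affine transform to UV coordinates. As a generic illustration only (this is not VALHALLA's actual `vmatrix`/`texmatrix` implementation, and the function names here are hypothetical), a minimal sketch in Python:

    ```python
    import math

    def make_uv_matrix(scale=1.0, rotation_deg=0.0, tx=0.0, ty=0.0):
        """Build a 3x3 row-major affine matrix for transforming UV coordinates."""
        c = math.cos(math.radians(rotation_deg))
        s = math.sin(math.radians(rotation_deg))
        return [
            [scale * c, -scale * s, tx],
            [scale * s,  scale * c, ty],
            [0.0,        0.0,       1.0],
        ]

    def transform_uv(m, u, v):
        """Apply the affine matrix to one (u, v) texture coordinate."""
        x = m[0][0] * u + m[0][1] * v + m[0][2]
        y = m[1][0] * u + m[1][1] * v + m[1][2]
        return x, y

    # Rotate UVs 90 degrees and shift 0.5 along u:
    m = make_uv_matrix(rotation_deg=90.0, tx=0.5)
    u, v = transform_uv(m, 1.0, 0.0)
    print(round(u, 6), round(v, 6))  # 0.5 1.0
    ```

    A hue shift works the same way, just with a rotation matrix applied in color space instead of UV space.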
  • Free C++ System That Combines ALS, GASP, & Motion Matching For Unreal Engine 5
    Called the RDR System, this C++-only project was developed to showcase how the Advanced Locomotion System Refactored (ALS) can be integrated with Motion Matching features from the Game Animation Sample Project (GASP). While it's not a complete gameplay framework, the RDR System is designed to serve as a learning resource and a foundation for exploring advanced animation workflows.
    It's available for free, as David Pusher combined two separate projects and a third-party asset with the goal of gaining a deeper understanding of how these systems work and, as he explained, sharing something that others might also find helpful.

    RDR System features:
    - Hybrid locomotion system built on top of ALS and GASP core logic, with a clean and extensible C++ foundation suitable for learning or rapid prototyping
    - Seamless switching between first-person and third-person perspectives, with both views fully supported by dedicated animations and weapon logic
    - Modular weapon component system designed to work with both first- and third-person modes; includes ready-to-use animations for equipping, unequipping, and aiming weapons, and supports placeholder visibility logic for cinematic transitions and mesh control
    - Motion Matching integration for smooth, realistic animation transitions; works in both first-person and third-person modes and blends responsive gameplay with high-fidelity motion capture data
    Source: https://80.lv/articles/free-c-system-that-combines-als-gasp-motion-matching-for-unreal-engine-5/
CGShares https://cgshares.com