• GAMINGBOLT.COM
    Monster Hunter Wilds’ Blossomdance Festival Starts Today, Adds New Quests and Armor
    The first seasonal event for Capcom’s Monster Hunter Wilds will begin today with Festival of Accord: Blossomdance. It offers free (and paid) content but players must first progress through the game enough to unlock the Grand Hub. They’ll find the space sporting Spring-themed decorations while the Diva offers a new song. New meals will be available, and upon obtaining tickets, players can craft the new Sakuratide α (which grants bonus tickets as rewards) and Felyne Papier-Mâché α Palico armor sets. Event quests like Arch-Tempered Rey Dau also return for a limited time on April 29th. Players will also receive two Lucky Vouchers and three Barrel Bowling Vouchers daily. Other notable cosmetics include a new Seikret appearance, nameplate, background, pose, Pop-up Camp decorations, gestures, and more, all obtained by logging in. There’s also the Blossomdance DLC Pack, which offers new stickers, poses, weapon charms, a new Seikret decoration and the Spring Blossom Kimono for Alma. The Blossomdance festival is available until May 6th, but stay tuned for updates on other new content (including the next Capcom collab).
  • WWW.CANADIANARCHITECT.COM
    Ontario ordered to pause Toronto bike lane removal until Charter case decided
A cyclist rides in a bike lane on University Avenue in Toronto, Friday, Dec. 13, 2024. THE CANADIAN PRESS/Laura Proctor
Premier Doug Ford’s government has been ordered to keep its hands off three major Toronto bike lanes until a judge can decide whether a plan to remove them is unconstitutional. The injunction handed down April 22 was heralded as a win by the cyclist group challenging Ontario’s bid to rip up the lanes on Bloor Street, Yonge Street and University Avenue. “It’s definitely a win for anyone who wants fact-based and data-driven decisions,” said Michael Longfield, Cycle Toronto’s executive director. “I hope this gives the province an opportunity to maybe pause and reverse this legislation and instead work on real solutions that will keep Torontonians and Ontarians moving.” A spokesperson for Ontario’s transportation minister said the government intends to respect the court’s decision. Design work will continue so the government can start removing the bike lanes “as soon as possible should the decision uphold the legislation,” wrote spokesperson Dakota Brasier. Ford’s Progressive Conservative government gave itself the power last year to remove 19 kilometres of protected bike lanes, over the objections of the city. It passed a law that also requires cities to seek provincial approval to install new lanes that cut into vehicle traffic. The province suggested that targeting bike lanes on the three major roadways would help reduce Toronto’s traffic congestion. Ontario Superior Court Justice Paul Schabas, who heard a challenge of the law brought by Cycle Toronto and two cyclists, appeared to be skeptical of that justification. “There is evidence that their removal will have little or no impact on the professed objectives of the legislation as stated by the minister of transportation,” Schabas wrote in the April 22 injunction ruling. The ruling said that despite the government’s claim that there was an urgent need to cut congestion, it presented no evidence about the process to remove the lanes or plans for what would go in their place. Not granting the pause would mean the government could try to dismantle the bike lanes before he has time to decide the case, Schabas wrote. “It is likely that the bike lanes are more easily removed than rebuilt or restored,” his ruling said. Those challenging the law argue that it violates the Canadian Charter of Rights and Freedoms and that removing bike lanes puts lives at risk. The April 22 injunction ruling said the legal challenge raised “important and complex constitutional issues” and Schabas had not yet formed a “final view on the matter.” But the evidence before him after last week’s hearing, he wrote, is that removing the bike lanes could cause increased collisions, injuries and even deaths of cyclists. Ford’s recent re-election campaign included fresh promises to reduce traffic congestion in Toronto. He has cited the Bloor Street bike lane, not far from his own home, as part of what’s contributing to gridlock. Lawyers for the cyclists used the government’s own internal documents to poke holes in that argument last week. They presented internal ministry documents stating that the government’s plan may not reduce congestion. An engineering report commissioned by the government found any congestion benefits would be negligible or short-lived, a lawyer for the cyclists pointed out in court. The same report found bike lanes were predicted to reduce crashes among all road users by between 35 and 50 per cent.
Schabas’s ruling said the government “relied on anecdotal evidence and the opinion of a real estate management professor who does not appear to directly address the key issue of whether removal of the bike lanes will in fact alleviate congestion.” The judge’s ruling said the government’s own internal advice suggested accidents and injuries were likely to increase if the lanes were removed. — With files from Liam Casey.
  • WWW.SMITHSONIANMAG.COM
    Conservators Are Puzzling Together Ancient Roman Murals Found in Hundreds of Pieces
Excavated from a nearly 2,000-year-old villa in Valencia, Spain, the broken-up murals once formed fresco decor.
The broken walls of the villa are covered in frescoes, or paintings made on wet plaster. Vilamuseu
In the ruins of an ancient Roman villa in Spain, researchers have unearthed over 4,000 fragments of murals painted in the early second century. Now, experts are conserving and reassembling these puzzle pieces to revive the decorative walls of this Roman outpost built during the reign of Emperor Trajan. The villa, known as Barberes Sud, is located in Villajoyosa on Spain’s Mediterranean coast, near Alicante. Its latest excavation—carried out by the local municipal archaeology service, housed at the Vilamuseu, and the Alebus Historical Heritage Company—covered more than 9,000 square feet. According to a translated statement by the Vilamuseu, archaeologists determined the villa contained an industrial section, a multi-room atrium and a large garden surrounded by “stately,” “richly decorated” rooms. Today, only the foundations of these rooms remain. Their collapsed walls were made of compacted clay and covered in frescoes—watercolor paintings made atop wet plaster. The researchers carefully collected, numbered and documented the painted fragments from a collapsed wall in one of the rooms, then brought them to the Vilamuseu’s restoration laboratory for reconstruction. So far, experts have pieced together 22 of the 866 pieces of one painted panel from the wall. The fresco depicts draped green garlands, cartoonish birds and red motifs. Other fragments from the site appear to have flaked off large columns that once supported the villa’s porticoed garden: They’re composed of curved stucco gouged with decorative vertical lines, meant to make the columns fluted.
The excavation covered over 9,000 square feet. Vilamuseu
The Barberes Sud villa was built nearly 2,000 years ago by Romans, near a road connecting the Roman settlement of Alonís, or Allon, to the sea. The empire had conquered the Iberian Peninsula—now modern Spain and Portugal—between 218 B.C.E. and 19 C.E., fighting first to expel the Carthaginians from the land, then various tribes. Dubbed Hispania, the peninsula became an important, incorporated Roman region. Several senators would come from Spain, including Trajan and Hadrian, who later became successive emperors.
The fragments are being conserved in Vilamuseu’s restoration laboratory for reconstruction. Vilamuseu
The Romans left their mark on the region of modern Villajoyosa. Previous excavations have unearthed Roman baths built in 85 C.E., and a second-century Roman funerary tower still stands near the coast. As Artnet News’ Min Chen reports, the tower’s dedicatee is believed to be a prominent Alonís resident named Lucio Terencio Mancino.
Researchers are photographing the pieces at uniform scale so they can be digitally fit together. Vilamuseu
In 1999, divers found a Roman shipwreck off the coast of Villajoyosa. Known as the Bou Ferrer, it’s one of the largest Roman shipwrecks ever found in the Mediterranean. It sank in the first century while carrying a massive cargo of fish sauce: 2,500 amphorae filled with fermented anchovy, mackerel and horse mackerel. The recent excavations of the Barberes Sud villa have helped researchers discern the ancient residence’s layout. Conservators will continue to restore and fit together its broken murals, in an effort to see more of the villa’s rich decoration.
  • VENTUREBEAT.COM
    A new, open source text-to-speech model called Dia has arrived to challenge ElevenLabs, OpenAI and more
A two-person startup by the name of Nari Labs has introduced Dia, a 1.6 billion parameter text-to-speech (TTS) model designed to produce naturalistic dialogue directly from text prompts — and one of its creators claims it surpasses the performance of competing proprietary offerings from the likes of ElevenLabs and Google’s hit NotebookLM AI podcast generation product. It could also threaten uptake of OpenAI’s recent gpt-4o-mini-tts. “Dia rivals NotebookLM’s podcast feature while surpassing ElevenLabs Studio and Sesame’s open model in quality,” said Toby Kim, one of the co-creators of Nari and Dia, in a post from his account on the social network X. In a separate post, Kim noted that the model was built with “zero funding,” and added across a thread: “…we were not AI experts from the beginning. It all started when we fell in love with NotebookLM’s podcast feature when it was released last year. We wanted more—more control over the voices, more freedom in the script. We tried every TTS API on the market. None of them sounded like real human conversation.” Kim further credited Google for giving him and his collaborator access to the company’s Tensor Processing Unit chips (TPUs) for training Dia through Google’s Research Cloud. Dia’s code and weights — the internal model connection set — are now available for download and local deployment by anyone from Hugging Face or GitHub. Individual users can try generating speech from it on a Hugging Face Space.
Advanced controls and more customizable features
Dia supports nuanced features like emotional tone, speaker tagging, and nonverbal audio cues—all from plain text. Users can mark speaker turns with tags like [S1] and [S2], and include cues like (laughs), (coughs), or (clears throat) to enrich the resulting dialogue with nonverbal behaviors. These tags are correctly interpreted by Dia during generation—something not reliably supported by other available models, according to the company’s examples page. The model is currently English-only and not tied to any single speaker’s voice, producing different voices per run unless users fix the generation seed or provide an audio prompt. Audio conditioning, or voice cloning, lets users guide speech tone and voice likeness by uploading a sample clip. Nari Labs offers example code to facilitate this process and a Gradio-based demo so users can try it without setup.
Comparison with ElevenLabs and Sesame
Nari offers a host of example audio files generated by Dia on its Notion website, comparing it to other leading text-to-speech rivals, specifically ElevenLabs Studio and Sesame CSM-1B, the latter a new text-to-speech model from Oculus VR headset co-creator Brendan Iribe that went somewhat viral on X earlier this year. Side-by-side examples shared by Nari Labs show how Dia outperforms the competition in several areas: In standard dialogue scenarios, Dia handles both natural timing and nonverbal expressions better. For example, in a script ending with (laughs), Dia interprets and delivers actual laughter, whereas ElevenLabs and Sesame output textual substitutions like “haha.” In multi-turn conversations with emotional range, Dia demonstrates smoother transitions and tone shifts. One test included a dramatic, emotionally charged emergency scene.
Dia rendered the urgency and speaker stress effectively, while competing models often flattened delivery or lost pacing. Dia uniquely handles nonverbal-only scripts, such as a humorous exchange involving coughs, sniffs, and laughs. Competing models failed to recognize these tags or skipped them entirely. Even with rhythmically complex content like rap lyrics, Dia generates fluid, performance-style speech that maintains tempo. This contrasts with more monotone or disjointed outputs from ElevenLabs and Sesame’s 1B model. Using audio prompts, Dia can extend or continue a speaker’s voice style into new lines. An example using a conversational clip as a seed showed how Dia carried vocal traits from the sample through the rest of the scripted dialogue. This feature isn’t robustly supported in other models. In one set of tests, Nari Labs noted that Sesame’s best website demo likely used an internal 8B version of the model rather than the public 1B checkpoint, resulting in a gap between advertised and actual performance.
Model access and tech specs
Developers can access Dia from Nari Labs’ GitHub repository and its Hugging Face model page. The model runs on PyTorch 2.0+ and CUDA 12.6 and requires about 10GB of VRAM. Inference on enterprise-grade GPUs like the NVIDIA A4000 delivers roughly 40 tokens per second. While the current version only runs on GPU, Nari plans to offer CPU support and a quantized release to improve accessibility. The startup offers both a Python library and a CLI tool to further streamline deployment. Dia’s flexibility opens use cases from content creation to assistive technologies and synthetic voiceovers. Nari Labs is also developing a consumer version of Dia aimed at casual users looking to remix or share generated conversations. Interested users can sign up via email to a waitlist for early access.
Fully open source
The model is distributed under a fully open source Apache 2.0 license, which means it can be used for commercial purposes — something that will obviously appeal to enterprises or indie app developers. Nari Labs explicitly prohibits usage that includes impersonating individuals, spreading misinformation, or engaging in illegal activities. The team encourages responsible experimentation and has taken a stance against unethical deployment. Dia’s development credits support from the Google TPU Research Cloud, Hugging Face’s ZeroGPU grant program, and prior work on SoundStorm, Parakeet, and the Descript Audio Codec. Nari Labs itself comprises just two engineers—one full-time and one part-time—but they actively invite community contributions through their Discord server and GitHub. With a clear focus on expressive quality, reproducibility, and open access, Dia adds a distinctive new voice to the landscape of generative speech models.
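For readers who want to try the tagged-script format described above, here is a minimal local-generation sketch. It assumes the Python interface shown in Nari Labs' example code (a Dia.from_pretrained loader and a generate method) and a 44.1 kHz output rate; those names and defaults are assumptions drawn from the project's examples, not a verified API reference.

```python
# Hedged sketch: generating a two-speaker dialogue with Dia locally.
# Assumes the interface from Nari Labs' example code (Dia.from_pretrained / generate)
# and a 44.1 kHz sample rate; verify both against the current GitHub README.
import soundfile as sf          # pip install soundfile
from dia.model import Dia       # from the nari-labs Dia repository

model = Dia.from_pretrained("nari-labs/Dia-1.6B")  # ~10GB of VRAM, GPU-only for now

# Speaker turns are marked with [S1]/[S2]; nonverbal cues go in parentheses.
script = (
    "[S1] Did you hear the new model can laugh? (laughs) "
    "[S2] I did. (clears throat) It sounds surprisingly human."
)

audio = model.generate(script)          # waveform array (assumed return type)
sf.write("dialogue.wav", audio, 44100)  # write the clip to disk
```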
  • WWW.THEVERGE.COM
    Google is scrapping its planned changes for third-party cookies in Chrome
Google’s plan to phase out third-party cookies in Chrome is officially over. In an update on Tuesday, Google Privacy Sandbox VP Anthony Chavez says the company has decided “to maintain our current approach to offering users third-party cookie choice in Chrome.” For years, critics have argued that Google’s Privacy Sandbox could harm advertisers and violate privacy laws, while the Electronic Frontier Foundation (EFF) told users to opt out of the program, saying it “is still tracking your internet use for Google’s behavioral advertising.” Last week, a US judge found that Google “willfully engaged in a series of anticompetitive acts” in the advertising technology industry, and the competition regulator in the UK, the Competition and Markets Authority (CMA), has been investigating its evolving series of proposals to address concerns that they might give Google an unfair advantage. Google’s Privacy Sandbox initiative was announced in 2020 with plans to block third-party cookies in Chrome by default, similar to Firefox and Safari. It has made several changes to the initiative over the years, such as proposals like its Topics API to assign interests to users based on their web activity and other tools it said could target ads while still offering some privacy. But following years of delays and scrutiny, Google said last year that it would let users choose whether to opt into a cookie-less Chrome. Now, it appears the initiative has come almost completely to an end. “As we’ve engaged with the ecosystem, including publishers, developers, regulators, and the ads industry, it remains clear that there are divergent perspectives on making changes that could impact the availability of third-party cookies,” Chavez writes, adding that Google “will not be rolling out a new standalone prompt for third-party cookies.” The Movement for an Open Web (MOW), which filed a complaint with the CMA about the initiative in 2020, said Google’s latest update is an “admission” that the Privacy Sandbox is over. “Google’s intention was to remove open and interoperable communications standards to bring digital advertising traffic under their sole control and, with this announcement, that aim is now over,” MOW co-founder James Rosewell said in an emailed statement to The Verge. “They’ve recognised that the regulatory obstacles to their monopolistic project are insurmountable and have given up.”
  • WWW.THEVERGE.COM
    Tesla is making progress on its 1950s diner and drive-in
Tesla has shared new images of its first retro drive-in restaurant location in Los Angeles. The company’s CEO Elon Musk dreamt up the idea of a 1950s-themed “Tesla Diner” and Supercharger hub, complete with employees on rollerskates, as far back as 2018. More recently, Musk described it as “Grease meets Jetsons with Supercharging,” as reported by The New York Times. Last week, The Real Deal reported that Tesla is nearing completion of the diner, located on Santa Monica Boulevard. It features two floors, rooftop seating, a bar, two drive-up movie screens, and 30 Supercharger stalls. Tesla got the permit for the diner seven years ago. At the time, California was the de facto home of Tesla and a champion of its electric vehicles, but since then, the company has moved its headquarters to Texas, Musk has gotten into far-right politics with his deep involvement in the Trump Administration, and revenue is tanking, as seen in today’s earnings report.
  • WWW.MARKTECHPOST.COM
    A Coding Guide to Build an Agentic AI‑Powered Asynchronous Ticketing Assistant Using PydanticAI Agents, Pydantic v2, and SQLite Database
In this tutorial, we’ll build an end‑to‑end ticketing assistant powered by Agentic AI using the PydanticAI library. We’ll define our data rules with Pydantic v2 models, store tickets in an in‑memory SQLite database, and generate unique identifiers with Python’s uuid module. Behind the scenes, two agents, one for creating tickets and one for checking status, leverage Google Gemini (via PydanticAI’s google-gla provider) to interpret your natural‑language prompts and call our custom database functions. The result is a clean, type‑safe workflow you can run immediately in Colab.

!pip install --upgrade pip
!pip install pydantic-ai

First, these two commands update your pip installer to the latest version, bringing in new features and security patches, and then install PydanticAI. This library enables the definition of type-safe AI agents and the integration of Pydantic models with LLMs.

import os
from getpass import getpass

if "GEMINI_API_KEY" not in os.environ:
    os.environ["GEMINI_API_KEY"] = getpass("Enter your Google Gemini API key: ")

We check whether the GEMINI_API_KEY environment variable is already set. If not, we securely prompt you (without echoing) to enter your Google Gemini API key at runtime, then store it in os.environ so that your Agentic AI calls can authenticate automatically.

!pip install nest_asyncio

We install the nest_asyncio package, which lets you patch the existing asyncio event loop so that you can call async functions (or use .run_sync()) inside environments like Colab without running into “event loop already running” errors.

import sqlite3
import uuid
from dataclasses import dataclass
from typing import Literal

from pydantic import BaseModel, Field
from pydantic_ai import Agent, RunContext

We bring in Python’s sqlite3 for our in‑memory database and uuid to generate unique ticket IDs, use dataclass and Literal for clear dependency and type definitions, and load Pydantic’s BaseModel/Field for enforcing data schemas alongside Agent and RunContext from PydanticAI to wire up and run our conversational agents.

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE tickets (
    ticket_id TEXT PRIMARY KEY,
    summary TEXT NOT NULL,
    severity TEXT NOT NULL,
    department TEXT NOT NULL,
    status TEXT NOT NULL
)
""")
conn.commit()

We set up an in‑memory SQLite database and define a tickets table with columns for ticket_id, summary, severity, department, and status, then commit the schema so you have a lightweight, transient store for managing your ticket records.
@dataclass
class TicketingDependencies:
    """Carries our DB connection into system prompts and tools."""
    db: sqlite3.Connection

class CreateTicketOutput(BaseModel):
    ticket_id: str = Field(..., description="Unique ticket identifier")
    summary: str = Field(..., description="Text summary of the issue")
    severity: Literal["low", "medium", "high"] = Field(..., description="Urgency level")
    department: str = Field(..., description="Responsible department")
    status: Literal["open"] = Field("open", description="Initial ticket status")

class TicketStatusOutput(BaseModel):
    ticket_id: str = Field(..., description="Unique ticket identifier")
    status: Literal["open", "in_progress", "resolved"] = Field(..., description="Current ticket status")

Here, we define a simple TicketingDependencies dataclass to pass our SQLite connection into each agent call, and then declare two Pydantic models: CreateTicketOutput (with fields for ticket ID, summary, severity, department, and default status “open”) and TicketStatusOutput (with ticket ID and its current status). These models enforce a clear, validated structure on everything our agents return, ensuring you always receive well-formed data.

create_agent = Agent(
    "google-gla:gemini-2.0-flash",
    deps_type=TicketingDependencies,
    output_type=CreateTicketOutput,
    system_prompt="You are a ticketing assistant. Use the `create_ticket` tool to log new issues."
)

@create_agent.tool
async def create_ticket(
    ctx: RunContext[TicketingDependencies],
    summary: str,
    severity: Literal["low", "medium", "high"],
    department: str
) -> CreateTicketOutput:
    """Logs a new ticket in the database."""
    tid = str(uuid.uuid4())
    ctx.deps.db.execute(
        "INSERT INTO tickets VALUES (?,?,?,?,?)",
        (tid, summary, severity, department, "open")
    )
    ctx.deps.db.commit()
    return CreateTicketOutput(
        ticket_id=tid,
        summary=summary,
        severity=severity,
        department=department,
        status="open"
    )

We create a PydanticAI Agent named create_agent that’s wired to Google Gemini and is aware of our SQLite connection (deps_type=TicketingDependencies) and output schema (CreateTicketOutput). The @create_agent.tool decorator then registers an async create_ticket function, which generates a UUID, inserts a new row into the tickets table, and returns a validated CreateTicketOutput object.

status_agent = Agent(
    "google-gla:gemini-2.0-flash",
    deps_type=TicketingDependencies,
    output_type=TicketStatusOutput,
    system_prompt="You are a ticketing assistant. Use the `get_ticket_status` tool to retrieve current status."
)

@status_agent.tool
async def get_ticket_status(
    ctx: RunContext[TicketingDependencies],
    ticket_id: str
) -> TicketStatusOutput:
    """Fetches the ticket status from the database."""
    cur = ctx.deps.db.execute(
        "SELECT status FROM tickets WHERE ticket_id = ?", (ticket_id,)
    )
    row = cur.fetchone()
    if not row:
        raise ValueError(f"No ticket found for ID {ticket_id!r}")
    return TicketStatusOutput(ticket_id=ticket_id, status=row[0])

We set up a second PydanticAI Agent, status_agent, also using the Google Gemini provider and our shared TicketingDependencies. It registers an async get_ticket_status tool that looks up a given ticket_id in the SQLite database and returns a validated TicketStatusOutput, or raises an error if the ticket isn’t found.
deps = TicketingDependencies(db=conn)

create_result = await create_agent.run(
    "My printer on 3rd floor shows a paper jam error.",
    deps=deps
)
print("Created Ticket →")
print(create_result.output.model_dump_json(indent=2))

tid = create_result.output.ticket_id
status_result = await status_agent.run(
    f"What's the status of ticket {tid}?",
    deps=deps
)
print("Ticket Status →")
print(status_result.output.model_dump_json(indent=2))

Finally, we package your SQLite connection into deps, then ask the create_agent to log a new ticket via a natural‑language prompt, printing the validated ticket data as JSON. We then take the returned ticket_id, query the status_agent for that ticket’s current state, and print the status in JSON form. In conclusion, you have seen how Agentic AI and PydanticAI work together to automate a complete service process, from logging a new issue to retrieving its live status, all managed through conversational prompts. Our use of Pydantic v2 ensures every ticket matches the schema you define, while SQLite provides a lightweight backend that’s easy to replace with any database. With these tools in place, you can expand the assistant, adding new agent functions, integrating other AI models like openai:gpt-4o, or connecting real‑world APIs, confident that your data remains structured and reliable throughout. Here is the Colab Notebook.
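The conclusion mentions swapping in other models such as openai:gpt-4o. As a minimal sketch of what that change looks like, assuming an OPENAI_API_KEY environment variable is set and that the installed pydantic-ai version accepts the "openai:gpt-4o" model string, only the Agent constructor needs to change; the dependencies, output schemas, and tool functions defined above are reused as-is.

```python
# Hedged sketch: pointing the same ticketing agent at OpenAI instead of Gemini.
# Assumes OPENAI_API_KEY is set in the environment and that this pydantic-ai
# version accepts the "openai:gpt-4o" model string; schemas and deps are reused
# from the tutorial above.
create_agent_openai = Agent(
    "openai:gpt-4o",
    deps_type=TicketingDependencies,
    output_type=CreateTicketOutput,
    system_prompt="You are a ticketing assistant. Use the `create_ticket` tool to log new issues.",
)

# Tools are registered per agent, so the create_ticket function above would be
# registered again on this agent with @create_agent_openai.tool before running it.
```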
  • WWW.MARKTECHPOST.COM
    Researchers at Physical Intelligence Introduce π-0.5: A New AI Framework for Real-Time Adaptive Intelligence in Physical Systems
Designing intelligent systems that function reliably in dynamic physical environments remains one of the more difficult frontiers in AI. While significant advances have been made in perception and planning within simulated or controlled contexts, the real world is noisy, unpredictable, and resistant to abstraction. Traditional AI systems often rely on high-level representations detached from their physical implementations, leading to inefficiencies in response time, brittleness to unexpected changes, and excessive power consumption. In contrast, humans and animals exhibit remarkable adaptability through tight sensorimotor feedback loops. Reproducing even a fraction of that adaptability in embodied systems is a substantial challenge.
Physical Intelligence Introduces π-0.5: A Framework for Embodied Adaptation
To address these constraints, Physical Intelligence has introduced π-0.5—a lightweight and modular framework designed to integrate perception, control, and learning directly within physical systems. As described in their recent blog post, π-0.5 serves as a foundational building block for what the team terms “physical intelligence”: systems that learn from and adapt to the physical world through constant interaction, not abstraction alone. Rather than isolating intelligence in a centralized digital core, π-0.5 distributes processing and control throughout the system in compact modules. Each module, termed a “π-node,” encapsulates sensor inputs, local actuation logic, and a small, trainable neural component. These nodes can be chained or scaled across various embodiments, from wearables to autonomous agents, and are designed to react locally before resorting to higher-level computation. This architecture reflects a core assumption of the Physical Intelligence team: cognition emerges from action—not apart from it.
Technical Composition and Functional Characteristics
π-0.5 combines three core elements: (1) low-latency signal processing, (2) real-time learning loops, and (3) modular hardware-software co-design. Signal processing at the π-node level is tailored to the physical embodiment—allowing for motion-specific or material-specific response strategies. Learning is handled through a minimal but effective reinforcement update rule, enabling nodes to adapt weights in response to performance signals over time. Importantly, this learning is localized: individual modules do not require centralized orchestration to evolve their behavior. A central advantage of this decentralized model is energy efficiency. By distributing computation and minimizing the need for global communication, the system reduces latency and energy draw—key factors for edge devices and embedded systems. Additionally, the modularity of π-0.5 makes it hardware-agnostic, capable of interfacing with a variety of microcontrollers, sensors, and actuators. Another technical innovation is the system’s support for tactile and kinesthetic feedback integration. π-0.5 is built to accommodate proprioceptive sensing, which enhances its capacity to maintain adaptive behavior in response to physical stress, deformation, or external forces—especially relevant for soft robotics and wearable interfaces.
Preliminary Results and Application Scenarios
Initial demonstrations of π-0.5 showcase its adaptability across a variety of scenarios.
In a soft robotic gripper prototype, the inclusion of π-0.5 nodes enabled the system to self-correct grip force based on the texture and compliance of held objects—without relying on pre-programmed models or external computation. Compared to a traditional control loop, this approach yielded a 30% improvement in grip accuracy and a 25% reduction in power consumption under similar test conditions. In wearable prototypes, π-0.5 allowed for localized adaptation to different body movements, achieving smoother haptic feedback and better energy regulation during continuous use. These results highlight π-0.5’s potential not just in robotics but in augmentative human-machine interfaces, where context-sensitive responsiveness is critical.
Conclusion
π-0.5 marks a deliberate step away from monolithic AI architectures toward systems that closely couple intelligence with physical interaction. Rather than pursuing ever-larger centralized models, Physical Intelligence proposes a distributed, embodied approach grounded in modular design and real-time adaptation. This direction aligns with long-standing goals in cybernetics and biologically inspired computing—treating intelligence not as a product of abstraction, but as a property that emerges from constant physical engagement. As AI continues to move into real-world systems, from wearables to autonomous machines, the need for low-power, adaptive, and resilient architectures will grow. π-0.5 offers a compelling foundation for meeting these requirements, contributing to a more integrated and physically grounded conception of intelligent systems. Check out the Technical details.
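The π-node idea above (local sensing, local actuation, a small trainable component, and a localized update with no central orchestrator) can be made concrete with a short sketch. The following Python is purely illustrative and is not code from Physical Intelligence; the class name, method names, and update rule are all hypothetical, chosen only to mirror the description in the article.

```python
# Purely illustrative sketch of a "pi-node"-style module as described above:
# local sensing, local actuation, a small trainable component, and a local
# reinforcement-style update. NOT Physical Intelligence's code; all names and
# the update rule are hypothetical.
import numpy as np

class PiNode:
    def __init__(self, n_inputs: int, learning_rate: float = 0.01):
        rng = np.random.default_rng(0)
        self.weights = rng.normal(scale=0.1, size=n_inputs)  # small trainable component
        self.lr = learning_rate

    def act(self, sensor_input: np.ndarray) -> float:
        # Local actuation decision computed directly from raw sensor input,
        # without consulting any central controller.
        return float(np.tanh(self.weights @ sensor_input))

    def update(self, sensor_input: np.ndarray, action: float, reward: float) -> None:
        # Minimal local update: nudge weights so that rewarded actions become
        # more likely given similar sensor readings. Each node adapts on its own.
        self.weights += self.lr * reward * action * sensor_input

# Example: one node adjusting grip force from a 3-axis pressure reading.
node = PiNode(n_inputs=3)
reading = np.array([0.2, 0.5, 0.1])
force = node.act(reading)
node.update(reading, action=force, reward=1.0)
```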
  • TOWARDSAI.NET
    How to Instantly Explain Your Code with Visuals (Powered by GPT-4)
Author(s): Mukundan Sankar. Originally published on Towards AI, April 22, 2025. Tired of people not reading your blog or GitHub README? Here’s how to turn your Python script into a visual story anyone can understand in seconds. In Part 1, I introduced Code to Story — a tool that helps you turn raw Python code into a structured, human-readable story. I built it for one reason: I was tired of writing code I couldn’t explain. Not because I didn’t understand it. But because when someone — a hiring manager, a teammate, even a friend — asked, “What does this do?” …I froze. I’d stumble. I’d default to low-energy phrases like: “Oh, it’s just something I was playing around with…” I realized I had spent hours solving a real problem… only to fail at the most important step: communication. So I built a tool that solved that — something that turns code into a narrative. That was Part 1. But there was a deeper layer I hadn’t solved yet. Even after turning code into blog posts, people still didn’t engage. Why? Because they didn’t have the time. When I sent my blog to future hiring managers, friends I respect, and developers I admire, they didn’t react. Not because they didn’t care. But because they were busy. Busy working. Job-hunting. Parenting. Resting. The truth hit me hard: No one owes your work their time. But you can make your work easier to understand in less time. So I asked myself: What’s the fastest way for someone to “get it” without reading anything? And the… Read the full blog for free on Medium.
  • WWW.IGN.COM
    Alienware Has the Best Price on a GeForce RTX 4090 Prebuilt Gaming PC
The GeForce RTX 4090 is a generation older than the new Blackwell 50 series GPUs, but this doesn't change the fact that it's still one of the most powerful cards out there, eclipsing the GeForce RTX 5080, the RTX 4080 Super, the Radeon RX 9070 XT, and the RX 7900 XTX. Only one GPU performs better - the RTX 5090 - and you'll need to use up a lifetime of luck to find one that isn't marked up by hundreds, even thousands of dollars. Because the RTX 4090 has been discontinued, it's getting harder to source as well. Fortunately, Dell still sells an Alienware Aurora R16 gaming PC configuration that can be equipped with a 4090 GPU. Not only is it one of the few RTX 4090 prebuilts still available - Lenovo and HP no longer carry them - it's also one of the more reasonably priced ones.
Alienware Aurora R16 RTX 4090 Gaming PC for $2,999.99
Alienware Aurora R16 Intel Core Ultra 7 265F RTX 4090 Gaming PC - $2,999.99 at Alienware
This Alienware Aurora R16 gaming PC is equipped with an Intel Core Ultra 7 265F CPU, GeForce RTX 4090 GPU, 16GB of DDR5-5200MHz RAM, and a 1TB NVMe SSD. The processor can also be upgraded up to an Intel Core Ultra 9 285K. If you're getting a system with a focus on gaming, then the upgrade is unnecessary. Gaming at higher resolutions is almost always GPU bound, and besides, the default Intel Core Ultra 7 265F is a solid processor with a max turbo frequency of 5.3GHz and a total of 20 cores. It's cooled by a robust 240mm all-in-one liquid cooler and the entire system is powered by a 1,000W 80PLUS Platinum power supply.
Get an Upgraded Model for $3,749.99
Alienware Aurora R16 Intel Core Ultra 9 285K RTX 4090 Gaming PC (32GB/2TB) - $3,749.99 at Dell
Dell also offers this upgraded RTX 4090 model for $3,749.99 with free shipping. It's about $750 more than the base model Alienware 4090 gaming PC, but that's because the processor has been upgraded to a much more powerful Intel Core Ultra 9 285K CPU. You also get double the RAM and storage.
How does the RTX 4090 stack up against current cards?
The RTX 4090 is the most powerful RTX 40 series GPU on the market. Compared to the new Blackwell cards, only the $2,000 MSRP RTX 5090 is superior in performance. This card will run every game comfortably at 4K resolution; you should be hitting 60+fps even with all settings turned to the max and ray tracing enabled, doubly so if DLSS is supported. The only setting that the 4090 (as well as every other GPU) struggles with is path tracing, but no one really ever turns this on except during benchmark tests or social media flexing. The RTX 5090 might be faster, but for the vast majority of people out there, it's just wasted power since the 4090 already excels at pretty much all things gaming.
Nvidia GeForce RTX 4090 GPU Review by Chris Coke
"The RTX 4090 may be huge and expensive, but holy smokes if it doesn’t blow the competition out of the water. That’s a little unfair because it’s currently the only card of this new generation that’s available, so we only have cards from the past few years to compare it to. But until the rest of the pack can catch up, between its impressive hardware specs and its DLSS 3 AI wizardry, even the $1,599 price doesn’t seem unreasonable for the unrivaled frame rates that this card can crank out."
Alternative: Alienware RTX 5080 Gaming PC for $2,400
Alienware Aurora R16 Intel Core Ultra 7 265F RTX 5080 Gaming PC (16GB/1TB) - $2,399.99 at Alienware
Dell is offering an Alienware Aurora R16 gaming PC equipped with the new GeForce RTX 5080 GPU for $2,399.99 shipped.
The RTX 5080 is one of three new Blackwell graphics cards that are out (and impossible to find). In our Nvidia GeForce RTX 5080 FE review, Jackie writes that "If you already have a high-end graphics card from the last couple of years, the Nvidia GeForce RTX 5080 doesn’t make a lot of sense – it just doesn’t have much of a performance lead over the RTX 4080, though the extra frames from DLSS 4 Multi-Frame Generation do make things look better in games that support it. However, for gamers with an older graphics card who want a significant performance boost, the RTX 5080 absolutely provides – doubly so if you’re comfortable with Nvidia’s AI goodies." Check out more of the best Alienware deals.
Why Should You Trust IGN's Deals Team?
IGN's deals team has a combined 30+ years of experience finding the best discounts in gaming, tech, and just about every other category. We don't try to trick our readers into buying things they don't need at prices that aren't worth paying. Our ultimate goal is to surface the best possible deals from brands we trust and that our editorial team has personal experience with. You can check out our deals standards here for more information on our process, or keep up with the latest deals we find on IGN's Deals account on Twitter. Eric Song is the IGN commerce manager in charge of finding the best gaming and tech deals every day. When Eric isn't hunting for deals for other people at work, he's hunting for deals for himself during his free time.