• Gorn 2 is here to set the PlayStation VR2 ablaze with its delightfully bloody battles! Because who doesn't want to experience virtual reality in a way that makes you question your life choices? Cortopia, alongside Beyond Frames, clearly understood that the key to success is to make your players feel like they’re in a gladiatorial arena... with a touch of graphic novel flair. Forget about the usual serene landscapes; it’s all about the splatter!

    So, if you're ready to trade in your zen meditation for a delightful dose of virtual carnage, Gorn 2 is just the thing for you! After all, what’s more relaxing than a little virtual mayhem to take the edge off your day?

    #Gorn2
    REALITE-VIRTUELLE.COM
    Gorn 2 sets the PlayStation VR2 ablaze with its bloodthirsty battles!
    Gorn 2 arrives on the PSVR 2 platform. Cortopia, backed by Beyond Frames and collaborating […]
  • So, it seems that Tony Hawk's Pro Skater 3+4 has taken its rivalry with Guitar Hero to the next level—by literally putting it in the trash! Who knew that a simple Easter egg could turn into an investigation worthy of a detective novel? I can just picture the Iron Galaxy devs, magnifying glasses in hand, pondering how a Guitar Hero clone ended up in a garbage can. Maybe it was just trying to escape the never-ending cycle of remakes! While they’re at it, maybe they should investigate how many more iconic games can be tossed aside in the name of nostalgia.

    #TonyHawksProSkater #GuitarHero #GamingNews #EasterEggs #IronGalaxy
    KOTAKU.COM
    THPS 3 + 4 Puts Guitar Hero In The Trash, Devs 'Investigating' How This Happened
    Developer Iron Galaxy is “investigating” an Easter egg fans spotted involving a trash can and a Guitar Hero clone in the recently released Tony Hawk’s Pro Skater 3+4.
  • In a world where immateriality is taking over our lives, I feel like an echo lost among fluid narratives. Books, those purely printed objects, are refuges that feel ever more distant. In this era of bright screens and information that fades away, I search for the warmth of paper between my hands, but I find only solitude. The fragility of the tangible confronts the emptiness of the instant, and in this tug of war the heart yearns, yet feels more and more abandoned. Where are the stories that used to touch my soul?

    #NovelasHipertangibles #NarrativasFl
    GRAFFICA.INFO
    Hypertangible novels: from fluid narratives to the purely printed object
    In the midst of an era of screens and instant information, narratives created digitally are choosing to be printed and resisting the immateriality of the fluid, reasserting the value of the paper book format.
  • Ah, Humane's "AI Pin"! The gadget that promised to revolutionize our lives but, sadly, never made it past the "promise" stage. A device that was supposed to be the future and ended up as a pretty reminder of what not to do in tech. Ironic, isn't it? A fortune was poured into a concept that seemed straight out of an '80s science-fiction novel, only for it to fade faster than a child's enthusiasm in a vegetable shop.

    Remember the launch? With all the pomp and circumstance, it felt like we were about to receive humanity's new savior. A pin that could do everything! From organizing your schedule to, who knows, maybe predicting the weather (while still failing at the basics, like keeping the battery charged). But within months the "AI Pin" went from the next big thing to an anecdote over coffee. How sad!

    And now, surprise! We are presented with an experimental SDK. Because, of course, if something doesn't work, the most logical solution is to open it up to developers so they can do what the visionaries behind the project couldn't. It's like handing a chef a recipe book after he has burned down the kitchen. Good luck, developers! May the "AI Pin" become not just a collector's item but a piece of contemporary art.

    Meanwhile, people are asking: do we really need another device that does nothing? The tech era has taught us that sometimes less is more. Yet here we are, in an endless cycle of failed launches, where every new gadget arrives with a promise and ends up in the box of "what could have been." Maybe the "AI Pin" should have shipped with a "humor" mode so we could laugh about it while tucking it away in a drawer.

    In the end, the question remains: will this experimental SDK be the start of a new era of innovation, or just an elegant way of saying "sorry, it didn't work out"? Only time will tell, but in the meantime I can't help picturing developers sitting there, staring at the "AI Pin" as if it were a piece of modern art, wondering what they were thinking.

    So, friends, let's brace ourselves for the next big flop in the tech world. And while we wait, let's remember that sometimes the best we can do is simply laugh at what didn't go according to plan.

    #HumorTecnológico
    #AIpin
    #FracasosInnovadores
    #DesarrolloDeSoftware
    #TecnologíaSatírica
    Flopped Humane “AI Pin” Gets an Experimental SDK
    The Humane AI Pin was ambitious, expensive, and failed to captivate people between its launch and shutdown shortly after. While the units do contain some interesting elements like the embedded …
  • Seduced.ai review: can you really customize your fantasies with AI? June 2025. Honestly, it sounds like just another tech gimmick. Seduced.ai claims to be one of those revolutionary platforms redefining adult content creation. But does anyone even care?

    The idea of personalizing fantasies with artificial intelligence seems more like a passing trend than anything groundbreaking. Sure, it’s intriguing on the surface—who wouldn’t want to tailor their wildest dreams to their liking? But then again, does it really make a difference?

    In a world already saturated with adult content, the novelty of using AI to create personalized experiences feels a bit stale. I mean, at the end of the day, it’s still just content. The article discusses how Seduced.ai aims to engage users by offering customizable options. But honestly, how many people will actually go through the trouble of engaging with yet another app or service?

    Let’s be real. Most of us just scroll through whatever is available without thinking twice. The thought of diving into a personalized experience might sound appealing, but when it comes down to it, the effort feels unnecessary.

    Sure, technology is evolving, and Seduced.ai is trying to ride that wave. But for the average user, the excitement seems to fade quickly. The article on REALITE-VIRTUELLE.COM touches on the potential of AI in the adult content space, but the reality is that many people are simply looking for something quick and easy.

    Do we really need to complicate things with AI? Or can we just stick to the basics? Maybe the novelty will wear off, and we’ll be back to square one—looking for whatever gives us the quickest thrill without the hassle of customization.

    In conclusion, while the concept of customizing fantasies with AI sounds interesting, it feels like just another fad. The effort to engage might not be worth it for most of us. After all, who has the energy for all that?

    #SeducedAI #AdultContent #AIFantasy #ContentCreation #TechTrends
    REALITE-VIRTUELLE.COM
    Seduced.ai review: can you really customize your fantasies with AI? - June 2025
    Seduced.ai is among the revolutionary platforms redefining adult content creation […]
  • Ah, the charming saga of the Ꝃ barré, the forbidden letter of Brittany, which, if we're being honest, sounds more like a character from a fantasy novel than a linguistic relic. Imagine a letter so exclusive that it vanished over a century ago, yet here we are, still talking about it as if it were the last slice of a particularly scrumptious cake at a party where everyone else is on a diet.

    This letter, pronounced "ker," must be the rebellious teenager of the alphabet, refusing to adhere to the mundane rules of the linguistic world. Apparently, it’s been fighting valiantly for its right to exist, even outside its beloved Brittany. Talk about dedication! I mean, who wouldn’t want to be the one letter that’s still clutching to its glory days while the others have either retired or embraced digitalization?

    Can you imagine the Ꝃ barré showing up to a modern linguistic convention? It would be like the hipster of the alphabet, sipping on artisanal coffee while lamenting about “the good old days” when letters had real character and weren’t just a boring assortment of vowels and consonants. "Remember when I was the life of the party?" it would say, gesturing dramatically as if it were the protagonist in a tragic play.

    But let’s not forget the irony here. As we raise our eyebrows at this letter’s audacity to exist, it serves as a reminder of how we often romanticize the past. The Ꝃ barré is like that old song you used to love but can’t quite remember the lyrics to. You know it was great, but is it really worth reviving? Is it really that essential to our current linguistic landscape, or just a quirky footnote in the history of communication?

    And then there’s the whole notion of "interdiction." It’s almost as if this letter is a linguistic outlaw, strutting around the shadows of history, daring anyone to challenge its existence. What’s next? A “Free the Ꝃ barré” campaign? T-shirts, bumper stickers, maybe even a social media movement? Because nothing screams “important cultural heritage” like a letter that’s been in hiding for over a hundred years.

    So, let’s raise a toast to the Ꝃ barré! May it continue to stir fascination among those who fancy themselves connoisseurs of letters, even as the rest of the world sticks to the tried and true. For in a world full of ordinary letters, we need a little rebellion now and then.

    #LetterOfTheDay #LinguisticRevolution #BrittanyPride #HistoricalHeritage #AlphabetAntics
    The Ꝃ barré: Brittany's forbidden letter
    Though it vanished more than a century ago, the letter Ꝃ ("k barré"), pronounced "ker," continues to fascinate and fights on to exist, even outside Brittany. First published on Graphéine.
  • fxpodcast: the making of the immersive Apple Vision Pro film Bono: Stories of Surrender

    In this episode of the fxpodcast, we go behind the scenes with The-Artery, the New York-based creative studio that brought this ambitious vision to life. We speak with Founder and CCO Vico Sharabani, along with Elad Offer, the project’s Creative Director, about what it took to craft this unprecedented experience. From conceptual direction to VFX and design, The-Artery was responsible for the full production pipeline of the AVP edition.
    Bono’s memoir Surrender: 40 Songs, One Story has taken on new life—this time as a groundbreaking immersive cinematic experience tailored specifically for the Apple Vision Pro. Titled Bono: Stories of Surrender, the project transforms his personal journey of love, loss, and legacy into a first-of-its-kind Apple Immersive Video.

    The-Artery founder Vico Sharabani during post-production.
    This is far more than a stereo conversion of a traditional film. Designed natively for the Apple Vision Pro, Bono: Stories of Surrender places viewers directly on stage with Bono, surrounding them in a deeply intimate audiovisual journey. Shot and mastered at a staggering 14K by 7K resolution, in 180-degree stereoscopic video at 90 frames per second, the format pushes the limits of current storytelling, running at data rates nearly 50 times higher than conventional content. The immersive trailer itself diverges significantly from its traditional counterpart, using novel cinematic language, spatial cues, and temporal transitions unique to Apple’s new medium.
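    To get a feel for the "nearly 50 times higher" data-rate claim, here is a hedged back-of-the-envelope comparison in Python. The 14K-by-7K frame and 90 fps figures come from the text above; treating that resolution as a single frame and comparing against a 4K/30 baseline are assumptions made purely for illustration.

        # Rough pixel-throughput comparison (illustrative only).
        # 14K x 7K at 90 fps comes from the article; whether that resolution is per
        # eye or for the combined stereoscopic frame is not stated, so it is treated
        # here as a single frame. The 4K/30 reference point is an assumption.
        immersive_px_per_sec = 14_000 * 7_000 * 90        # ~8.8 billion pixels/s
        conventional_px_per_sec = 3_840 * 2_160 * 30      # ~0.25 billion pixels/s

        ratio = immersive_px_per_sec / conventional_px_per_sec
        print(f"Raw pixel throughput: roughly {ratio:.0f}x a 4K/30 stream")
        # Prints ~35x; counting both eye views plus codec settings, the ~50x
        # data-rate figure quoted in the article is the right order of magnitude.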

    This marks the first feature-length film available in Apple Immersive Video, and a powerful statement on Bono’s and U2’s continued embrace of innovation. Watch the video or listen to the audio podcast as we unpack the creative and technical challenges of building a film for a platform that didn’t exist just a year ago, and what it means for the future of immersive storytelling.
    #fxpodcast #making #immersive #apple #vision
    WWW.FXGUIDE.COM
    fxpodcast: the making of the immersive Apple Vision Pro film Bono: Stories of Surrender
  • Malicious PyPI Package Masquerades as Chimera Module to Steal AWS, CI/CD, and macOS Data

    Jun 16, 2025Ravie LakshmananMalware / DevOps

    Cybersecurity researchers have discovered a malicious package on the Python Package Index (PyPI) repository that's capable of harvesting sensitive developer-related information, such as credentials, configuration data, and environment variables, among others.
    The package, named chimera-sandbox-extensions, attracted 143 downloads and likely targets users of a service called Chimera Sandbox, which was released by Singaporean tech company Grab last August to facilitate "experimentation and development of [machine learning] solutions."
    The package masquerades as a helper module for Chimera Sandbox, but "aims to steal credentials and other sensitive information such as Jamf configuration, CI/CD environment variables, AWS tokens, and more," JFrog security researcher Guy Korolevski said in a report published last week.
    Once installed, it attempts to connect to an external domain whose domain name is generated using a domain generation algorithm (DGA) in order to download and execute a next-stage payload.
    Specifically, the malware acquires from the domain an authentication token, which is then used to send a request to the same domain and retrieve the Python-based information stealer.
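    For readers unfamiliar with the term, a domain generation algorithm derives a rolling list of pseudo-random domains from a shared seed, so malware and its operator can rendezvous without hard-coding a single, easily blocked address. The Python sketch below is a generic textbook illustration only; the report does not disclose the actual algorithm used by chimera-sandbox-extensions, and the seed and ".example" TLD here are placeholders.

        import hashlib
        from datetime import date

        def candidate_domains(seed: str, day: date, count: int = 3) -> list[str]:
            """Generic DGA illustration: hash a shared seed plus the date into domain labels."""
            domains = []
            for i in range(count):
                digest = hashlib.sha256(f"{seed}-{day.isoformat()}-{i}".encode()).hexdigest()
                domains.append(digest[:12] + ".example")  # placeholder TLD
            return domains

        # Defenders who recover the seed and algorithm can pre-compute the same list
        # and block or sinkhole the domains ahead of time.
        print(candidate_domains("demo-seed", date.today()))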

    The stealer malware is equipped to siphon a wide range of data from infected machines. This includes -

    JAMF receipts, which are records of software packages installed by Jamf Pro on managed computers
    Pod sandbox environment authentication tokens and git information
    CI/CD information from environment variables
    Zscaler host configuration
    Amazon Web Services account information and tokens
    Public IP address
    General platform, user, and host information

    The kind of data gathered by the malware shows that it's mainly geared towards corporate and cloud infrastructure. In addition, the extraction of JAMF receipts indicates that it's also capable of targeting Apple macOS systems.
    The collected information is sent via a POST request back to the same domain, after which the server assesses if the machine is a worthy target for further exploitation. However, JFrog said it was unable to obtain the payload at the time of analysis.
    "The targeted approach employed by this malware, along with the complexity of its multi-stage targeted payload, distinguishes it from the more generic open-source malware threats we have encountered thus far, highlighting the advancements that malicious packages have made recently," Jonathan Sar Shalom, director of threat research at JFrog Security Research team, said.

    "This new sophistication of malware underscores why development teams remain vigilant with updates—alongside proactive security research – to defend against emerging threats and maintain software integrity."
    The disclosure comes as SafeDep and Veracode detailed a number of malware-laced npm packages that are designed to execute remote code and download additional payloads. The packages in question are listed below -

    eslint-config-airbnb-compat (676 downloads)
    ts-runtime-compat-check (1,588 downloads)
    solders (983 downloads)
    @mediawave/lib (386 downloads)

    All the identified npm packages have since been taken down from npm, but not before they were downloaded hundreds of times from the package registry.
    SafeDep's analysis of eslint-config-airbnb-compat found that the JavaScript library has ts-runtime-compat-check listed as a dependency, which, in turn, contacts an external server defined in the former package ("proxy.eslint-proxy[.]site") to retrieve and execute a Base64-encoded string. The exact nature of the payload is unknown.
    "It implements a multi-stage remote code execution attack using a transitive dependency to hide the malicious code," SafeDep researcher Kunal Singh said.
    Solders, on the other hand, has been found to incorporate a post-install script in its package.json, causing the malicious code to be automatically executed as soon as the package is installed.
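    npm runs lifecycle scripts such as preinstall, install, and postinstall automatically during npm install, which is exactly what makes this technique work. As a small defensive illustration (a hypothetical audit helper, not code from the solders package), the following Python sketch flags a package manifest that declares install-time hooks so it can be reviewed before installation:

        import json
        import sys
        from pathlib import Path

        # Lifecycle hooks that npm executes automatically during installation.
        INSTALL_HOOKS = {"preinstall", "install", "postinstall"}

        def flag_install_scripts(package_json: Path) -> None:
            manifest = json.loads(package_json.read_text(encoding="utf-8"))
            scripts = manifest.get("scripts", {})
            suspicious = {name: cmd for name, cmd in scripts.items() if name in INSTALL_HOOKS}
            if suspicious:
                print(f"{package_json}: declares install-time scripts: {suspicious}")
            else:
                print(f"{package_json}: no install-time scripts declared")

        if __name__ == "__main__":
            flag_install_scripts(Path(sys.argv[1]))  # e.g. node_modules/<pkg>/package.json

    Plenty of legitimate packages use postinstall for native builds, so a hit is a prompt for manual review rather than proof of compromise.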
    "At first glance, it's hard to believe that this is actually valid JavaScript," the Veracode Threat Research team said. "It looks like a seemingly random collection of Japanese symbols. It turns out that this particular obfuscation scheme uses the Unicode characters as variable names and a sophisticated chain of dynamic code generation to work."
    Decoding the script reveals an extra layer of obfuscation, unpacking which reveals its main function: Check if the compromised machine is Windows, and if so, run a PowerShell command to retrieve a next-stage payload from a remote server.
    This second-stage PowerShell script, also obscured, is designed to fetch a Windows batch script from another domain ("cdn.audiowave[.]org") and configures a Windows Defender Antivirus exclusion list to avoid detection. The batch script then paves the way for the execution of a .NET DLL that reaches out to a PNG image hosted on ImgBB ("i.ibb[.]co").
    "[The DLL] is grabbing the last two pixels from this image and then looping through some data contained elsewhere in it," Veracode said. "It ultimately builds up in memory YET ANOTHER .NET DLL."

    Furthermore, the DLL is equipped to create task scheduler entries and features the ability to bypass user account control (UAC) using a combination of FodHelper.exe and programmatic identifiers (ProgIDs) to evade defenses and avoid triggering any security alerts to the user.
    The newly-downloaded DLL is Pulsar RAT, a "free, open-source Remote Administration Tool for Windows" and a variant of the Quasar RAT.
    "From a wall of Japanese characters to a RAT hidden within the pixels of a PNG file, the attacker went to extraordinary lengths to conceal their payload, nesting it a dozen layers deep to evade detection," Veracode said. "While the attacker's ultimate objective for deploying the Pulsar RAT remains unclear, the sheer complexity of this delivery mechanism is a powerful indicator of malicious intent."
    Crypto Malware in the Open-Source Supply Chain
    The findings also coincide with a report from Socket that identified credential stealers, cryptocurrency drainers, cryptojackers, and clippers as the main types of threats targeting the cryptocurrency and blockchain development ecosystem.

    Some of the examples of these packages include -

    express-dompurify and pumptoolforvolumeandcomment, which are capable of harvesting browser credentials and cryptocurrency wallet keys
    bs58js, which drains a victim's wallet and uses multi-hop transfers to obscure theft and frustrate forensic tracing.
    lsjglsjdv, asyncaiosignal, and raydium-sdk-liquidity-init, which function as a clipper, monitoring the system clipboard for cryptocurrency wallet strings and replacing them with threat actor‑controlled addresses to reroute transactions to the attackers (see the sketch after this list)
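    Clippers work by pattern-matching clipboard contents against the distinctive shapes of wallet addresses and swapping in the attacker's own. The same matching logic is useful defensively, for example to warn a user that a pasted string looks like a wallet address and should be double-checked. The sketch below uses deliberately simplified patterns (Bitcoin- and Ethereum-style only, no checksum validation) and is an assumed illustration, not code from the packages named above.

        import re

        # Simplified address shapes for illustration; real validators also verify
        # checksums and cover many more address formats.
        WALLET_PATTERNS = {
            "ethereum": re.compile(r"\b0x[a-fA-F0-9]{40}\b"),
            "bitcoin_legacy": re.compile(r"\b[13][a-km-zA-HJ-NP-Z1-9]{25,34}\b"),
            "bitcoin_bech32": re.compile(r"\bbc1[a-z0-9]{25,59}\b"),
        }

        def wallet_like_strings(clipboard_text: str) -> dict[str, list[str]]:
            """Return clipboard substrings that look like cryptocurrency addresses."""
            hits = {}
            for name, pattern in WALLET_PATTERNS.items():
                matches = pattern.findall(clipboard_text)
                if matches:
                    hits[name] = matches
            return hits

        print(wallet_like_strings("pay 0x" + "ab" * 20 + " before noon"))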

    "As Web3 development converges with mainstream software engineering, the attack surface for blockchain-focused projects is expanding in both scale and complexity," Socket security researcher Kirill Boychenko said.
    "Financially motivated threat actors and state-sponsored groups are rapidly evolving their tactics to exploit systemic weaknesses in the software supply chain. These campaigns are iterative, persistent, and increasingly tailored to high-value targets."
    AI and Slopsquatting
    The rise of artificial intelligence (AI)-assisted coding, also called vibe coding, has unleashed another novel threat in the form of slopsquatting, where large language models (LLMs) can hallucinate non-existent but plausible package names that bad actors can weaponize to conduct supply chain attacks.
    Trend Micro, in a report last week, said it observed an unnamed advanced agent "confidently" cooking up a phantom Python package named starlette-reverse-proxy, only for the build process to crash with the error "module not found." However, should an adversary upload a package with the same name on the repository, it can have serious security consequences.
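    A practical complement to the defenses discussed here is to verify every dependency an agent proposes against the index before installing it. The sketch below queries PyPI's public JSON endpoint (https://pypi.org/pypi/<name>/json, which returns 404 for unknown names); the package list is illustrative, and a hit is not proof of safety, since a squatter may already have registered a hallucinated name.

        import json
        import urllib.error
        import urllib.request

        def pypi_report(package: str) -> str:
            """Look a proposed dependency up on PyPI's public JSON API."""
            url = f"https://pypi.org/pypi/{package}/json"
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    data = json.load(resp)
            except urllib.error.HTTPError:
                return f"{package}: not on PyPI (possible hallucination or typo)"
            # Existence alone is weak evidence, so also surface how much history
            # the project has before trusting it.
            releases = data.get("releases", {})
            return f"{package}: exists, {len(releases)} release(s), latest {data['info']['version']}"

        # Names proposed by a coding agent (illustrative list).
        for name in ("requests", "starlette-reverse-proxy"):
            print(pypi_report(name))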

    Furthermore, the cybersecurity company noted that advanced coding agents and workflows such as Claude Code CLI, OpenAI Codex CLI, and Cursor AI with Model Context Protocol (MCP)-backed validation can help reduce, but not completely eliminate, the risk of slopsquatting.
    "When agents hallucinate dependencies or install unverified packages, they create an opportunity for slopsquatting attacks, in which malicious actors pre-register those same hallucinated names on public registries," security researcher Sean Park said.
    "While reasoning-enhanced agents can reduce the rate of phantom suggestions by approximately half, they do not eliminate them entirely. Even the vibe-coding workflow augmented with live MCP validations achieves the lowest rates of slip-through, but still misses edge cases."

    #malicious #pypi #package #masquerades #chimera
    THEHACKERNEWS.COM
    Malicious PyPI Package Masquerades as Chimera Module to Steal AWS, CI/CD, and macOS Data
    Jun 16, 2025Ravie LakshmananMalware / DevOps Cybersecurity researchers have discovered a malicious package on the Python Package Index (PyPI) repository that's capable of harvesting sensitive developer-related information, such as credentials, configuration data, and environment variables, among others. The package, named chimera-sandbox-extensions, attracted 143 downloads and likely targets users of a service called Chimera Sandbox, which was released by Singaporean tech company Grab last August to facilitate "experimentation and development of [machine learning] solutions." The package masquerades as a helper module for Chimera Sandbox, but "aims to steal credentials and other sensitive information such as Jamf configuration, CI/CD environment variables, AWS tokens, and more," JFrog security researcher Guy Korolevski said in a report published last week. Once installed, it attempts to connect to an external domain whose domain name is generated using a domain generation algorithm (DGA) in order to download and execute a next-stage payload. Specifically, the malware acquires from the domain an authentication token, which is then used to send a request to the same domain and retrieve the Python-based information stealer. The stealer malware is equipped to siphon a wide range of data from infected machines. This includes - JAMF receipts, which are records of software packages installed by Jamf Pro on managed computers Pod sandbox environment authentication tokens and git information CI/CD information from environment variables Zscaler host configuration Amazon Web Services account information and tokens Public IP address General platform, user, and host information The kind of data gathered by the malware shows that it's mainly geared towards corporate and cloud infrastructure. In addition, the extraction of JAMF receipts indicates that it's also capable of targeting Apple macOS systems. The collected information is sent via a POST request back to the same domain, after which the server assesses if the machine is a worthy target for further exploitation. However, JFrog said it was unable to obtain the payload at the time of analysis. "The targeted approach employed by this malware, along with the complexity of its multi-stage targeted payload, distinguishes it from the more generic open-source malware threats we have encountered thus far, highlighting the advancements that malicious packages have made recently," Jonathan Sar Shalom, director of threat research at JFrog Security Research team, said. "This new sophistication of malware underscores why development teams remain vigilant with updates—alongside proactive security research – to defend against emerging threats and maintain software integrity." The disclosure comes as SafeDep and Veracode detailed a number of malware-laced npm packages that are designed to execute remote code and download additional payloads. The packages in question are listed below - eslint-config-airbnb-compat (676 Downloads) ts-runtime-compat-check (1,588 Downloads) solders (983 Downloads) @mediawave/lib (386 Downloads) All the identified npm packages have since been taken down from npm, but not before they were downloaded hundreds of times from the package registry. SafeDep's analysis of eslint-config-airbnb-compat found that the JavaScript library has ts-runtime-compat-check listed as a dependency, which, in turn, contacts an external server defined in the former package ("proxy.eslint-proxy[.]site") to retrieve and execute a Base64-encoded string. 
The exact nature of the payload is unknown. "It implements a multi-stage remote code execution attack using a transitive dependency to hide the malicious code," SafeDep researcher Kunal Singh said. Solders, on the other hand, has been found to incorporate a post-install script in its package.json, causing the malicious code to be automatically executed as soon as the package is installed. "At first glance, it's hard to believe that this is actually valid JavaScript," the Veracode Threat Research team said. "It looks like a seemingly random collection of Japanese symbols. It turns out that this particular obfuscation scheme uses the Unicode characters as variable names and a sophisticated chain of dynamic code generation to work." Decoding the script reveals an extra layer of obfuscation, unpacking which reveals its main function: Check if the compromised machine is Windows, and if so, run a PowerShell command to retrieve a next-stage payload from a remote server ("firewall[.]tel"). This second-stage PowerShell script, also obscured, is designed to fetch a Windows batch script from another domain ("cdn.audiowave[.]org") and configures a Windows Defender Antivirus exclusion list to avoid detection. The batch script then paves the way for the execution of a .NET DLL that reaches out to a PNG image hosted on ImgBB ("i.ibb[.]co"). "[The DLL] is grabbing the last two pixels from this image and then looping through some data contained elsewhere in it," Veracode said. "It ultimately builds up in memory YET ANOTHER .NET DLL." Furthermore, the DLL is equipped to create task scheduler entries and features the ability to bypass user account control (UAC) using a combination of FodHelper.exe and programmatic identifiers (ProgIDs) to evade defenses and avoid triggering any security alerts to the user. The newly-downloaded DLL is Pulsar RAT, a "free, open-source Remote Administration Tool for Windows" and a variant of the Quasar RAT. "From a wall of Japanese characters to a RAT hidden within the pixels of a PNG file, the attacker went to extraordinary lengths to conceal their payload, nesting it a dozen layers deep to evade detection," Veracode said. "While the attacker's ultimate objective for deploying the Pulsar RAT remains unclear, the sheer complexity of this delivery mechanism is a powerful indicator of malicious intent." Crypto Malware in the Open-Source Supply Chain The findings also coincide with a report from Socket that identified credential stealers, cryptocurrency drainers, cryptojackers, and clippers as the main types of threats targeting the cryptocurrency and blockchain development ecosystem. Some of the examples of these packages include - express-dompurify and pumptoolforvolumeandcomment, which are capable of harvesting browser credentials and cryptocurrency wallet keys bs58js, which drains a victim's wallet and uses multi-hop transfers to obscure theft and frustrate forensic tracing. lsjglsjdv, asyncaiosignal, and raydium-sdk-liquidity-init, which functions as a clipper to monitor the system clipboard for cryptocurrency wallet strings and replace them with threat actor‑controlled addresses to reroute transactions to the attackers "As Web3 development converges with mainstream software engineering, the attack surface for blockchain-focused projects is expanding in both scale and complexity," Socket security researcher Kirill Boychenko said. 
"Financially motivated threat actors and state-sponsored groups are rapidly evolving their tactics to exploit systemic weaknesses in the software supply chain. These campaigns are iterative, persistent, and increasingly tailored to high-value targets." AI and Slopsquatting The rise of artificial intelligence (AI)-assisted coding, also called vibe coding, has unleashed another novel threat in the form of slopsquatting, where large language models (LLMs) can hallucinate non-existent but plausible package names that bad actors can weaponize to conduct supply chain attacks. Trend Micro, in a report last week, said it observed an unnamed advanced agent "confidently" cooking up a phantom Python package named starlette-reverse-proxy, only for the build process to crash with the error "module not found." However, should an adversary upload a package with the same name on the repository, it can have serious security consequences. Furthermore, the cybersecurity company noted that advanced coding agents and workflows such as Claude Code CLI, OpenAI Codex CLI, and Cursor AI with Model Context Protocol (MCP)-backed validation can help reduce, but not completely eliminate, the risk of slopsquatting. "When agents hallucinate dependencies or install unverified packages, they create an opportunity for slopsquatting attacks, in which malicious actors pre-register those same hallucinated names on public registries," security researcher Sean Park said. "While reasoning-enhanced agents can reduce the rate of phantom suggestions by approximately half, they do not eliminate them entirely. Even the vibe-coding workflow augmented with live MCP validations achieves the lowest rates of slip-through, but still misses edge cases." Found this article interesting? Follow us on Twitter  and LinkedIn to read more exclusive content we post. SHARE    
  • Stanford Doctors Invent Device That Appears to Be Able to Save Tons of Stroke Patients Before They Die

    Image by Andrew Brodhead
    Researchers have developed a novel device that literally spins away the clots that block blood flow to the brain and cause strokes. As Stanford explains in a blurb, the novel milli-spinner device may be able to save the lives of patients who experience "ischemic stroke" from brain stem clotting.
    Traditional clot removal, a process known as thrombectomy, generally uses a catheter that either vacuums up the blood blockage or uses a wire mesh to ensnare it — a procedure that's as rough and imprecise as it sounds. Conventional thrombectomy has a very low efficacy rate because of this imprecision, and the procedure can result in pieces of the clot breaking off and moving to more difficult-to-reach regions. Thrombectomy via milli-spinner also enters the brain with a catheter, but instead of using a normal vacuum device, it employs a spinning tube outfitted with fins and slits that can suck up the clot much more meticulously.
    Stanford neuroimaging expert Jeremy Heit, who also coauthored a new paper about the device in the journal Nature, explained in the school's press release that the efficacy of the milli-spinner is "unbelievable."
    "For most cases, we’re more than doubling the efficacy of current technology, and for the toughest clots — which we’re only removing about 11 percent of the time with current devices — we’re getting the artery open on the first try 90 percent of the time," Heit said. "This is a sea-change technology that will drastically improve our ability to help people."
    Renee Zhao, the senior author of the Nature paper who teaches mechanical engineering at Stanford and creates what she calls "millirobots," said that conventional thrombectomies just aren't cutting it. "With existing technology, there’s no way to reduce the size of the clot," Zhao said. "They rely on deforming and rupturing the clot to remove it."
    "What’s unique about the milli-spinner is that it applies compression and shear forces to shrink the entire clot," she continued, "dramatically reducing the volume without causing rupture."
    Indeed, as the team discovered, the device can compress and vacuum a clot down to as little as five percent of its original size. "It works so well, for a wide range of clot compositions and sizes," Zhao said. "Even for tough... clots, which are impossible to treat with current technologies, our milli-spinner can treat them using this simple yet powerful mechanics concept to densify the fibrin network and shrink the clot."
    Though its main experimental use case is brain clot removal, Zhao is excited about its other uses, too. "We’re exploring other biomedical applications for the milli-spinner design, and even possibilities beyond medicine," the engineer said. "There are some very exciting opportunities ahead."
    #stanford #doctors #invent #device #that
    FUTURISM.COM
    Stanford Doctors Invent Device That Appears to Be Able to Save Tons of Stroke Patients Before They Die
  • EPFL Researchers Unveil FG2 at CVPR: A New AI Model That Slashes Localization Errors by 28% for Autonomous Vehicles in GPS-Denied Environments

    Navigating the dense urban canyons of cities like San Francisco or New York can be a nightmare for GPS systems. The towering skyscrapers block and reflect satellite signals, leading to location errors of tens of meters. For you and me, that might mean a missed turn. But for an autonomous vehicle or a delivery robot, that level of imprecision is the difference between a successful mission and a costly failure. These machines require pinpoint accuracy to operate safely and efficiently. Addressing this critical challenge, researchers from the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland have introduced a groundbreaking new method for visual localization, presented at CVPR 2025.
    Their new paper, “FG2: Fine-Grained Cross-View Localization by Fine-Grained Feature Matching,” presents a novel AI model that significantly enhances the ability of a ground-level system, like an autonomous car, to determine its exact position and orientation using only a camera and a corresponding aerial (or satellite) image. The new approach has demonstrated a remarkable 28% reduction in mean localization error compared to the previous state-of-the-art on a challenging public dataset.
    Key Takeaways:

    Superior Accuracy: The FG2 model reduces the average localization error by a significant 28% on the VIGOR cross-area test set, a challenging benchmark for this task.
    Human-like Intuition: Instead of relying on abstract descriptors, the model mimics human reasoning by matching fine-grained, semantically consistent features—like curbs, crosswalks, and buildings—between a ground-level photo and an aerial map.
    Enhanced Interpretability: The method allows researchers to “see” what the AI is “thinking” by visualizing exactly which features in the ground and aerial images are being matched, a major step forward from previous “black box” models.
    Weakly Supervised Learning: Remarkably, the model learns these complex and consistent feature matches without any direct labels for correspondences. It achieves this using only the final camera pose as a supervisory signal.
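
    The weak-supervision point is worth making concrete: the network never sees labeled point correspondences, only the final camera pose, so the pose error has to serve as the sole training signal and gradients must flow back through a differentiable matching and alignment stage. Below is a minimal, hypothetical PyTorch-style sketch of how such a loss can be wired up; the module names (`ground_encoder`, `matcher`, `soft_pose_solver`) and tensor shapes are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of weak supervision: only the ground-truth 3-DoF pose
# (x, y, yaw) supervises training; point correspondences are never labeled.
import torch
import torch.nn as nn


class WeaklySupervisedLocalizer(nn.Module):
    def __init__(self, ground_encoder: nn.Module, aerial_encoder: nn.Module,
                 matcher: nn.Module, soft_pose_solver: nn.Module):
        super().__init__()
        self.ground_encoder = ground_encoder      # ground image -> BEV point features
        self.aerial_encoder = aerial_encoder      # aerial image -> map point features
        self.matcher = matcher                    # differentiable feature matching
        self.soft_pose_solver = soft_pose_solver  # differentiable (soft) alignment

    def forward(self, ground_img, aerial_img):
        g_feats = self.ground_encoder(ground_img)   # (B, N, C)
        a_feats = self.aerial_encoder(aerial_img)   # (B, M, C)
        matches = self.matcher(g_feats, a_feats)    # soft correspondence scores
        return self.soft_pose_solver(matches)       # (B, 3): x, y, yaw


def pose_loss(pred_pose, gt_pose, yaw_weight=1.0):
    """Supervise translation and heading only; gradients still reach the matcher."""
    trans_err = torch.norm(pred_pose[:, :2] - gt_pose[:, :2], dim=-1)
    yaw_diff = pred_pose[:, 2] - gt_pose[:, 2]
    yaw_err = torch.abs(torch.atan2(torch.sin(yaw_diff), torch.cos(yaw_diff)))
    return (trans_err + yaw_weight * yaw_err).mean()
```

    Because every stage stays differentiable, minimizing a pose loss of this kind is enough to push the matcher toward semantically consistent correspondences, which matches what the authors report observing, without any correspondence labels.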

    Challenge: Seeing the World from Two Different Angles
    The core problem of cross-view localization is the dramatic difference in perspective between a street-level camera and an overhead satellite view. A building facade seen from the ground looks completely different from its rooftop signature in an aerial image. Existing methods have struggled with this. Some create a general “descriptor” for the entire scene, but this is an abstract approach that doesn’t mirror how humans naturally localize themselves by spotting specific landmarks. Other methods transform the ground image into a Bird’s-Eye-View (BEV) but are often limited to the ground plane, ignoring crucial vertical structures like buildings.

    FG2: Matching Fine-Grained Features
    The EPFL team’s FG2 method introduces a more intuitive and effective process. It aligns two sets of points: one generated from the ground-level image and another sampled from the aerial map.

    Here’s a breakdown of their innovative pipeline:

    Mapping to 3D: The process begins by taking the features from the ground-level image and lifting them into a 3D point cloud centered around the camera. This creates a 3D representation of the immediate environment.
    Smart Pooling to BEV: This is where the magic happens. Instead of simply flattening the 3D data, the model learns to intelligently select the most important features along the vertical (height) dimension for each point. It essentially asks, “For this spot on the map, is the ground-level road marking more important, or is the edge of that building’s roof the better landmark?” This selection process is crucial, as it allows the model to correctly associate features like building facades with their corresponding rooftops in the aerial view.
    Feature Matching and Pose Estimation: Once both the ground and aerial views are represented as 2D point planes with rich feature descriptors, the model computes the similarity between them. It then samples a sparse set of the most confident matches and uses a classic geometric algorithm called Procrustes alignment to calculate the precise 3-DoF (x, y, and yaw) pose.
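
    To make the last two steps more concrete, here is a short, self-contained NumPy sketch of (a) softmax-weighted pooling along the height axis, (b) cosine-similarity matching of BEV descriptors, and (c) the closed-form 2D Procrustes/Kabsch alignment that turns matched points into a 3-DoF pose. The function names, shapes, and the specific pooling rule are illustrative assumptions rather than the paper's exact formulation.

```python
# Illustrative sketch (not the authors' code): weighted vertical pooling,
# descriptor matching, and 2D Procrustes alignment for a 3-DoF (x, y, yaw) pose.
import numpy as np


def pool_along_height(feats_3d, scores):
    """feats_3d: (N, H, C) features per BEV cell and height bin;
    scores: (N, H) learned importance logits. Softmax-weighted sum over height."""
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return (w[..., None] * feats_3d).sum(axis=1)           # (N, C) BEV descriptors


def match_points(ground_desc, aerial_desc, top_k=128):
    """Cosine similarity between descriptor sets; keep the most confident pairs."""
    g = ground_desc / np.linalg.norm(ground_desc, axis=1, keepdims=True)
    a = aerial_desc / np.linalg.norm(aerial_desc, axis=1, keepdims=True)
    sim = g @ a.T                                          # (N, M) similarity matrix
    aerial_idx = sim.argmax(axis=1)                        # best aerial match per ground point
    conf = sim.max(axis=1)
    keep = np.argsort(-conf)[:top_k]                       # sparse set of confident matches
    return keep, aerial_idx[keep], conf[keep]


def procrustes_2d(src, dst, weights=None):
    """Weighted 2D Procrustes/Kabsch: find rotation R (yaw) and translation t with dst ~ R @ src + t."""
    if weights is None:
        weights = np.ones(len(src))
    w = weights / weights.sum()
    mu_src = (w[:, None] * src).sum(axis=0)
    mu_dst = (w[:, None] * dst).sum(axis=0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    H = (w[:, None] * src_c).T @ dst_c                     # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))                 # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_dst - R @ mu_src
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return R, t, yaw
```

    Feeding the matched ground-view and aerial-view BEV coordinates (weighted by their match confidences) into `procrustes_2d` yields the x, y, and yaw estimate directly; because the alignment is closed-form, this final step adds essentially no inference cost.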

    Unprecedented Performance and Interpretability
    The results speak for themselves. On the challenging VIGOR dataset, which includes images from different cities in its cross-area test, FG2 reduced the mean localization error by 28% compared to the previous best method. It also demonstrated superior generalization capabilities on the KITTI dataset, a staple in autonomous driving research.

    Perhaps more importantly, the FG2 model offers a new level of transparency. By visualizing the matched points, the researchers showed that the model learns semantically consistent correspondences without being explicitly told to. For example, the system correctly matches zebra crossings, road markings, and even building facades in the ground view to their corresponding locations on the aerial map. This interpretability is extremely valuable for building trust in safety-critical autonomous systems.
    “A Clearer Path” for Autonomous Navigation
    The FG2 method represents a significant leap forward in fine-grained visual localization. By developing a model that intelligently selects and matches features in a way that mirrors human intuition, the EPFL researchers have not only shattered previous accuracy records but also made the decision-making process of the AI more interpretable. This work paves the way for more robust and reliable navigation systems for autonomous vehicles, drones, and robots, bringing us one step closer to a future where machines can confidently navigate our world, even when GPS fails them.

    Check out the Paper. All credit for this research goes to the researchers of this project.
    Jean-marc Mommessin is a successful AI business executive. He leads and accelerates growth for AI-powered solutions and started a computer vision company in 2006. He is a recognized speaker at AI conferences and has an MBA from Stanford.
    WWW.MARKTECHPOST.COM