• It's astounding how many people still cling to outdated notions when it comes to the choice between hardware and software for electronics projects. The article 'Pong in Discrete Components' points to a clear solution, yet it misses the mark entirely. Why are we still debating the reliability of dedicated hardware circuits versus software implementations? Are we really that complacent?

    Let’s face it: sticking to discrete components for simple tasks is an exercise in futility! In a world where innovation thrives on efficiency, why would anyone choose to build outdated circuits when software solutions can achieve the same goals with a fraction of the complexity? It’s mind-boggling! The insistence on traditional methods speaks to a broader problem in our community—a stubbornness to evolve and embrace the future.
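    To put that "fraction of the complexity" claim on concrete footing, here is a minimal, hedged sketch of the heart of a software Pong — the ball update — in Python. The field size, paddle representation, and loop structure are illustrative assumptions, not anything taken from the article or its discrete-component design.

    ```python
    # Minimal sketch of Pong's core ball physics, assuming a generic
    # fixed-timestep game loop (no particular engine or library).
    WIDTH, HEIGHT = 640, 480

    def step(ball, paddles):
        """Advance the ball one tick. ball: dict with x, y, vx, vy."""
        ball["x"] += ball["vx"]
        ball["y"] += ball["vy"]
        if ball["y"] <= 0 or ball["y"] >= HEIGHT:   # bounce off top/bottom walls
            ball["vy"] = -ball["vy"]
        for p in paddles:                           # reflect off a paddle's rectangle
            if p["x0"] <= ball["x"] <= p["x1"] and p["y0"] <= ball["y"] <= p["y1"]:
                ball["vx"] = -ball["vx"]
        return ball

    # Example: one tick with a ball heading toward the top wall.
    ball = {"x": 320.0, "y": 1.0, "vx": 3.0, "vy": -2.0}
    step(ball, paddles=[])
    print(ball)  # vy has flipped sign after crossing the top boundary
    ```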

    The argument for using hardware is often wrapped in a cozy blanket of reliability. But let’s be honest, how reliable is that? Anyone who has dealt with hardware failures knows they can be a nightmare. Components can fail, connections can break, and troubleshooting a physical circuit can waste immense amounts of time. Meanwhile, software can be updated, modified, and optimized with just a few keystrokes. Why are we so quick to glorify something that is inherently flawed?

    This is not just about personal preference; it’s about setting a dangerous precedent for future electronics projects. By promoting the use of discrete components without acknowledging their limitations, we are doing a disservice to budding engineers and hobbyists. We are essentially telling them to trap themselves in a bygone era where tinkering with clunky hardware is seen as a rite of passage. It’s ridiculous!

    Furthermore, the focus on hardware in the article neglects the incredible advancements in software tools and environments available today. Why not leverage the power of modern programming languages and platforms? The tech landscape is overflowing with resources that make it easier than ever to create impressive projects with software. Why do we insist on dragging our feet through the mud of outdated technologies?

    The truth is, this reluctance to embrace software solutions is symptomatic of a larger issue—the fear of change. Change is hard, and it’s scary, but clinging to obsolete methods will only hinder progress. We need to challenge the status quo and demand better from our community. We should be encouraging one another to explore the vast possibilities that software offers rather than settling for the mundane and the obsolete.

    Let’s stop romanticizing the past and start looking forward. The world of electronics is rapidly evolving, and it’s time we caught up. Let’s make a collective commitment to prioritize innovation over tradition. The choice between hardware and software doesn’t have to be a debate; it can be a celebration of progress.

    #InnovationInElectronics
    #SoftwareOverHardware
    #ProgressNotTradition
    #EmbraceTheFuture
    #PongInDiscreteComponents
    HACKADAY.COM
    Pong in Discrete Components
    The choice between hardware and software for electronics projects is generally a straightforward one. For simple tasks we might build dedicated hardware circuits out of discrete components for reliability and …
  • In a world where we’re all desperately trying to make our digital creations look as lifelike as a potato, we now have the privilege of diving headfirst into the revolutionary topic of "Separate shaders in AI 3D generated models." Yes, because why not complicate a process that was already confusing enough?

    Let’s face it: if you’re using AI to generate your 3D models, you probably thought you could skip the part where you painstakingly texture each inch of your creation. But alas! Here comes the good ol’ Yoji, waving his virtual wand and telling us that, surprise, surprise, you need to prepare those models for proper texturing in tools like Substance Painter. Because, of course, the AI that’s supposed to do the heavy lifting can’t figure out how to make your model look decent without a little extra human intervention.
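    Yoji's actual steps aren't reproduced here, but as a rough illustration of what "separating shaders" means in practice, the hedged sketch below splits a generated mesh into one object per material using Blender's Python API — an assumed tool choice, since the prep for Substance Painter typically happens in a DCC application first. Each material then maps to its own texture set once the re-exported model is opened in Substance Painter.

    ```python
    # Hedged sketch, assuming Blender's Python API (bpy) as the prep tool;
    # this is NOT Yoji's exact workflow, just one common way to split a
    # generated mesh by material before texturing in Substance Painter.
    import bpy

    obj = bpy.context.active_object           # the imported AI-generated mesh
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')
    bpy.ops.mesh.separate(type='MATERIAL')    # one object per material/shader slot
    bpy.ops.object.mode_set(mode='OBJECT')
    ```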

    But don’t worry! Yoji has got your back with his meticulous “how-to” on separating shaders. Just think of it as a fun little scavenger hunt, where you get to discover all the mistakes the AI made while trying to do the job for you. Who knew that a model could look so… special? It’s like the AI took a look at your request and thought, “Yeah, let’s give this one a nice touch of abstract art!” Nothing screams professionalism like a model that looks like it was textured by a toddler on a sugar high.

    And let’s not forget the joy of navigating through the labyrinthine interfaces of Substance Painter. Ah, yes! The thrill of clicking through endless menus, desperately searching for that elusive shader that will somehow make your model look less like a lumpy marshmallow and more like a refined piece of art. It’s a bit like being in a relationship, really. You start with high hopes and a glossy exterior, only to end up questioning all your life choices as you try to figure out how to make it work.

    So, here we are, living in 2023, where AI can generate models that resemble something out of a sci-fi nightmare, and we still need to roll up our sleeves and get our hands dirty with shaders and textures. Who knew that the future would come with so many manual adjustments? Isn’t technology just delightful?

    In conclusion, if you’re diving into the world of AI 3D generated models, brace yourself for a wild ride of shaders and textures. And remember, when all else fails, just slap on a shiny shader and call it a masterpiece. After all, art is subjective, right?

    #3DModels #AIGenerated #SubstancePainter #Shaders #DigitalArt
    Separate shaders in AI 3d generated models
    Yoji shows how to prepare generated models for proper texturing in tools like Substance Painter.
  • What a world we live in when scientists finally unlock the secrets to the axolotls' ability to regenerate limbs, only to reveal that the key lies not in some miraculous regrowth molecule, but in its controlled destruction! Seriously, what kind of twisted logic is this? Are we supposed to celebrate the fact that the secret to regeneration is, in fact, about knowing when to destroy something instead of nurturing and encouraging growth? This revelation is not just baffling; it's downright infuriating!

    In an age where regenerative medicine holds the promise of healing wounds and restoring functionality, we are faced with the shocking realization that the science is not about building up, but rather about tearing down. Why would we ever want to focus on the destruction of growth molecules instead of creating an environment where regeneration can bloom unimpeded? Where is the inspiration in that? It feels like a slap in the face to anyone who believes in the potential of science to improve lives!

    Moreover, can we talk about the implications of this discovery? If the key to regeneration involves a meticulous dance of destruction, what does that say about our approach to medical advancements? Are we really expected to just stand by and accept that we must embrace an idea that says, "let's get rid of the good stuff to allow for growth"? This is not just a minor flaw in reasoning; it's a fundamental misunderstanding of what regeneration should mean for us!

    To make matters worse, this revelation could lead to misguided practices in regenerative medicine. Instead of developing therapies that promote healing and growth, we could end up with treatments that focus on the elimination of beneficial molecules. This is absolutely unacceptable! How dare the scientific community suggest that the way forward is through destruction rather than cultivation? We should be demanding more from our researchers, not less!

    Let’s not forget the ethical implications. If the path to regeneration is paved with the controlled destruction of vital components, how can we trust the outcomes? We’re putting lives in the hands of a process that promotes destruction. Just imagine the future of medicine being dictated by a philosophy that sounds more like a dystopian nightmare than a beacon of hope.

    It is high time we hold scientists accountable for the direction they are taking in regenerative research. We need a shift in focus that prioritizes constructive growth, not destructive measures. If we are serious about advancing regenerative medicine, we must reject this flawed notion and demand a commitment to genuine regeneration—the kind that nurtures life, rather than sabotages it.

    Let’s raise our voices against this madness. We deserve better than a science that advocates for destruction as the means to an end. The axolotls may thrive on this paradox, but we, as humans, should expect far more from our scientific endeavors.

    #RegenerativeMedicine #Axolotl #ScienceFail #MedicalEthics #Innovation
    Scientists Discover the Key to Axolotls’ Ability to Regenerate Limbs
    A new study reveals the key lies not in the production of a regrowth molecule, but in that molecule's controlled destruction. The discovery could inspire future regenerative medicine.
  • Burnout, $1M income, retiring early: Lessons from 29 people secretly working multiple remote jobs

    Secretly working multiple full-time remote jobs may sound like a nightmare — but Americans looking to make their financial dreams come true willingly hustle for it. Over the past two years, Business Insider has interviewed more than two dozen "overemployed" workers, many of whom work in tech roles. They tend to work long hours but say the extra earnings are worth it to pay off student debt, save for an early retirement, and afford expensive vacations and weight-loss drugs. Many started working multiple jobs during the pandemic, when remote job openings soared.

    One example is Sarah, who's on track to earn about $300,000 this year by secretly working two remote IT jobs. Over the last few years, Sarah said the extra income from job juggling has helped her save more than $100,000 in her 401(k)s, pay off $17,000 in credit card debt, and furnish her home. Sarah, who's in her 50s and lives in the Southeast, said working 12-hour days is worth it for the job security. This security came in handy when she was laid off from one of her jobs last year. She's since found a new second gig.

    "I want to ride this out until I retire," Sarah previously told BI. Business Insider verified her identity, but she asked to use a pseudonym, citing fears of professional repercussions. BI spoke to one boss who caught an employee secretly working another job and fired him. Job juggling could breach some employment contracts and be a fireable offense.

    Overemployed workers like Sarah told BI how they've landed extra roles, juggled the workload, and stayed under the radar. Some said they rely on tactics like blocking off calendars, using separate devices, minimizing meetings, and sticking to flexible roles with low oversight.
    While job juggling could have professional repercussions or lead to burnout, and some readers have questioned the ethics of this working arrangement, many workers have told BI they don't feel guilty about their job juggling — and that the financial benefits generally outweigh the downsides and risks.

    In recent years, some have struggled to land new remote gigs, due in part to hiring slowdowns and return-to-office mandates. Most said they plan to continue pursuing overemployment as long as they can. Read the stories ahead to learn how some Americans have managed the workload, risks, and stress of working multiple jobs — and transformed their finances.
    #burnout #income #retiring #early #lessons
    WWW.BUSINESSINSIDER.COM
    Burnout, $1M income, retiring early: Lessons from 29 people secretly working multiple remote jobs
  • EPFL Researchers Unveil FG2 at CVPR: A New AI Model That Slashes Localization Errors by 28% for Autonomous Vehicles in GPS-Denied Environments

    Navigating the dense urban canyons of cities like San Francisco or New York can be a nightmare for GPS systems. The towering skyscrapers block and reflect satellite signals, leading to location errors of tens of meters. For you and me, that might mean a missed turn. But for an autonomous vehicle or a delivery robot, that level of imprecision is the difference between a successful mission and a costly failure. These machines require pinpoint accuracy to operate safely and efficiently. Addressing this critical challenge, researchers from the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland have introduced a groundbreaking new method for visual localization, presented at CVPR 2025.
    Their new paper, “FG2: Fine-Grained Cross-View Localization by Fine-Grained Feature Matching,” presents a novel AI model that significantly enhances the ability of a ground-level system, like an autonomous car, to determine its exact position and orientation using only a camera and a corresponding aerial (or satellite) image. The new approach has demonstrated a remarkable 28% reduction in mean localization error compared to the previous state-of-the-art on a challenging public dataset.
    Key Takeaways:

    Superior Accuracy: The FG2 model reduces the average localization error by a significant 28% on the VIGOR cross-area test set, a challenging benchmark for this task.
    Human-like Intuition: Instead of relying on abstract descriptors, the model mimics human reasoning by matching fine-grained, semantically consistent features—like curbs, crosswalks, and buildings—between a ground-level photo and an aerial map.
    Enhanced Interpretability: The method allows researchers to “see” what the AI is “thinking” by visualizing exactly which features in the ground and aerial images are being matched, a major step forward from previous “black box” models.
    Weakly Supervised Learning: Remarkably, the model learns these complex and consistent feature matches without any direct labels for correspondences. It achieves this using only the final camera pose as a supervisory signal.

    Challenge: Seeing the World from Two Different Angles
    The core problem of cross-view localization is the dramatic difference in perspective between a street-level camera and an overhead satellite view. A building facade seen from the ground looks completely different from its rooftop signature in an aerial image. Existing methods have struggled with this. Some create a general “descriptor” for the entire scene, but this is an abstract approach that doesn’t mirror how humans naturally localize themselves by spotting specific landmarks. Other methods transform the ground image into a Bird’s-Eye-View (BEV) but are often limited to the ground plane, ignoring crucial vertical structures like buildings.

    FG2: Matching Fine-Grained Features
    The EPFL team’s FG2 method introduces a more intuitive and effective process. It aligns two sets of points: one generated from the ground-level image and another sampled from the aerial map.

    Here’s a breakdown of their innovative pipeline:

    Mapping to 3D: The process begins by taking the features from the ground-level image and lifting them into a 3D point cloud centered around the camera. This creates a 3D representation of the immediate environment.
    Smart Pooling to BEV: This is where the magic happens. Instead of simply flattening the 3D data, the model learns to intelligently select the most important features along the vertical (height) dimension for each point. It essentially asks, “For this spot on the map, is the ground-level road marking more important, or is the edge of that building’s roof the better landmark?” This selection process is crucial, as it allows the model to correctly associate features like building facades with their corresponding rooftops in the aerial view.
    Feature Matching and Pose Estimation: Once both the ground and aerial views are represented as 2D point planes with rich feature descriptors, the model computes the similarity between them. It then samples a sparse set of the most confident matches and uses a classic geometric algorithm called Procrustes alignment to calculate the precise 3-DoF (x, y, and yaw) pose.
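    The authors' code isn't shown here, but the two most concrete steps above — weighted pooling along the height axis and the Procrustes solve for the 3-DoF pose — can be sketched generically. In this hedged NumPy sketch, the height-selection scores and the match confidences are assumed inputs (in FG2 they are learned quantities); the pose step is the standard SVD-based rigid alignment that the pipeline names.

    ```python
    # Generic NumPy sketch of the two steps described above; the learned
    # parts (height scores, match confidences) are stand-in inputs.
    import numpy as np

    def softmax(x, axis=0):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def pool_vertical(feats, height_scores):
        """Collapse (H, N, C) 3D features to an (N, C) BEV plane.

        Rather than flattening, weight each height bin by a score, so a
        rooftop edge can outweigh a ground marking where it is the better
        landmark (FG2 learns these scores; here they are given).
        """
        w = softmax(height_scores, axis=0)             # (H, N) weights over height
        return (w[..., None] * feats).sum(axis=0)

    def procrustes_2d(ground_pts, aerial_pts, conf):
        """Weighted rigid 2D alignment (rotation + translation) via SVD,
        yielding a 3-DoF pose (x, y, yaw) in the aerial map frame."""
        w = conf / conf.sum()
        mu_g = (w[:, None] * ground_pts).sum(axis=0)   # weighted centroids
        mu_a = (w[:, None] * aerial_pts).sum(axis=0)
        H = ((ground_pts - mu_g) * w[:, None]).T @ (aerial_pts - mu_a)
        U, _, Vt = np.linalg.svd(H)                    # 2x2 cross-covariance
        d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflection
        R = Vt.T @ np.diag([1.0, d]) @ U.T             # rotation: ground -> aerial
        t = mu_a - R @ mu_g                            # translation (x, y)
        yaw = np.arctan2(R[1, 0], R[0, 0])
        return t, yaw

    # Sanity check: a 30-degree rotation plus a shift should be recovered.
    rng = np.random.default_rng(0)
    P = rng.normal(size=(8, 2))
    th = np.deg2rad(30.0)
    R_true = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    Q = P @ R_true.T + np.array([5.0, -2.0])
    t, yaw = procrustes_2d(P, Q, np.ones(8))
    print(np.rad2deg(yaw))  # ~30.0
    ```

    The closed-form SVD solve is cheap and differentiable almost everywhere, which is one plausible reason a loss on the final pose alone can supervise the upstream matching, as the Key Takeaways note.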

    Unprecedented Performance and Interpretability
    The results speak for themselves. On the challenging VIGOR dataset, which includes images from different cities in its cross-area test, FG2 reduced the mean localization error by 28% compared to the previous best method. It also demonstrated superior generalization capabilities on the KITTI dataset, a staple in autonomous driving research.

    Perhaps more importantly, the FG2 model offers a new level of transparency. By visualizing the matched points, the researchers showed that the model learns semantically consistent correspondences without being explicitly told to. For example, the system correctly matches zebra crossings, road markings, and even building facades in the ground view to their corresponding locations on the aerial map. This interpretability is extremely valuable for building trust in safety-critical autonomous systems.
    “A Clearer Path” for Autonomous Navigation
    The FG2 method represents a significant leap forward in fine-grained visual localization. By developing a model that intelligently selects and matches features in a way that mirrors human intuition, the EPFL researchers have not only shattered previous accuracy records but also made the decision-making process of the AI more interpretable. This work paves the way for more robust and reliable navigation systems for autonomous vehicles, drones, and robots, bringing us one step closer to a future where machines can confidently navigate our world, even when GPS fails them.

    Check out the Paper. All credit for this research goes to the researchers of this project.
    #epfl #researchers #unveil #fg2 #cvpr
    WWW.MARKTECHPOST.COM
    EPFL Researchers Unveil FG2 at CVPR: A New AI Model That Slashes Localization Errors by 28% for Autonomous Vehicles in GPS-Denied Environments
  • Tech billionaires are making a risky bet with humanity’s future

    “The best way to predict the future is to invent it,” the famed computer scientist Alan Kay once said. Uttered more out of exasperation than as inspiration, his remark has nevertheless attained gospel-like status among Silicon Valley entrepreneurs, in particular a handful of tech billionaires who fancy themselves the chief architects of humanity’s future. 

    Sam Altman, Jeff Bezos, Elon Musk, and others may have slightly different goals and ambitions in the near term, but their grand visions for the next decade and beyond are remarkably similar. Framed less as technological objectives and more as existential imperatives, they include aligning AI with the interests of humanity; creating an artificial superintelligence that will solve all the world’s most pressing problems; merging with that superintelligence to achieve immortality; establishing a permanent, self-sustaining colony on Mars; and, ultimately, spreading out across the cosmos.

    While there’s a sprawling patchwork of ideas and philosophies powering these visions, three features play a central role, says Adam Becker, a science writer and astrophysicist: an unshakable certainty that technology can solve any problem, a belief in the necessity of perpetual growth, and a quasi-religious obsession with transcending our physical and biological limits. In his timely new book, More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity, Becker calls this triumvirate of beliefs the “ideology of technological salvation” and warns that tech titans are using it to steer humanity in a dangerous direction. 

    “In most of these isms you’ll find the idea of escape and transcendence, as well as the promise of an amazing future, full of unimaginable wonders—so long as we don’t get in the way of technological progress.”

    “The credence that tech billionaires give to these specific science-fictional futures validates their pursuit of more—to portray the growth of their businesses as a moral imperative, to reduce the complex problems of the world to simple questions of technology, to justify nearly any action they might want to take,” he writes. Becker argues that the only way to break free of these visions is to see them for what they are: a convenient excuse to continue destroying the environment, skirt regulations, amass more power and control, and dismiss the very real problems of today to focus on the imagined ones of tomorrow.

    A lot of critics, academics, and journalists have tried to define or distill the Silicon Valley ethos over the years. There was the “Californian Ideology” in the mid-’90s, the “Move fast and break things” era of the early 2000s, and more recently the “Libertarianism for me, feudalism for thee” or “techno-authoritarian” views. How do you see the “ideology of technological salvation” fitting in?

    I’d say it’s very much of a piece with those earlier attempts to describe the Silicon Valley mindset. I mean, you can draw a pretty straight line from Max More’s principles of transhumanism in the ’90s to the Californian Ideology and through to what I call the ideology of technological salvation. The fact is, many of the ideas that define or animate Silicon Valley thinking have never been much of a mystery—libertarianism, an antipathy toward the government and regulation, the boundless faith in technology, the obsession with optimization.

    What can be difficult is to parse where all these ideas come from and how they fit together—or if they fit together at all. I came up with the ideology of technological salvation as a way to name and give shape to a group of interrelated concepts and philosophies that can seem sprawling and ill-defined at first, but that actually sit at the center of a worldview shared by venture capitalists, executives, and other thought leaders in the tech industry. 

    Readers will likely be familiar with the tech billionaires featured in your book and at least some of their ambitions. I’m guessing they’ll be less familiar with the various “isms” that you argue have influenced or guided their thinking. Effective altruism, rationalism, longtermism, extropianism, effective accelerationism, futurism, singularitarianism, transhumanism—there are a lot of them. Is there something that they all share?

    They’re definitely connected. In a sense, you could say they’re all versions or instantiations of the ideology of technological salvation, but there are also some very deep historical connections between the people in these groups and their aims and beliefs. The Extropians in the late ’80s believed in self-transformation through technology and freedom from limitations of any kind—ideas that Ray Kurzweil eventually helped popularize and legitimize for a larger audience with the Singularity.

    In most of these isms you’ll find the idea of escape and transcendence, as well as the promise of an amazing future, full of unimaginable wonders—so long as we don’t get in the way of technological progress. I should say that AI researcher Timnit Gebru and philosopher Émile Torres have also done a lot of great work linking these ideologies to one another and showing how they all have ties to racism, misogyny, and eugenics.

    You argue that the Singularity is the purest expression of the ideology of technological salvation. How so?

    Well, for one thing, it’s just this very simple, straightforward idea—the Singularity is coming and will occur when we merge our brains with the cloud and expand our intelligence a millionfold. This will then deepen our awareness and consciousness and everything will be amazing. In many ways, it’s a fantastical vision of a perfect technological utopia. We’re all going to live as long as we want in an eternal paradise, watched over by machines of loving grace, and everything will just get exponentially better forever. The end.

    The other isms I talk about in the book have a little more … heft isn’t the right word—they just have more stuff going on. There’s more to them, right? The rationalists and the effective altruists and the longtermists—they think that something like a singularity will happen, or could happen, but that there’s this really big danger between where we are now and that potential event. We have to address the fact that an all-powerful AI might destroy humanity—the so-called alignment problem—before any singularity can happen. 

    Then you’ve got the effective accelerationists, who are more like Kurzweil, but they’ve got more of a tech-bro spin on things. They’ve taken some of the older transhumanist ideas from the Singularity and updated them for startup culture. Marc Andreessen’s “Techno-Optimist Manifesto” is a good example. You could argue that all of these other philosophies that have gained purchase in Silicon Valley are just twists on Kurzweil’s Singularity, each one building on top of the core ideas of transcendence, techno-optimism, and exponential growth.

    Early on in the book you take aim at that idea of exponential growth—specifically, Kurzweil’s “Law of Accelerating Returns.” Could you explain what that is and why you think it’s flawed?

    Kurzweil thinks there’s this immutable “Law of Accelerating Returns” at work in the affairs of the universe, especially when it comes to technology. It’s the idea that technological progress isn’t linear but exponential. Advancements in one technology fuel even more rapid advancements in the future, which in turn lead to greater complexity and greater technological power, and on and on. This is just a mistake. Kurzweil uses the Law of Accelerating Returns to explain why the Singularity is inevitable, but to be clear, he’s far from the only one who believes in this so-called law.

    “I really believe that when you get as rich as some of these guys are, you can just do things that seem like thinking and no one is really going to correct you or tell you things you don’t want to hear.”

    My sense is that it’s an idea that comes from staring at Moore’s Law for too long. Moore’s Law is of course the famous prediction that the number of transistors on a chip will double roughly every two years, with a minimal increase in cost. Now, that has in fact happened for the last 50 years or so, but not because of some fundamental law in the universe. It’s because the tech industry made a choice and some very sizable investments to make it happen. Moore’s Law was ultimately this really interesting observation or projection of a historical trend, but even Gordon Moore knew that it wouldn’t and couldn’t last forever. In fact, some think it’s already over.
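    As a quick worked example of what that projection implies (my arithmetic, not a figure from the interview): doubling every two years for 50 years is 25 doublings, i.e. N(t) = N₀ · 2^(t/2), so N(50)/N₀ = 2^25 ≈ 3.4 × 10⁷ — a roughly 33-million-fold increase. That scale helps explain why treating a sustained industrial trend as a law of nature makes the Singularity feel inevitable.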

    These ideologies take inspiration from some pretty unsavory characters. Transhumanism, you say, was first popularized by the eugenicist Julian Huxley in a speech in 1951. Marc Andreessen’s “Techno-Optimist Manifesto” name-checks the noted fascist Filippo Tommaso Marinetti and his futurist manifesto. Did you get the sense while researching the book that the tech titans who champion these ideas understand their dangerous origins?

    You’re assuming in the framing of that question that there’s any rigorous thought going on here at all. As I say in the book, Andreessen’s manifesto runs almost entirely on vibes, not logic. I think someone may have told him about the futurist manifesto at some point, and he just sort of liked the general vibe, which is why he paraphrases a part of it. Maybe he learned something about Marinetti and forgot it. Maybe he didn’t care. 

    I really believe that when you get as rich as some of these guys are, you can just do things that seem like thinking and no one is really going to correct you or tell you things you don’t want to hear. For many of these billionaires, the vibes of fascism, authoritarianism, and colonialism are attractive because they’re fundamentally about creating a fantasy of control. 

    You argue that these visions of the future are being used to hasten environmental destruction, increase authoritarianism, and exacerbate inequalities. You also admit that they appeal to lots of people who aren’t billionaires. Why do you think that is? 

    I think a lot of us are also attracted to these ideas for the same reasons the tech billionaires are—they offer this fantasy of knowing what the future holds, of transcending death, and a sense that someone or something out there is in control. It’s hard to overstate how comforting a simple, coherent narrative can be in an increasingly complex and fast-moving world. This is of course what religion offers for many of us, and I don’t think it’s an accident that a sizable number of people in the rationalist and effective altruist communities are actually ex-evangelicals.

    More than any one specific technology, it seems like the most consequential thing these billionaires have invented is a sense of inevitability—that their visions for the future are somehow predestined. How does one fight against that?

    It’s a difficult question. For me, the answer was to write this book. I guess I’d also say this: Silicon Valley enjoyed well over a decade with little to no pushback on anything. That’s definitely a big part of how we ended up in this mess. There was no regulation, very little critical coverage in the press, and a lot of self-mythologizing going on. Things have started to change, especially as the social and environmental damage that tech companies and industry leaders have helped facilitate has become clearer. That understanding is an essential part of deflating the power of these tech billionaires and breaking free of their visions. When we understand that these dreams of the future are actually nightmares for the rest of us, I think you’ll see that sense of inevitability vanish pretty fast.

    This interview was edited for length and clarity.

    Bryan Gardiner is a writer based in Oakland, California. 