• So, the illustrious Pokémon Scarlet & Violet are finally gracing the Switch 2. I guess after the original Switch struggled like a Magikarp in a battle, they decided to give it a new home. Who knew that upgrading hardware could also mean upgrading our hopes and dreams? It’s like trading in your trusty old bicycle for a shiny new Ferrari, only to realize it still has the same old engine under the hood. But hey, if you ever wanted to see your Pokémon look slightly less pixelated while getting stuck in the same glitches, this is your moment! Let’s gear up for a wild ride filled with nostalgia and… more glitches, probably.

    #PokemonScarlet #PokemonViolet #Switch2 #GamingHumor
    Everything You Need To Know About Playing Pokémon Scarlet & Violet On Switch 2
    kotaku.com
    Modern Pokémon games have struggled quite a bit on the original Switch, so how do they fare with new hardware?
  • The release of the 'Atelier Ryza Secret Trilogy Deluxe Pack' is yet another example of the gaming industry prioritizing profit over genuine innovation. Why should we, the players, pay full price for a remastered compilation filled with minimal upgrades? This is a blatant cash grab disguised as a "deluxe" offering! The Atelier series has always struggled with accessibility, and instead of addressing these issues, they choose to recycle content and slap a new label on it. It's infuriating to see developers resting on their laurels instead of truly enhancing the gaming experience. We deserve better than this lazy approach!

    #AtelierRyza #GamingCommunity #CashGrab #RPG #GameDevelopers
    Atelier Ryza Secret Trilogy Deluxe Pack: the remastered compilation details its new features and arrives on November 13
    www.actugaming.net
    ActuGaming.net. Atelier Ryza Secret Trilogy Deluxe Pack: the remastered compilation details its new features and arrives on November 13. The Atelier series is not the most accessible of RPG sagas in our region, and […]
  • In 1983, the Coleco Adam emerged with dreams of greatness, a beacon of hope in a world filled with ambition. Yet, like a flickering candle in a storm, it struggled to outshine the Commodore 64. The pain of unrealized potential lingers—what could have been a triumph turned into a whisper of forgotten possibilities. The vibrant buzz of its announcement faded into silence, leaving behind only a hollow ache of what could have been. Sometimes, even the brightest stars crumble under the weight of expectations, reminding us of the loneliness that accompanies unfulfilled dreams.

    #ColecoAdam #Commodore64 #UnfulfilledDreams #TechHistory #Loneliness
    Coleco Adam: A Commodore 64 Competitor, Almost
    hackaday.com
    For a brief, buzzing moment in 1983, the Coleco Adam looked like it might out-64 the Commodore 64. Announced with lots of ambition, this 8-bit marvel promised a complete computing …
  • Burnout, $1M income, retiring early: Lessons from 29 people secretly working multiple remote jobs

    Secretly working multiple full-time remote jobs may sound like a nightmare — but Americans looking to make their financial dreams come true willingly hustle for it. Over the past two years, Business Insider has interviewed more than two dozen "overemployed" workers, many of whom work in tech roles. They tend to work long hours but say the extra earnings are worth it to pay off student debt, save for an early retirement, and afford expensive vacations and weight-loss drugs. Many started working multiple jobs during the pandemic, when remote job openings soared.
    One example is Sarah, who's on track to earn about $300,000 this year by secretly working two remote IT jobs. Over the last few years, Sarah said the extra income from job juggling has helped her save more than $100,000 in her 401(k)s, pay off $17,000 in credit card debt, and furnish her home. Sarah, who's in her 50s and lives in the Southeast, said working 12-hour days is worth it for the job security. This security came in handy when she was laid off from one of her jobs last year. She's since found a new second gig.
    "I want to ride this out until I retire," Sarah previously told BI. Business Insider verified her identity, but she asked to use a pseudonym, citing fears of professional repercussions. BI spoke to one boss who caught an employee secretly working another job and fired him. Job juggling could breach some employment contracts and be a fireable offense.
    Overemployed workers like Sarah told BI how they've landed extra roles, juggled the workload, and stayed under the radar. Some said they rely on tactics like blocking off calendars, using separate devices, minimizing meetings, and sticking to flexible roles with low oversight.
    While job juggling could have professional repercussions or lead to burnout, and some readers have questioned the ethics of this working arrangement, many workers have told BI they don't feel guilty about their job juggling — and that the financial benefits generally outweigh the downsides and risks.

    In recent years, some have struggled to land new remote gigs, due in part to hiring slowdowns and return-to-office mandates. Most said they plan to continue pursuing overemployment as long as they can. Read the stories ahead to learn how some Americans have managed the workload, risks, and stress of working multiple jobs — and transformed their finances.
    #burnout #income #retiring #early #lessons
    Burnout, $1M income, retiring early: Lessons from 29 people secretly working multiple remote jobs
    www.businessinsider.com
  • EPFL Researchers Unveil FG2 at CVPR: A New AI Model That Slashes Localization Errors by 28% for Autonomous Vehicles in GPS-Denied Environments

    Navigating the dense urban canyons of cities like San Francisco or New York can be a nightmare for GPS systems. The towering skyscrapers block and reflect satellite signals, leading to location errors of tens of meters. For you and me, that might mean a missed turn. But for an autonomous vehicle or a delivery robot, that level of imprecision is the difference between a successful mission and a costly failure. These machines require pinpoint accuracy to operate safely and efficiently. Addressing this critical challenge, researchers from the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland have introduced a groundbreaking new method for visual localization, presented at CVPR 2025.
    Their new paper, “FG2: Fine-Grained Cross-View Localization by Fine-Grained Feature Matching,” presents a novel AI model that significantly enhances the ability of a ground-level system, like an autonomous car, to determine its exact position and orientation using only a camera and a corresponding aerial (or satellite) image. The new approach has demonstrated a remarkable 28% reduction in mean localization error compared to the previous state-of-the-art on a challenging public dataset.
    Key Takeaways:

    Superior Accuracy: The FG2 model reduces the average localization error by a significant 28% on the VIGOR cross-area test set, a challenging benchmark for this task.
    Human-like Intuition: Instead of relying on abstract descriptors, the model mimics human reasoning by matching fine-grained, semantically consistent features—like curbs, crosswalks, and buildings—between a ground-level photo and an aerial map.
    Enhanced Interpretability: The method allows researchers to “see” what the AI is “thinking” by visualizing exactly which features in the ground and aerial images are being matched, a major step forward from previous “black box” models.
    Weakly Supervised Learning: Remarkably, the model learns these complex and consistent feature matches without any direct labels for correspondences. It achieves this using only the final camera pose as a supervisory signal.
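    Note that this summary does not spell out the training objective; pose-only supervision simply means a single scalar error on the predicted planar pose is backpropagated through a differentiable matcher and pose solver, so correspondences are never labeled directly. The Python snippet below is a schematic illustration of such a loss under that assumption, with made-up tensor names rather than the authors’ implementation.

```python
import torch

def pose_only_loss(pred_xy, pred_yaw, gt_xy, gt_yaw):
    """Supervise the network with the final pose alone (no correspondence labels).

    pred_xy, gt_xy   : (B, 2) planar positions in metres.
    pred_yaw, gt_yaw : (B,)   headings in radians.
    Gradients flow back through the (assumed differentiable) matching and
    pose-solving stages, which is what lets consistent feature matches emerge.
    """
    trans_err = torch.linalg.norm(pred_xy - gt_xy, dim=-1)
    # Wrap the angular difference into [-pi, pi) before penalising it.
    ang_err = torch.remainder(pred_yaw - gt_yaw + torch.pi, 2 * torch.pi) - torch.pi
    return (trans_err + ang_err.abs()).mean()
```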

    Challenge: Seeing the World from Two Different Angles
    The core problem of cross-view localization is the dramatic difference in perspective between a street-level camera and an overhead satellite view. A building facade seen from the ground looks completely different from its rooftop signature in an aerial image. Existing methods have struggled with this. Some create a general “descriptor” for the entire scene, but this is an abstract approach that doesn’t mirror how humans naturally localize themselves by spotting specific landmarks. Other methods transform the ground image into a Bird’s-Eye-View (BEV) but are often limited to the ground plane, ignoring crucial vertical structures like buildings.

    FG2: Matching Fine-Grained Features
    The EPFL team’s FG2 method introduces a more intuitive and effective process. It aligns two sets of points: one generated from the ground-level image and another sampled from the aerial map.

    Here’s a breakdown of their innovative pipeline:

    Mapping to 3D: The process begins by taking the features from the ground-level image and lifting them into a 3D point cloud centered around the camera. This creates a 3D representation of the immediate environment.
    Smart Pooling to BEV: This is where the magic happens. Instead of simply flattening the 3D data, the model learns to intelligently select the most important features along the vertical (height) dimension for each point. It essentially asks, “For this spot on the map, is the ground-level road marking more important, or is the edge of that building’s roof the better landmark?” This selection process is crucial, as it allows the model to correctly associate features like building facades with their corresponding rooftops in the aerial view.
    Feature Matching and Pose Estimation: Once both the ground and aerial views are represented as 2D point planes with rich feature descriptors, the model computes the similarity between them. It then samples a sparse set of the most confident matches and uses a classic geometric algorithm called Procrustes alignment to calculate the precise 3-DoF (x, y, and yaw) pose; a minimal sketch of that final solve appears below.
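    The closed-form solve in that last step is a standard weighted Procrustes (Kabsch) alignment in 2D. The sketch below, in plain NumPy, illustrates only this step and assumes matched ground/aerial points and confidence weights are already in hand; the function and array names are illustrative, not taken from the FG2 code.

```python
import numpy as np

def procrustes_2d(ground_pts, aerial_pts, weights):
    """Weighted 2D rigid alignment (rotation + translation, no scale).

    ground_pts : (N, 2) BEV points derived from the ground image (camera frame).
    aerial_pts : (N, 2) matched points sampled from the aerial map.
    weights    : (N,)   match confidences, e.g. similarity scores.
    Returns (x, y, yaw): camera position in map coordinates and heading.
    """
    w = weights / weights.sum()
    p_mean = (w[:, None] * ground_pts).sum(axis=0)
    q_mean = (w[:, None] * aerial_pts).sum(axis=0)
    P = ground_pts - p_mean
    Q = aerial_pts - q_mean

    # Weighted cross-covariance; its SVD yields the optimal rotation.
    H = P.T @ (w[:, None] * Q)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = q_mean - R @ p_mean

    yaw = np.arctan2(R[1, 0], R[0, 0])
    return t[0], t[1], yaw

# Toy check: three landmarks, camera actually at (5, 3) with a 30-degree yaw.
rng = np.random.default_rng(0)
true_yaw = np.deg2rad(30.0)
R_true = np.array([[np.cos(true_yaw), -np.sin(true_yaw)],
                   [np.sin(true_yaw),  np.cos(true_yaw)]])
ground = rng.normal(size=(3, 2))
aerial = ground @ R_true.T + np.array([5.0, 3.0])
print(procrustes_2d(ground, aerial, np.ones(3)))   # ~ (5.0, 3.0, 0.5236)
```

    The determinant check keeps the result a proper rotation even when the sampled matches are nearly degenerate.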

    Unprecedented Performance and Interpretability
    The results speak for themselves. On the challenging VIGOR dataset, which includes images from different cities in its cross-area test, FG2 reduced the mean localization error by 28% compared to the previous best method. It also demonstrated superior generalization capabilities on the KITTI dataset, a staple in autonomous driving research.

    Perhaps more importantly, the FG2 model offers a new level of transparency. By visualizing the matched points, the researchers showed that the model learns semantically consistent correspondences without being explicitly told to. For example, the system correctly matches zebra crossings, road markings, and even building facades in the ground view to their corresponding locations on the aerial map. This interpretability is extremely valuable for building trust in safety-critical autonomous systems.
    “A Clearer Path” for Autonomous Navigation
    The FG2 method represents a significant leap forward in fine-grained visual localization. By developing a model that intelligently selects and matches features in a way that mirrors human intuition, the EPFL researchers have not only shattered previous accuracy records but also made the decision-making process of the AI more interpretable. This work paves the way for more robust and reliable navigation systems for autonomous vehicles, drones, and robots, bringing us one step closer to a future where machines can confidently navigate our world, even when GPS fails them.

    Check out the Paper. All credit for this research goes to the researchers of this project.
    #epfl #researchers #unveil #fg2 #cvpr
    EPFL Researchers Unveil FG2 at CVPR: A New AI Model That Slashes Localization Errors by 28% for Autonomous Vehicles in GPS-Denied Environments
    www.marktechpost.com
  • From Rivals to Partners: What’s Up with the Google and OpenAI Cloud Deal?

    Google and OpenAI struck a cloud computing deal in May, according to a Reuters report.
    The deal surprised the industry as the two are seen as major AI rivals.
    Signs of friction between OpenAI and Microsoft may have also fueled the move.
    The partnership is a win-win. OpenAI gets more of the badly needed computing resources, while Google profits from its $75B investment to boost its cloud computing capacity in 2025.

    In a surprise move, Google and OpenAI inked a deal that will see the AI rivals partnering to address OpenAI’s growing cloud computing needs.
    The story, reported by Reuters, cited anonymous sources saying that the deal had been discussed for months and finalized in May. Around this time, OpenAI had struggled to keep up with demand as its number of weekly active users and business users grew in Q1 2025. There’s also speculation of friction between OpenAI and its biggest investor, Microsoft.
    Why the Deal Surprised the Tech Industry
    The rivalry between the two companies hardly needs an introduction. When OpenAI’s ChatGPT launched in November 2022, it posed a huge threat to Google that triggered a code red within the search giant and cloud services provider.
    Since then, Google has launched Bard (now known as Gemini) to compete with OpenAI head-on. However, it had to play catch-up with OpenAI’s more advanced ChatGPT AI chatbot. This led to numerous issues with Bard, with critics referring to it as a half-baked product.

    A post on X in February 2023 showed the Bard AI chatbot erroneously stating that the James Webb Telescope took the first picture of an exoplanet. It was, in fact, the European Southern Observatory’s Very Large Telescope that did this in 2004. Google’s parent company Alphabet lost $100B off its market value within 24 hours as a result.
    Two years on, Gemini has made significant strides in terms of accuracy, quoting sources, and depth of information, but it is still prone to hallucinations from time to time. You can see examples of these posted on social media, like telling a user to make spicy spaghetti with gasoline or the AI thinking it’s still 2024.

    With the entire industry shifting towards more AI integrations, Google went ahead and integrated its AI suite into Search via AI Overviews. It then doubled down on this integration with AI Mode, an experimental feature that lets you perform AI-powered searches by typing in a question, uploading a photo, or using your voice.
    In the future, AI Mode from Google Search could be a viable competitor to ChatGPT—unless, of course, Google decides to bin it along with many of its previous products. Given the scope of the investment and Gemini’s significant improvement, we doubt AI + Search will be axed.
    It’s a Win-Win for Google and OpenAI—Not So Much for Microsoft?
    In the business world, money and the desire for expansion can break even the biggest rivalries. And the one between the two tech giants isn’t an exception.
    Partly, it could be attributed to OpenAI’s relationship with Microsoft. Although the Redmond, Washington-based company has invested billions in OpenAI and has the resources to meet the latter’s cloud computing needs, their partnership hasn’t always been rosy. 
    Some would say it began when OpenAI CEO Sam Altman was briefly ousted in November 2023, which put a strain on the ‘best bromance in tech’ between him and Microsoft CEO Satya Nadella. Then last year, Microsoft added OpenAI to its list of competitors in the AI space before eventually losing its status as OpenAI’s exclusive cloud provider in January 2025.
    If that wasn’t enough, there’s also the matter of the two companies’ goal of achieving artificial general intelligence (AGI). Defined as the point when OpenAI develops AI systems that generate $100B in profits, reaching AGI means Microsoft will lose access to the former’s technology. With the company behind ChatGPT expecting to triple its 2025 revenue to $12.7B from $3.7B the previous year, this could happen sooner rather than later.
    While OpenAI already has deals with Microsoft, Oracle, and CoreWeave to provide it with cloud services and access to infrastructure, it needs more, and soon, as the company has seen massive growth in the past few months.
    In February, OpenAI announced that it had over 400M weekly active users, up from 300M in December 2024. Meanwhile, the number of its business users who use ChatGPT Enterprise, ChatGPT Team, and ChatGPT Edu products also jumped from 2M in February to 3M in March.
    The good news is Google is more than ready to deliver. Its parent company has earmarked $75B towards its investments in AI this year, which includes boosting its cloud computing capacity.

    In April, Google launched its 7th-generation tensor processing unit (TPU), called Ironwood, which has been designed specifically for inference. According to the company, the new TPU will help power AI models that will ‘proactively retrieve and generate data to collaboratively deliver insights and answers, not just data.’ The deal with OpenAI can be seen as a vote of confidence in Google’s cloud computing capability that competes with the likes of Microsoft Azure and Amazon Web Services. It also expands Google’s vast client list that includes tech, gaming, entertainment, and retail companies, as well as organizations in the public sector.

    #rivals #partners #whats #with #google
    From Rivals to Partners: What’s Up with the Google and OpenAI Cloud Deal?
    techreport.com
    Google and OpenAI struck a cloud computing deal in May, according to a Reuters report. The deal surprised the industry as the two are seen as major AI rivals. Signs of friction between OpenAI and Microsoft may have also fueled the move. The partnership is a win-win.OpenAI gets more badly needed computing resources while Google profits from its $75B investment to boost its cloud computing capacity in 2025. In a surprise move, Google and OpenAI inked a deal that will see the AI rivals partnering to address OpenAI’s growing cloud computing needs. The story, reported by Reuters, cited anonymous sources saying that the deal had been discussed for months and finalized in May. Around this time, OpenAI has struggled to keep up with demand as its number of weekly active users and business users grew in Q1 2025. There’s also speculation of friction between OpenAI and its biggest investor Microsoft. Why the Deal Surprised the Tech Industry The rivalry between the two companies hardly needs an introduction. When OpenAI’s ChatGPT launched in November 2022, it posed a huge threat to Google that triggered a code red within the search giant and cloud services provider. Since then, Google has launched Bard (now known as Gemini) to compete with OpenAI head-on. However, it had to play catch up with OpenAI’s more advanced ChatGPT AI chatbot. This led to numerous issues with Bard, with critics referring to it as a half-baked product. A post on X in February 2023 showed the Bard AI chatbot erroneously stating that the James Webb Telescope took the first picture of an exoplanet. It was, in fact, the European Southern Observatory’s Very Large Telescope that did this in 2004. Google’s parent company Alphabet lost $100B off its market value within 24 hours as a result. Two years on, Gemini made significant strides in terms of accuracy, quoting sources, and depth of information, but is still prone to hallucinations from time to time. You can see examples of these posted on social media, like telling a user to make spicy spaghetti with gasoline or the AI thinking it’s still 2024.  And then there’s this gem: With the entire industry shifting towards more AI integrations, Google went ahead and integrated its AI suite into Search via AI Overviews. It then doubled down on this integration with AI Mode, an experimental feature that lets you perform AI-powered searches by typing in a question, uploading a photo, or using your voice. In the future, AI Mode from Google Search could be a viable competitor to ChatGPT—unless of course, Google decides to bin it along with many of its previous products. Given the scope of the investment, and Gemini’s significant improvement, we doubt AI + Search will be axed. It’s a Win-Win for Google and OpenAI—Not So Much for Microsoft? In the business world, money and the desire for expansion can break even the biggest rivalries. And the one between the two tech giants isn’t an exception. Partly, it could be attributed to OpenAI’s relationship with Microsoft. Although the Redmond, Washington-based company has invested billions in OpenAI and has the resources to meet the latter’s cloud computing needs, their partnership hasn’t always been rosy.  Some would say it began when OpenAI CEO Sam Altman was briefly ousted in November 2023, which put a strain on the ‘best bromance in tech’ between him and Microsoft CEO Satya Nadella. 
Then last year, Microsoft added OpenAI to its list of competitors in the AI space before eventually losing its status as OpenAI’s exclusive cloud provider in January 2025. If that wasn’t enough, there’s also the matter of the two companies’ goal of achieving artificial general intelligence (AGI). Defined as when OpenAI develops AI systems that generate $100B in profits, reaching AGI means Microsoft will lose access to the former’s technology. With the company behind ChatGPT expecting to triple its 2025 revenue to $12.7 from $3.7B the previous year, this could happen sooner rather than later. While OpenAI already has deals with Microsoft, Oracle, and CoreWeave to provide it with cloud services and access to infrastructure, it needs more and soon as the company has seen massive growth in the past few months. In February, OpenAI announced that it had over 400M weekly active users, up from 300M in December 2024. Meanwhile, the number of its business users who use ChatGPT Enterprise, ChatGPT Team, and ChatGPT Edu products also jumped from 2M in February to 3M in March. The good news is Google is more than ready to deliver. Its parent company has earmarked $75B towards its investments in AI this year, which includes boosting its cloud computing capacity. In April, Google launched its 7th generation tensor processing unit (TPU) called Ironwood, which has been designed specifically for inference. According to the company, the new TPU will help power AI models that will ‘proactively retrieve and generate data to collaboratively deliver insights and answers, not just data.’The deal with OpenAI can be seen as a vote of confidence in Google’s cloud computing capability that competes with the likes of Microsoft Azure and Amazon Web Services. It also expands Google’s vast client list that includes tech, gaming, entertainment, and retail companies, as well as organizations in the public sector. As technology continues to evolve—from the return of 'dumbphones' to faster and sleeker computers—seasoned tech journalist, Cedric Solidon, continues to dedicate himself to writing stories that inform, empower, and connect with readers across all levels of digital literacy. With 20 years of professional writing experience, this University of the Philippines Journalism graduate has carved out a niche as a trusted voice in tech media. Whether he's breaking down the latest advancements in cybersecurity or explaining how silicon-carbon batteries can extend your phone’s battery life, his writing remains rooted in clarity, curiosity, and utility. Long before he was writing for Techreport, HP, Citrix, SAP, Globe Telecom, CyberGhost VPN, and ExpressVPN, Cedric's love for technology began at home courtesy of a Nintendo Family Computer and a stack of tech magazines. Growing up, his days were often filled with sessions of Contra, Bomberman, Red Alert 2, and the criminally underrated Crusader: No Regret. But gaming wasn't his only gateway to tech.  He devoured every T3, PCMag, and PC Gamer issue he could get his hands on, often reading them cover to cover. It wasn’t long before he explored the early web in IRC chatrooms, online forums, and fledgling tech blogs, soaking in every byte of knowledge from the late '90s and early 2000s internet boom. That fascination with tech didn’t just stick. It evolved into a full-blown calling. After graduating with a degree in Journalism, he began his writing career at the dawn of Web 2.0. What started with small editorial roles and freelance gigs soon grew into a full-fledged career. 
  • Tanks, guns and face-painting

    Of all the jarring things I’ve witnessed on the National Mall, nothing will beat the image of the first thing I saw after I cleared security at the Army festival: a child, sitting at the controls of an M119A3 Howitzer, being instructed by a soldier on how to aim it, as his red-hatted parents took a photo with the Washington Monument in the background.

    The primary stated reason for the Grand Military Parade is to celebrate the US Army’s 250th birthday. The second stated reason is to use the event for recruiting purposes. Like other military branches, the Army has struggled to meet its enlistment quotas for over a decade. And according to very defensive Army spokespeople trying to convince skeptics that the parade was not for Donald Trump’s birthday, there had always been a festival planned on the National Mall that day, it had been in the works for over two years, and the parade, tacked on just two months ago, was purely incidental. Assuming that their statement was true, I wasn’t quite sure if they had anticipated so many people in blatant MAGA swag in attendance — or how eager they were to bring their children and hand them assault rifles.

    WASHINGTON, DC - JUNE 14: An Army festival attendee holds an M3 Carl Gustaf Recoilless Rifle on June 14, 2025 in Washington, DC. Photo by Anna Moneymaker / Getty Images

    There had been kid-friendly events planned: an NFL Kids Zone with a photo op with the Washington Commanders’ mascot, a few face-painting booths, several rock-climbing walls. But they were dwarfed, literally, by dozens of war machines parked along the jogging paths: massive tanks, trucks with gun-mounted turrets, assault helicopters, many of them currently used in combat, all with helpful signs explaining the history of each vehicle, as well as the guns and ammo it could carry. And the families — wearing everything from J6 shirts to Vineyard Vines — were drawn more to the military vehicles, all too ready to place their kids in the cockpit of an AH-1F Cobra helicopter as they pretended to aim the nose-mounted 3-barrelled Gatling cannon. Parents told their children to smile as they poked their little heads out of the hatch of an M1135 Stryker armored vehicle; reminded them to be patient as they waited in line to sit inside an M109A7 self-propelled Howitzer with a 155mm rifled cannon.

    Attendees look at a military vehicle on display. Bloomberg via Getty Images

    But seeing a kid’s happiness at being inside a big thing that goes boom was nothing compared to the grownups’ faces when they got the chance to hold genuine military assault rifles — especially the grownups who had made sure to wear Trump merch during the Army’s birthday party. (Some even handed the rifles to their children for their own photo ops.) It seemed that not even a free Army-branded Bluetooth speaker could compare to how fucking sick the modded AR-15 was. Attendees were in raptures over the Boston Dynamics robot dog gun, the quadcopter drone gun, or really any of the other guns available (except for the historic guns; those were only maybe cool).

    However many protesters made it out to DC, they were dwarfed by thousands of people winding down Constitution Avenue to enter the parade viewing grounds: lots of MAGA heads, lots of foreign tourists, all people who really just like to see big, big tanks. “Angry LOSERS!” they jeered at the protesters. (“Don’t worry about them,” said one cop, “they lost anyways.”) And after walking past them, winding through hundreds of yards of metal fencing, funneling through security, and crossing a choked pedestrian bridge over Constitution Ave, I was finally dumped onto the parade viewing section: slightly muggy and surprisingly navigable.

    But whatever sluggishness the crowd was feeling, it would immediately dissipate the moment a tank turned the corner — and the music started blasting. Americans have a critical weakness for 70s and 80s rock, and this crowd seemed more than willing to look past the questionable origins of the parade so long as the soundtrack had a sick guitar solo. An M1 Abrams tank driving past you while “Barracuda” blasts on a tower of speakers? Badass. Black Hawk helicopters circling the Washington Monument and disappearing behind the African-American history museum, thrashing your head to “Separate Ways” by Journey? Fucking badass. ANOTHER M1 ABRAMS TANK?!?!! AND TO FORTUNATE SON??!?!? “They got me fucking hooked,” a young redheaded man said behind me as the crowd screamed for the waving drivers. (The tank was so badass that the irony of “Fortunate Son” didn’t matter.)

    Members of the U.S. Army drive Bradley Fighting Vehicles in the 250th birthday parade on June 14, 2025 in Washington, DC. Getty Images

    When you listen to the hardest fucking rock soundtrack long enough, and learn more about how fucking sick the Bradley Fighting Vehicles streaming by you are (either from the parade announcer or the tank enthusiast next to you), an animalistic hype takes over you — enough to drown out all the nationwide anger about the parade, the enormity of Trump’s power grab, the fact that two Minnesota Democratic lawmakers were shot in their homes just that morning, the riot police roving the streets of LA.

    It helped that it didn’t rain. It helped that the only people at the parade were the diehards who didn’t care if they were rained out. And by the end of the parade, they didn’t even bother to stay for Trump’s speech, beelining back to the bridge at the first drop of rain. The only thing that mattered to this crowd inside the security perimeter — more than the Army’s honor and history, and barely more than Trump himself — was firepower, strength, hard rock, and America’s unparalleled, world-class ability to kill.
    #tanks #guns #facepainting
    Tanks, guns and face-painting
    www.theverge.com
  • Meta officially ‘acqui-hires’ Scale AI — will it draw regulator scrutiny?

    Meta is looking to up its weakening AI game with a key talent grab.

    Following days of speculation, the social media giant has confirmed that Scale AI’s founder and CEO, Alexandr Wang, is joining Meta to work on its AI efforts.

    Meta will invest $14.3 billion in Scale AI as part of the deal, and will have a 49% stake in the AI startup, which specializes in data labeling and model evaluation services. Other key Scale employees will also move over to Meta, while CSO Jason Droege will step in as Scale’s interim CEO.

    This move comes as the Mark Zuckerberg-led company goes all-in on building a new research lab focused on “superintelligence,” the next step beyond artificial general intelligence (AGI).

    The arrangement also reflects a growing trend in big tech, where industry giants are buying companies without really buying them — what’s increasingly being referred to as “acqui-hiring.” It involves recruiting key personnel from a company, licensing its technology, and selling its products, but leaving it as a private entity.

    “This is fundamentally a massive ‘acqui-hire’ play disguised as a strategic investment,” said Wyatt Mayham, lead AI consultant at Northwest AI Consulting. “While Meta gets Scale’s data infrastructure, the real prize is Wang joining Meta to lead their superintelligence lab. At the $14.3 billion price tag, this might be the most expensive individual talent acquisition in tech history.”

    Closing gaps with competitors

    Meta has struggled to keep up with OpenAI, Anthropic, and other key competitors in the AI race, recently even delaying the launch of its new flagship model, Behemoth, purportedly due to internal concerns about its performance. It has also seen the departure of several of its top researchers.

     “It’s not really a secret at this point that Meta’s Llama 4 models have had significant performance issues,” Mayham said. “Zuck is essentially betting that Wang’s track record building AI infrastructure can solve Meta’s alignment and model quality problems faster than internal development.” And, he added, Scale’s enterprise-grade human feedback loops are exactly what Meta’s Llama models need to compete with ChatGPT and Claude on reliability and task-following.

    Data quality, a key focus for Wang, is a big factor in solving those performance problems. He wrote in a note to Scale employees on Thursday, later posted on X, that when he founded Scale AI in 2016 amidst some of the early AI breakthroughs, “it was clear even then that data was the lifeblood of AI systems, and that was the inspiration behind starting Scale.”
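
    To make the “data quality” point concrete, here is a minimal, purely illustrative Python sketch of the kind of preference-labeled record that human-feedback pipelines rely on. The schema, field names, and the simple agreement metric are invented for this example; they are not Scale’s or Meta’s actual formats.

from collections import Counter
from dataclasses import dataclass

@dataclass
class PreferenceLabel:
    """One human judgment comparing two model responses to the same prompt."""
    prompt: str
    response_a: str
    response_b: str
    preferred: str      # "a" or "b", as chosen by the annotator
    annotator_id: str

def majority_preference(labels: list[PreferenceLabel]) -> tuple[str, float]:
    """Return the majority vote and the fraction of annotators who agreed with it."""
    votes = Counter(label.preferred for label in labels)
    winner, count = votes.most_common(1)[0]
    return winner, count / len(labels)

# Hypothetical example: three annotators compare two answers to the same prompt.
labels = [
    PreferenceLabel("Summarize this contract.", "Short summary...", "Long summary...", "a", "ann-1"),
    PreferenceLabel("Summarize this contract.", "Short summary...", "Long summary...", "a", "ann-2"),
    PreferenceLabel("Summarize this contract.", "Short summary...", "Long summary...", "b", "ann-3"),
]
print(majority_preference(labels))  # ('a', 0.666...): low agreement flags a noisy example

    Aggregate agreement like this is one rough proxy for label quality; real evaluation pipelines are, of course, far more involved.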

    But despite Meta’s huge investment, Scale AI is underscoring its commitment to sovereignty: “Scale remains an independent leader in AI, committed to providing industry-leading AI solutions and safeguarding customer data,” the company wrote in a blog post. “Scale will continue to partner with leading AI labs, multinational enterprises, and governments to deliver expert data and technology solutions through every phase of AI’s evolution.”

    Allowing big tech to side-step notification

    But while it’s only just been inked, the high-profile deal is already raising some eyebrows. According to experts, arrangements like these allow tech companies to acquire top talent and key technologies in a side-stepping manner, thus avoiding regulatory notification requirements.

    The US Federal Trade Commission (FTC) requires mergers and acquisitions totaling more than $126 million to be reported in advance. Licensing deals or the mass hiring-away of a company’s employees don’t have this requirement. This allows companies to move more quickly, as they don’t have to undergo the lengthy federal review process.

    Microsoft’s deal with Inflection AI is probably one of the highest-profile examples of the “acqui-hiring” trend. In March 2024, the tech giant paid the startup $650 million in licensing fees and hired much of its team, including co-founders Mustafa Suleyman (now CEO of Microsoft AI) and Karén Simonyan (chief scientist of Microsoft AI).

    Similarly, last year Amazon hired more than 50% of Adept AI’s key personnel, including its CEO, to focus on AGI. Google also inked a licensing agreement with Character AI and hired a majority of its founders and researchers.

    However, regulators have caught on, with the FTC launching inquiries into both the Microsoft-Inflection and Amazon-Adept deals, and the US Justice Department (DOJ) analyzing Google-Character AI.

    Reflecting ‘desperation’ in the AI industry

    Meta’s decision to go forward with this arrangement anyway, despite that dicey backdrop, seems to indicate how anxious the company is to keep up in the AI race.

    “The most interesting piece of this all is the timing,” said Mayham. “It reflects broader industry desperation. Tech giants are increasingly buying parts of promising AI startups to secure key talent without acquiring full companies, following similar patterns with Microsoft-Inflection and Google-Character AI.”

    However, the regulatory risks are “real but nuanced,” he noted. Meta’s acquisition could face scrutiny from antitrust regulators, particularly as the company is involved in an ongoing FTC lawsuit over its Instagram and WhatsApp acquisitions. While the 49% ownership position appears designed to avoid triggering automatic thresholds, US regulatory bodies like the FTC and DOJ can review minority stake acquisitions under the Clayton Antitrust Act if they seem to threaten competition.

    Perhaps more importantly, Meta is not considered a leader in AGI development and is trailing OpenAI, Anthropic, and Google, meaning regulators may not consider the deal all that concerning (yet).

    All told, the arrangement certainly signals Meta’s recognition that the AI race has shifted from a compute and model size competition to a data quality and alignment battle, Mayham noted.

    “I think the [gist] of this is that Zuck’s biggest bet is that talent and data infrastructure matter more than raw compute power in the AI race,” he said. “The regulatory risk is manageable given Meta’s trailing position, but the acqui-hire premium shows how expensive top AI talent has become.”
    #meta #officially #acquihires #scale #will
    Meta officially ‘acqui-hires’ Scale AI — will it draw regulator scrutiny?
    www.computerworld.com