• What a disappointment! Summer is not a list of boring plans, nor an excuse to do whatever you want if that means falling into total apathy. The post '100 cosas que puedes hacer (o no) en verano' is a perfect example of how time and creativity can be wasted. Some of the ideas are great, yes, but most are simply questionable and reflect a lack of imagination. Life is too short to spend on meaningless activities that only feed tedium. Can't we aspire to more than simply "embracing the heat" and "suffering through boredom"? It's time to set this nonsense aside!
    GRAFFICA.INFO
    100 cosas que puedes hacer (o no) en verano
    Summer is not a list of plans. It's an excuse to do (or not do) whatever you want. Here are 100 ideas for embracing the heat, the boredom, the euphoria, the tedium, and everything that happens between June and September. Some are great. Oth…
  • Hello, wonderful people! Today, I want to take a moment to celebrate the incredible advancements happening in the world of 3D printing, especially highlighted at the recent Paris Air Show!

    What an exciting week it has been for the additive manufacturing industry! The #3DExpress has been buzzing with news, showcasing how innovation and creativity are taking flight together! The Paris Air Show is not just a platform for the latest planes; it’s a stage for groundbreaking technologies that promise to revolutionize our future!

    Imagine a world where designing and producing complex aircraft parts becomes not only efficient but also sustainable! The use of 3D printing is paving the way for a greener future, reducing waste and making manufacturing more accessible than ever before. The possibilities are endless, and it’s invigorating to witness how these technologies can transform entire industries! 💪🏽

    During the show, we saw some amazing demonstrations of 3D printed components that are not only lightweight but also incredibly strong. This is a game-changer for aerospace engineering! Every layer printed brings us closer to smarter, more efficient air travel, and who wouldn’t want to be part of that journey?

    Let’s not forget the talented minds behind these innovations! The engineers, designers, and creators are the true superheroes, pushing boundaries and inspiring the next generation to dream bigger! Their passion and dedication remind us that with hard work and determination, we can reach for the stars!

    If you’ve ever doubted the power of creativity and technology, let this be your reminder: the future is bright, and we have the tools to shape it! So, let’s stay curious, keep pushing forward, and embrace every opportunity that comes our way! Together, we can soar to new heights!

    Let’s keep the conversation going about how #3D printing and additive manufacturing can change our world. What are your thoughts on these incredible innovations? Share your ideas and let’s inspire each other!

    #3DPrinting #Innovation #ParisAirShow #AdditiveManufacturing #FutureOfFlight
    #3DExpress: Additive manufacturing at the Paris Air Show
    What has happened this week in the 3D printing industry? In today's 3DExpress we bring you a quick roundup of the most notable news of the past few days. First up: the Paris Air Show is this…
  • EPFL Researchers Unveil FG2 at CVPR: A New AI Model That Slashes Localization Errors by 28% for Autonomous Vehicles in GPS-Denied Environments

    Navigating the dense urban canyons of cities like San Francisco or New York can be a nightmare for GPS systems. The towering skyscrapers block and reflect satellite signals, leading to location errors of tens of meters. For you and me, that might mean a missed turn. But for an autonomous vehicle or a delivery robot, that level of imprecision is the difference between a successful mission and a costly failure. These machines require pinpoint accuracy to operate safely and efficiently. Addressing this critical challenge, researchers from the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland have introduced a groundbreaking new method for visual localization during CVPR 2025.
    Their new paper, “FG2: Fine-Grained Cross-View Localization by Fine-Grained Feature Matching,” presents a novel AI model that significantly enhances the ability of a ground-level system, like an autonomous car, to determine its exact position and orientation using only a camera and a corresponding aerial (or satellite) image. The new approach has demonstrated a remarkable 28% reduction in mean localization error compared to the previous state-of-the-art on a challenging public dataset.
    Key Takeaways:

    Superior Accuracy: The FG2 model reduces the average localization error by a significant 28% on the VIGOR cross-area test set, a challenging benchmark for this task.
    Human-like Intuition: Instead of relying on abstract descriptors, the model mimics human reasoning by matching fine-grained, semantically consistent features—like curbs, crosswalks, and buildings—between a ground-level photo and an aerial map.
    Enhanced Interpretability: The method allows researchers to “see” what the AI is “thinking” by visualizing exactly which features in the ground and aerial images are being matched, a major step forward from previous “black box” models.
    Weakly Supervised Learning: Remarkably, the model learns these complex and consistent feature matches without any direct labels for correspondences. It achieves this using only the final camera pose as a supervisory signal.
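The weak-supervision takeaway means training needs only a pose label, not point correspondences. A toy loss under that setup might look like the sketch below. This is entirely an assumption for illustration (the paper's exact loss is not described here); note the yaw term must be wrapped to (-π, π] so that angles near ±π compare correctly.

```python
import numpy as np

def pose_loss(pred, gt):
    """Toy 3-DoF pose loss: L1 on (x, y) plus wrapped angular error on yaw.

    pred, gt: arrays of shape (3,) holding (x, y, yaw in radians).
    """
    trans = np.abs(pred[:2] - gt[:2]).sum()
    # Wrap the yaw difference into (-pi, pi] to avoid a spurious 2*pi penalty
    dyaw = np.arctan2(np.sin(pred[2] - gt[2]), np.cos(pred[2] - gt[2]))
    return trans + np.abs(dyaw)
```

With only this scalar signal, gradients flowing back through a differentiable matching pipeline are what shape the feature correspondences.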

    Challenge: Seeing the World from Two Different Angles
    The core problem of cross-view localization is the dramatic difference in perspective between a street-level camera and an overhead satellite view. A building facade seen from the ground looks completely different from its rooftop signature in an aerial image. Existing methods have struggled with this. Some create a general “descriptor” for the entire scene, but this is an abstract approach that doesn’t mirror how humans naturally localize themselves by spotting specific landmarks. Other methods transform the ground image into a Bird’s-Eye-View (BEV) but are often limited to the ground plane, ignoring crucial vertical structures like buildings.

    FG2: Matching Fine-Grained Features
    The EPFL team’s FG2 method introduces a more intuitive and effective process. It aligns two sets of points: one generated from the ground-level image and another sampled from the aerial map.

    Here’s a breakdown of their innovative pipeline:

    Mapping to 3D: The process begins by taking the features from the ground-level image and lifting them into a 3D point cloud centered around the camera. This creates a 3D representation of the immediate environment.
    Smart Pooling to BEV: This is where the magic happens. Instead of simply flattening the 3D data, the model learns to intelligently select the most important features along the vertical (height) dimension for each point. It essentially asks, “For this spot on the map, is the ground-level road marking more important, or is the edge of that building’s roof the better landmark?” This selection process is crucial, as it allows the model to correctly associate features like building facades with their corresponding rooftops in the aerial view.
    Feature Matching and Pose Estimation: Once both the ground and aerial views are represented as 2D point planes with rich feature descriptors, the model computes the similarity between them. It then samples a sparse set of the most confident matches and uses a classic geometric algorithm called Procrustes alignment to calculate the precise 3-DoF (x, y, and yaw) pose.
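The "smart pooling" step above can be sketched as a softmax-weighted selection along the height axis. This is not the authors' code: the tensor shapes and the function name `pool_to_bev` are assumptions, and in the real model the per-height scores come from learned network features rather than placeholder logits. The sketch only shows the core idea of selecting along the vertical dimension instead of flattening it.

```python
import numpy as np

def pool_to_bev(feats, height_logits):
    """Collapse a 3D feature volume to a Bird's-Eye-View plane by
    softmax-weighted pooling along the vertical (height) axis.

    feats:         (H, W, Z, C) features lifted into a 3D grid
    height_logits: (H, W, Z)    scores saying which height level
                                matters most at each map location
    returns:       (H, W, C)    BEV feature plane
    """
    # Softmax over the height axis: the model "chooses" how much each
    # height level (road marking vs. roof edge) contributes per cell.
    w = np.exp(height_logits - height_logits.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return (feats * w[..., None]).sum(axis=2)
```

With uniform logits this reduces to plain average pooling; a trained network would predict sharp logits that pick out the single most discriminative height at each cell.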

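The final pose-estimation step relies on Procrustes alignment, which has a standard closed-form solution via SVD (the Kabsch/Procrustes algorithm). Below is a minimal, unweighted 2D version for illustration; the function name and interface are invented here, and the paper's pipeline operates on confidence-weighted matches inside a differentiable model.

```python
import numpy as np

def procrustes_2d(ground_pts, aerial_pts):
    """Closed-form rigid 2D alignment (Kabsch/Procrustes via SVD).

    Given matched point pairs of shape (n, 2), returns the rotation R,
    translation t, and yaw angle such that aerial ≈ ground @ R.T + t,
    i.e. the 3-DoF pose of the ground points in the aerial map frame.
    """
    mu_g = ground_pts.mean(axis=0)
    mu_a = aerial_pts.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (ground_pts - mu_g).T @ (aerial_pts - mu_a)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution (det = -1)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_a - R @ mu_g
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return R, t, yaw
```

Because the solution is a few linear-algebra operations, it is cheap enough to run inside a training loop, which is part of what makes pose-only supervision practical.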
    Unprecedented Performance and Interpretability
    The results speak for themselves. On the challenging VIGOR dataset, which includes images from different cities in its cross-area test, FG2 reduced the mean localization error by 28% compared to the previous best method. It also demonstrated superior generalization capabilities on the KITTI dataset, a staple in autonomous driving research.

    Perhaps more importantly, the FG2 model offers a new level of transparency. By visualizing the matched points, the researchers showed that the model learns semantically consistent correspondences without being explicitly told to. For example, the system correctly matches zebra crossings, road markings, and even building facades in the ground view to their corresponding locations on the aerial map. This interpretability is extremely valuable for building trust in safety-critical autonomous systems.
    “A Clearer Path” for Autonomous Navigation
    The FG2 method represents a significant leap forward in fine-grained visual localization. By developing a model that intelligently selects and matches features in a way that mirrors human intuition, the EPFL researchers have not only shattered previous accuracy records but also made the decision-making process of the AI more interpretable. This work paves the way for more robust and reliable navigation systems for autonomous vehicles, drones, and robots, bringing us one step closer to a future where machines can confidently navigate our world, even when GPS fails them.

    Check out the Paper. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don’t forget to join our 100k+ ML SubReddit and Subscribe to our Newsletter.
    Jean-marc Mommessin is a successful AI business executive. He leads and accelerates growth for AI-powered solutions and started a computer vision company in 2006. He is a recognized speaker at AI conferences and has an MBA from Stanford.
    WWW.MARKTECHPOST.COM
  • Inside Mark Zuckerberg’s AI hiring spree

    AI researchers have recently been asking themselves a version of the question, “Is that really Zuck?” As first reported by Bloomberg, the Meta CEO has been personally asking top AI talent to join his new “superintelligence” AI lab and reboot Llama. His recruiting process typically goes like this: a cold outreach via email or WhatsApp that cites the recruit’s work history and requests a 15-minute chat. Dozens of researchers have gotten these kinds of messages at Google alone. For those who do agree to hear his pitch, Zuckerberg highlights the latitude they’ll have to make risky bets, the scale of Meta’s products, and the money he’s prepared to invest in the infrastructure to support them. He makes clear that this new team will be empowered and sit with him at Meta’s headquarters, where I’m told the desks have already been rearranged for the incoming team.

    Most of the headlines so far have focused on the eye-popping compensation packages Zuckerberg is offering, some of which are well into the eight-figure range. As I’ve covered before, hiring the best AI researcher is like hiring a star basketball player: there are very few of them, and you have to pay up. Case in point: Zuckerberg basically just paid 14 Instagrams to hire away Scale AI CEO Alexandr Wang. It’s easily the most expensive hire of all time, dwarfing the billions that Google spent to rehire Noam Shazeer and his core team from Character.AI. “Opportunities of this magnitude often come at a cost,” Wang wrote in his note to employees this week. “In this instance, that cost is my departure.”

    Zuckerberg’s recruiting spree is already starting to rattle his competitors. The day before his offer deadline for some senior OpenAI employees, Sam Altman dropped an essay proclaiming that “before anything else, we are a superintelligence research company.” And after Zuckerberg tried to hire DeepMind CTO Koray Kavukcuoglu, he was given a larger SVP title and now reports directly to Google CEO Sundar Pichai.
    I expect Wang to have the title of “chief AI officer” at Meta when the new lab is announced. Jack Rae, a principal researcher from DeepMind who has signed on, will lead pre-training. Meta certainly needs a reset. According to my sources, Llama has fallen so far behind that Meta’s product teams have recently discussed using AI models from other companies. Meta’s internal coding tool for engineers, however, is already using Claude.

    While Meta’s existing AI researchers have good reason to be looking over their shoulders, Zuckerberg’s billion investment in Scale is making many longtime employees, or Scaliens, quite wealthy. They were popping champagne in the office this morning. Then, Wang held his last all-hands meeting to say goodbye and cried. He didn’t mention what he would be doing at Meta. I expect his new team will be unveiled within the next few weeks after Zuckerberg gets a critical number of members to officially sign on.

    Tim Cook. Getty Images / The Verge

    Apple’s AI problem

    Apple is accustomed to being on top of the tech industry, and for good reason: the company has enjoyed a nearly unrivaled run of dominance. After spending time at Apple HQ this week for WWDC, I’m not sure that its leaders appreciate the meteorite that is heading their way. The hubris they display suggests they don’t understand how AI is fundamentally changing how people use and build software.

    Heading into the keynote on Monday, everyone knew not to expect the revamped Siri that had been promised the previous year. Apple, to its credit, acknowledged that it dropped the ball there, and it sounds like a large language model rebuild of Siri is very much underway and coming in 2026.

    The AI industry moves much faster than Apple’s release schedule, though. By the time Siri is perhaps good enough to keep pace, it will have to contend with the lock-in that OpenAI and others are building through their memory features.
    Apple and OpenAI are currently partners, but both companies want to ultimately control the interface for interacting with AI, which puts them on a collision course. Apple’s decision to let developers use its own, on-device foundational models for free in their apps sounds strategically smart, but unfortunately, the models look far from leading. Apple ran its own benchmarks, which aren’t impressive, and has confirmed a measly context window of 4,096 tokens. It’s also saying that the models will be updated alongside its operating systems — a snail’s pace compared to how quickly AI companies move. I’d be surprised if any serious developers use these Apple models, although I can see them being helpful to indie devs who are just getting started and don’t want to spend on the leading cloud models. I don’t think most people care about the privacy angle that Apple is claiming as a differentiator; they are already sharing their darkest secrets with ChatGPT and other assistants.

    Some of the new Apple Intelligence features I demoed this week were impressive, such as live language translation for calls. Mostly, I came away with the impression that the company is heavily leaning on its ChatGPT partnership as a stopgap until Apple Intelligence and Siri are both where they need to be.

    AI probably isn’t a near-term risk to Apple’s business. No one has shipped anything close to the contextually aware Siri that was demoed at last year’s WWDC. People will continue to buy Apple hardware for a long time, even after Sam Altman and Jony Ive announce their first AI device for ChatGPT next year. AR glasses aren’t going mainstream anytime soon either, although we can expect to see more eyewear from Meta, Google, and Snap over the coming year. In aggregate, these AI-powered devices could begin to siphon away engagement from the iPhone, but I don’t see people fully replacing their smartphones for a long time.
The bigger question after this week is whether Apple has what it takes to rise to the occasion and culturally reset itself for the AI era. I would have loved to hear Tim Cook address this issue directly, but the only interview he did for WWDC was a cover story in Variety about the company’s new F1 movie.ElsewhereAI agents are coming. I recently caught up with Databricks CEO Ali Ghodsi ahead of his company’s annual developer conference this week in San Francisco. Given Databricks’ position, he has a unique, bird’s-eye view of where things are headed for AI. He doesn’t envision a near-term future where AI agents completely automate real-world tasks, but he does predict a wave of startups over the next year that will come close to completing actions in areas such as travel booking. He thinks humans will needto approve what an agent does before it goes off and completes a task. “We have most of the airplanes flying automated, and we still want pilots in there.”Buyouts are the new normal at Google. That much is clear after this week’s rollout of the “voluntary exit program” in core engineering, the Search organization, and some other divisions. In his internal memo, Search SVP Nick Fox was clear that management thinks buyouts have been successful in other parts of the company that have tried them. In a separate memo I saw, engineering exec Jen Fitzpatrick called the buyouts an “opportunity to create internal mobility and fresh growth opportunities.” Google appears to be attempting a cultural reset, which will be a challenging task for a company of its size. We’ll see if it can pull it off. Evan Spiegel wants help with AR glasses. I doubt that his announcement that consumer glasses are coming next year was solely aimed at AR developers. Telegraphing the plan and announcing that Snap has spent billion on hardware to date feels more aimed at potential partners that want to make a bigger glasses play, such as Google. 
A strategic investment could help insulate Snap from the pain of the stock market. A full acquisition may not be off the table, either. When he was recently asked if he’d be open to a sale, Spiegel didn’t shut it down like he always has, but instead said he’d “consider anything” that helps the company “create the next computing platform.”Link listMore to click on:If you haven’t already, don’t forget to subscribe to The Verge, which includes unlimited access to Command Line and all of our reporting.As always, I welcome your feedback, especially if you’re an AI researcher fielding a juicy job offer. You can respond here or ping me securely on Signal.Thanks for subscribing.See More:
    Inside Mark Zuckerberg’s AI hiring spree
    AI researchers have recently been asking themselves a version of the question, “Is that really Zuck?” As first reported by Bloomberg, the Meta CEO has been personally asking top AI talent to join his new “superintelligence” AI lab and reboot Llama. His recruiting process typically goes like this: a cold outreach via email or WhatsApp that cites the recruit’s work history and requests a 15-minute chat. Dozens of researchers have gotten these kinds of messages at Google alone. For those who do agree to hear his pitch (amazingly, not all of them do), Zuckerberg highlights the latitude they’ll have to make risky bets, the scale of Meta’s products, and the money he’s prepared to invest in the infrastructure to support them. He makes clear that this new team will be empowered and sit with him at Meta’s headquarters, where I’m told the desks have already been rearranged for the incoming team.
    Most of the headlines so far have focused on the eye-popping compensation packages Zuckerberg is offering, some of which are well into the eight-figure range. As I’ve covered before, hiring the best AI researcher is like hiring a star basketball player: there are very few of them, and you have to pay up. Case in point: Zuckerberg basically just paid 14 Instagrams to hire away Scale AI CEO Alexandr Wang. It’s easily the most expensive hire of all time, dwarfing the billions that Google spent to rehire Noam Shazeer and his core team from Character.AI (a deal Zuckerberg passed on). “Opportunities of this magnitude often come at a cost,” Wang wrote in his note to employees this week. “In this instance, that cost is my departure.”
    Zuckerberg’s recruiting spree is already starting to rattle his competitors. The day before his offer deadline for some senior OpenAI employees, Sam Altman dropped an essay proclaiming that “before anything else, we are a superintelligence research company.” And after Zuckerberg tried to hire DeepMind CTO Koray Kavukcuoglu, he was given a larger SVP title and now reports directly to Google CEO Sundar Pichai.
    I expect Wang to have the title of “chief AI officer” at Meta when the new lab is announced. Jack Rae, a principal researcher from DeepMind who has signed on, will lead pre-training. Meta certainly needs a reset. According to my sources, Llama has fallen so far behind that Meta’s product teams have recently discussed using AI models from other companies (although that is highly unlikely to happen). Meta’s internal coding tool for engineers, however, is already using Claude. While Meta’s existing AI researchers have good reason to be looking over their shoulders, Zuckerberg’s $14.3 billion investment in Scale is making many longtime employees, or Scaliens, quite wealthy. They were popping champagne in the office this morning. Then, Wang held his last all-hands meeting to say goodbye and cried. He didn’t mention what he would be doing at Meta. I expect his new team will be unveiled within the next few weeks after Zuckerberg gets a critical number of members to officially sign on.
    Tim Cook. Getty Images / The Verge
    Apple’s AI problem
    Apple is accustomed to being on top of the tech industry, and for good reason: the company has enjoyed a nearly unrivaled run of dominance. After spending time at Apple HQ this week for WWDC, I’m not sure that its leaders appreciate the meteorite that is heading their way. The hubris they display suggests they don’t understand how AI is fundamentally changing how people use and build software.
    Heading into the keynote on Monday, everyone knew not to expect the revamped Siri that had been promised the previous year. Apple, to its credit, acknowledged that it dropped the ball there, and it sounds like a large language model rebuild of Siri is very much underway and coming in 2026. The AI industry moves much faster than Apple’s release schedule, though. By the time Siri is perhaps good enough to keep pace, it will have to contend with the lock-in that OpenAI and others are building through their memory features. Apple and OpenAI are currently partners, but both companies ultimately want to control the interface for interacting with AI, which puts them on a collision course.
    Apple’s decision to let developers use its own on-device foundation models for free in their apps sounds strategically smart, but unfortunately, the models look far from leading. Apple ran its own benchmarks, which aren’t impressive, and has confirmed a measly context window of 4,096 tokens. It’s also saying that the models will be updated alongside its operating systems, a snail’s pace compared to how quickly AI companies move. I’d be surprised if any serious developers use these Apple models, although I can see them being helpful to indie devs who are just getting started and don’t want to spend on the leading cloud models. I don’t think most people care about the privacy angle that Apple is claiming as a differentiator; they are already sharing their darkest secrets with ChatGPT and other assistants. Some of the new Apple Intelligence features I demoed this week were impressive, such as live language translation for calls. Mostly, I came away with the impression that the company is heavily leaning on its ChatGPT partnership as a stopgap until Apple Intelligence and Siri are both where they need to be.
    AI probably isn’t a near-term risk to Apple’s business. No one has shipped anything close to the contextually aware Siri that was demoed at last year’s WWDC. People will continue to buy Apple hardware for a long time, even after Sam Altman and Jony Ive announce their first AI device for ChatGPT next year. AR glasses aren’t going mainstream anytime soon either, although we can expect to see more eyewear from Meta, Google, and Snap over the coming year. In aggregate, these AI-powered devices could begin to siphon away engagement from the iPhone, but I don’t see people fully replacing their smartphones for a long time. The bigger question after this week is whether Apple has what it takes to rise to the occasion and culturally reset itself for the AI era. I would have loved to hear Tim Cook address this issue directly, but the only interview he did for WWDC was a cover story in Variety about the company’s new F1 movie.
    Elsewhere
    AI agents are coming. I recently caught up with Databricks CEO Ali Ghodsi ahead of his company’s annual developer conference this week in San Francisco. Given Databricks’ position, he has a unique, bird’s-eye view of where things are headed for AI. He doesn’t envision a near-term future where AI agents completely automate real-world tasks, but he does predict a wave of startups over the next year that will come close to completing actions in areas such as travel booking. He thinks humans will need (and want) to approve what an agent does before it goes off and completes a task. “We have most of the airplanes flying automated, and we still want pilots in there.”
    Buyouts are the new normal at Google. That much is clear after this week’s rollout of the “voluntary exit program” in core engineering, the Search organization, and some other divisions. In his internal memo, Search SVP Nick Fox was clear that management thinks buyouts have been successful in other parts of the company that have tried them. In a separate memo I saw, engineering exec Jen Fitzpatrick called the buyouts an “opportunity to create internal mobility and fresh growth opportunities.” Google appears to be attempting a cultural reset, which will be a challenging task for a company of its size. We’ll see if it can pull it off.
    Evan Spiegel wants help with AR glasses. I doubt that his announcement that consumer glasses are coming next year was solely aimed at AR developers. Telegraphing the plan and announcing that Snap has spent $3 billion on hardware to date feels more aimed at potential partners that want to make a bigger glasses play, such as Google. A strategic investment could help insulate Snap from the pain of the stock market. A full acquisition may not be off the table, either. When he was recently asked if he’d be open to a sale, Spiegel didn’t shut it down like he always has, but instead said he’d “consider anything” that helps the company “create the next computing platform.”
    Link list
    More to click on: If you haven’t already, don’t forget to subscribe to The Verge, which includes unlimited access to Command Line and all of our reporting. As always, I welcome your feedback, especially if you’re an AI researcher fielding a juicy job offer. You can respond here or ping me securely on Signal. Thanks for subscribing.
    WWW.THEVERGE.COM
  • VFXShow 296: Mission: Impossible – The Final Reckoning

    Ethan Hunt and the IMF team race against time to find a rogue artificial intelligence that can destroy mankind.
    AI, IMF & VFX: A Mission Worth Rendering
    In the latest episode of The VFXShow podcast, hosts Matt Wallin, Jason Diamond, and Mike Seymour reunite to dissect the spectacle, story, and seamless visual effects of Mission: Impossible – The Final Reckoning.
    As the eighth entry in the franchise, this chapter serves as a high-stakes, high-altitude crescendo to Tom Cruise’s nearly 30-year run as Ethan Hunt, the relentless agent of the Impossible Mission Force.
    Cruise Control: When Practical Meets Pixel
    While the narrative revolves around the existential threat of a rogue AI known as The Entity, the real heart of the film lies in its bold commitment to visceral, real-world action. The VFX team discusses how Cruise’s ongoing devotion to doing his own death-defying stunts, from leaping between biplanes to diving into the wreckage of a sunken submarine, paradoxically increases the importance of invisible VFX. From seamless digital stitching to background replacements and subtle physics enhancements, the effects work had to serve the story without ever betraying the sense of raw, in-camera danger.
    Matt, Jason, and Mike explore how VFX in this film plays a critical supporting role, cleaning up stunts, compositing dangerous sequences, and selling the illusion of globe-spanning chaos.
    Whether it’s simulating the collapse of a Cold War-era submarine, managing intricate water dynamics in Ethan’s deep-sea dive, or integrating AI-driven visualisations of nuclear catastrophe, the film leans heavily on sophisticated post work to make Cruise’s practical stunts feel even more grounded and believable.
    The team also reflects on the thematic evolution of the franchise. While the plot may twist through layers of espionage, betrayal, and digital apocalypse, including face-offs with Gabriel, doomsday cults, and geopolitical brinkmanship, it is not the team’s favourite MI film. And yet, they note, even as the story veers into sci-fi territory with sentient algorithms and bunker-bound AI traps, the VFX never overshadows the tactile performance at the film’s centre.
    Falling, Flying, Faking It Beautifully
    For fans of the franchise, visual effects, or just adrenaline-fueled cinema, this episode offers a thoughtful cinematic critique on how modern VFX artistry and old-school stuntwork can coexist to save a film that has lost its driving narrative direction.
    This week in our lineup is:
    Matt Wallin * @mattwallin www.mattwallin.com
    Follow Matt on Mastodon: @[email protected]
    Jason Diamond @jasondiamond www.thediamondbros.com
    Mike Seymour @mikeseymour www.fxguide.com + @mikeseymour
    Special thanks to Matt Wallin for the editing & production of the show with help from Jim Shen.
    WWW.FXGUIDE.COM
  • Photos show the tanks, planes, and soldiers featured in the US Army's 250th anniversary parade held on Trump's birthday

    President Donald Trump has long expressed interest in holding a military parade. He finally got one for his birthday. The US Army celebrated its 250th anniversary on Saturday in Washington, DC, with a parade featuring 6,600 troops, 150 vehicles, and over 50 aircraft. June 14 also marked Trump's 79th birthday. Trump attended the event accompanied by first lady Melania Trump and other family members. The president stood to salute troops as they passed his viewing box. In May, a US Army spokesperson told CNBC that the event could cost between million and million in total.

    Prior to the parade, the National Mall was lined with displays of tanks, planes, cannons, and other weaponry to educate onlookers about the US Army's history and modern capabilities.

    A tank is on display on the National Mall ahead of the Army's 250th anniversary parade.

    Amid Farahi/AFP via Getty Images

    The US Army also held a fitness competition where service members competed against one another in various drills.

    A member of the military climbed a rope during a fitness competition at the US Army's 250th Anniversary festival in Washington, DC.

    Anna Moneymaker/Getty Images

    Anti-Trump "No Kings" counterprotests, organized by the grassroots group 50501, were held nationwide ahead of the parade.

    A "No Kings" protest in Los Angeles.

    Aude Guerrucci/REUTERS

    Protest signs across the country condemned Trump's policies and expressed support for progressive causes.

    A "No Kings" protest in New York City.

    Eduardo Munoz/REUTERS

    President Donald Trump attended the parade with first lady Melania Trump. Vice President JD Vance and second lady Usha Vance were also present.

    Donald Trump and Melania Trump at the Army 250th Anniversary Parade.

    DOUG MILLS/POOL/AFP via Getty Images

    The Trump family members in attendance included Donald Trump Jr. and girlfriend Bettina Anderson, Eric and Lara Trump, and Tiffany Trump's husband, Michael Boulos.

    President Donald Trump, first lady Melania Trump, and other Trump family members and White House officials at the US Army's 250th anniversary parade.

    Mandel NGAN/AFP via Getty Images

    The parade featured service members dressed in historic uniforms dating back to the Revolutionary War, honoring the origins of the US Army.

    US military service members in Revolutionary War uniforms marched along Constitution Avenue during the Army's 250th anniversary parade in Washington, DC.

    Amid Farahi/AFP via Getty Images

    Historic tanks such as the Sherman tank used in World War II rolled through the streets.

    Members of the US Army drive a Sherman tank in the US Army's 250th anniversary parade in Washington, DC.

    Samuel Corum/Getty Images

    The parade also featured more modern tanks such as M2 Bradley Fighting Vehicles, which the US used in the Iraq War and provided to Ukraine amid the ongoing war with Russia.

    An M2 Bradley Fighting Vehicle rolls down Constitution Avenue during the Army's 250th Anniversary Parade in Washington, DC.

    Amid Farahi/AFP via Getty Images

    Service members driving the vehicles waved and gestured at the crowds, who braved rainy weather to watch the festivities.

    Members of the US Army drive a Bradley Fighting Vehicle in the 250th anniversary parade.

    Andrew Harnik/Getty Images

    The Golden Knights, the US Army's parachute demonstration and competition team, leapt from planes and landed in front of the White House during the parade.

    A member of the Golden Knights during the US Army's 250th anniversary parade.

    Mandel Ngan/AFP

    Lines of uniformed service members stretched all the way down Constitution Avenue.

    Members of the US Army march in the 250th anniversary parade in Washington, DC.

    Kevin Dietsch/Getty Images

    B-25 and P-51 planes performed flyovers despite foggy skies.

    A US Army B-25 and two P-51s performed a flyover during the Army's 250th Anniversary Parade in Washington, DC.

    Oliver Contreras/AFP via Getty Images

    Army helicopters flew in formation over the National Mall.

    A girl waved at a formation of helicopters during the Army's 250th Anniversary Parade.

    Matthew Hatcher/AFP via Getty Images

    After the parade, the night ended with fireworks to celebrate the US Army's 250th birthday and Trump's 79th.

    Donald Trump and Melania Trump watch fireworks in Washington, DC, after the US Army's 250th anniversary parade.

    Doug Mills/Pool/Getty Images
    WWW.BUSINESSINSIDER.COM
    Photos show the tanks, planes, and soldiers featured in the US Army's 250th anniversary parade held on Trump's birthday
  • Casa Morena by Mário Martins Atelier: Architectural Dialogue with Nature

    Casa Morena | © Fernando Guerra / FG+SG
    In the coastal enclave of Lagos, Portugal, Mário Martins Atelier has crafted Casa Morena. This residence quietly asserts itself as an ode to the dialogue between architecture and its natural setting. Completed in 2024, this project demonstrates a considered response to its environment, where the interplay of light, material, and landscape defines a sense of place rather than architectural imposition.

    Casa Morena Technical Information

    Architects: Mário Martins Atelier
    Location: Lagos, Portugal
    Project Year: 2024
    Photographs: © Fernando Guerra / FG+SG

    A simple house, one that wishes to be discreet and to be influenced by its location, to become a house that is pleasant with thoughtful landscaping.
    – Mário Martins Atelier

    Casa Morena Photographs

    All photographs © Fernando Guerra / FG+SG
    A Contextual Response to Landscape and Light
    The design of Casa Morena finds its genesis in the site itself, a pine-scented plot overlooking the expanse of a bay. The pine trees, longstanding witnesses to the landscape’s evolution, provide the project’s visual anchor and spatial logic. In a move that both respects and celebrates these natural elements, Mário Martins Atelier structured the house’s reticulated plan to echo the presence of the trees, creating a composition that unfolds as a series of volumes harmonizing with the vertical rhythm of the trunks.
    The solid base of the house, built from locally sourced schist, emerges directly from the terrain. These robust walls establish a tactile continuity with the ground, their rough textures anchoring the architecture within the landscape. In contrast, the upper volumes of the house adopt a distinctly lighter expression: horizontal planes rendered in white plaster, their smooth surfaces catching and refracting the region’s luminous sun. This duality of earthbound solidity and aerial lightness establishes an architectural narrative rooted in the elemental.
    Casa Morena Experiential Flow
    Casa Morena’s spatial arrangement articulates a clear hierarchy of public and private domains. On the ground floor, the house embraces openness and transparency. An expansive entrance hall blurs the threshold between inside and out, guiding inhabitants and visitors into a luminous social heart. The lounge, kitchen, and office flow seamlessly into the garden, unified by a continuous glazed façade that invites the outside in.
    This deliberate porosity extends to a covered terrace, an intermediary space that dissolves the boundary between shelter and exposure. The terrace, framed by the garden’s green canopy and the swimming pool’s long line, becomes a place of repose and contemplation. The pool itself demarcates the transition from a cultivated garden to the looser, more rugged landscape beyond, its linear form echoing the horizon’s expanse.
    Ascending to the upper floor, the architectural language shifts towards intimacy. The bedrooms, each with direct access to terraces and patios, create secluded zones that still maintain a fluid relationship with the outdoors. A discreet rooftop terrace, accessible from these private quarters, offers a hidden sanctuary where the interplay of views and light remains uninterrupted.
    Material Tectonics and Environmental Strategy
    Casa Morena’s material palette is rooted in regional specificity and tactile sensibility. Schist, extracted from the site, is not merely a structural element but a narrative thread linking the building to its geological past. Its earthy warmth and rugged surface provide a counterpoint to the luminous white of the upper volumes, an articulation of contrast that enlivens the building’s silhouette.
    White, the chromatic signature of the Algarve region, is employed with restraint and nuance. Its reflective qualities intensify the play of shadow and light, a dynamic that shifts with the passing of the day. In this interplay, architecture becomes an instrument for registering the ephemeral, and the environment itself becomes a participant in the spatial drama.
    Environmental stewardship is also woven into the project’s DNA. Discreetly integrated systems on the roof harness solar energy and manage water resources, extending the house’s commitment to a sustainable coexistence with its setting.
    Casa Morena Plans

    Basement | © Mario Martins Atelier

    Ground Level | © Mario Martins Atelier

    Upper Level | © Mario Martins Atelier

    Roof Plan | © Mario Martins Atelier

    Elevations | © Mario Martins Atelier
    Casa Morena Image Gallery

    About Mário Martins Atelier
    Mário Martins Atelier is an architectural studio based in Lagos and Lisbon, Portugal, led by Mário Martins. The practice is known for its context-sensitive approach, crafting contemporary projects that integrate seamlessly with their surroundings while prioritizing regional materials and environmental considerations.
    Credits and Additional Notes

    Lead Architect: Mário Martins, arq.
    Project Team: Nuno Colaço, Sónia Fialho, Susana Jóia, Mariana Franco, Ana Graça
    Engineering: Nuno Grave Engenharia
    Landscape: HB-Hipolito Bettencourt – Arquitectura Paisagista, Lda.
    Building Contractor: Marques Antunes Engenharia Lda.
    ARCHEYES.COM
    Casa Morena by Mário Martins Atelier: Architectural Dialogue with Nature
  • UMass and MIT Test Cold Spray 3D Printing to Repair Aging Massachusetts Bridge

    Researchers from the US-based University of Massachusetts Amherst, in collaboration with the Massachusetts Institute of Technology's Department of Mechanical Engineering, have applied cold spray to repair the deteriorating “Brown Bridge” in Great Barrington, built in 1949. The project marks the first known use of this method on bridge infrastructure and aims to evaluate its effectiveness as a faster, more cost-effective, and less disruptive alternative to conventional repair techniques.
    “Now that we’ve completed this proof-of-concept repair, we see a clear path to a solution that is much faster, less costly, easier, and less invasive,” said Simos Gerasimidis, associate professor of civil and environmental engineering at the University of Massachusetts Amherst. “To our knowledge, this is a first. Of course, there is some R&D that needs to be developed, but this is a huge milestone to that,” he added.
    The pilot project is also a collaboration with the Massachusetts Department of Transportation, the Massachusetts Technology Collaborative, the U.S. Department of Transportation, and the Federal Highway Administration. It was supported by the Massachusetts Manufacturing Innovation Initiative, which provided essential equipment for the demonstration.
    Members of the UMass Amherst and MIT Department of Mechanical Engineering research team, led by Simos Gerasimidis. Photo via UMass Amherst.
    Tackling America’s Bridge Crisis with Cold Spray Technology
    Nearly half of the bridges across the United States are in “fair” condition, while 6.8% are classified as “poor,” according to the 2025 Report Card for America’s Infrastructure. In Massachusetts, about 9% of the state’s 5,295 bridges are considered structurally deficient. The costs of restoring this infrastructure are projected to run into the billions of dollars, well beyond current funding levels.
    The cold spray method consists of propelling metal powder particles at high velocity onto the beam’s surface. Successive applications build up additional layers, helping restore its thickness and structural integrity. This method has successfully been used to repair large structures such as submarines, airplanes, and ships, but this marks the first instance of its application to a bridge.
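    The layer build-up lends itself to a quick arithmetic sketch. The article gives no process parameters, so the thicknesses and per-pass deposit below are invented for illustration; the point is only that the number of passes scales with section loss divided by deposit per pass, rounded up:

```python
import math

def passes_needed(nominal_mm: float, measured_mm: float, per_pass_mm: float) -> int:
    """Spray passes required to build a corroded section back to nominal thickness.

    All numbers are illustrative; real cold-spray deposition rates depend on
    powder, carrier gas pressure, and traverse speed.
    """
    loss = max(0.0, nominal_mm - measured_mm)  # section loss from corrosion
    return math.ceil(loss / per_pass_mm)       # partial passes still count

# A beam web nominally 10 mm thick, corroded down to 7.5 mm,
# with a hypothetical 0.5 mm deposit per pass:
print(passes_needed(10.0, 7.5, 0.5))  # 5
```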
    One of cold spray’s key advantages is its ability to be deployed with minimal traffic disruption. “Every time you do repairs on a bridge you have to block traffic, you have to make traffic controls for substantial amounts of time,” explained Gerasimidis. “This will allow us to [work] on this actual bridge while cars are going.”
    To enhance precision, the research team integrated 3D LiDAR scanning technology into the process. Unlike visual inspections, which can be subjective and time-consuming, LiDAR creates high-resolution digital models that pinpoint areas of corrosion. This allows teams to develop targeted repair plans and deposit materials only where needed—reducing waste and potentially extending a bridge’s lifespan.
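    The article does not describe the team's software, but the targeting idea can be sketched as thresholding a thickness map derived from the scan. The grid, thickness values, and 20% loss threshold below are assumptions for illustration, not details from the project:

```python
# Hypothetical post-processing of a LiDAR scan: each grid cell of the beam
# surface carries a measured remaining thickness, and only cells that have
# lost more than a chosen fraction of nominal thickness get flagged for
# deposition -- material goes only where it is needed.
NOMINAL_MM = 10.0
LOSS_THRESHOLD = 0.20  # flag cells with more than 20% section loss (assumed)

# (x, y) grid position -> measured remaining thickness in mm
scan = {
    (0, 0): 9.8, (0, 1): 7.6,
    (1, 0): 9.9, (1, 1): 6.9,
}

# Repair plan: material to deposit per flagged cell, in mm
repair_plan = {
    cell: round(NOMINAL_MM - t, 2)
    for cell, t in scan.items()
    if (NOMINAL_MM - t) / NOMINAL_MM > LOSS_THRESHOLD
}
print(repair_plan)  # only the two heavily corroded cells are flagged
```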
    Next steps: Testing Cold-Sprayed Repairs
    The bridge is scheduled for demolition in the coming years. When that happens, researchers will retrieve the repaired sections for further analysis. They plan to assess the durability, corrosion resistance, and mechanical performance of the cold-sprayed steel in real-world conditions, comparing it to results from laboratory tests.
    “This is a tremendous collaboration where cutting-edge technology is brought to address a critical need for infrastructure in the commonwealth and across the United States,” said John Hart, Class of 1922 Professor in the Department of Mechanical Engineering at MIT. “I think we’re just at the beginning of a digital transformation of bridge inspection, repair and maintenance, among many other important use cases.”
    3D Printing for Infrastructure Repairs
    Beyond cold spray techniques, other innovative 3D printing methods are emerging to address construction repair challenges. For example, researchers at University College London have developed an asphalt 3D printer specifically designed to repair road cracks and potholes. “The material properties of 3D printed asphalt are tunable, and combined with the flexibility and efficiency of the printing platform, this technique offers a compelling new design approach to the maintenance of infrastructure,” the UCL team explained.
    Similarly, in 2018, Cintec, a Wales-based international structural engineering firm, contributed to restoring the historic Government building known as the Red House in the Republic of Trinidad and Tobago. This project, managed by Cintec’s North American branch, marked the first use of additive manufacturing within sacrificial structures. It also featured the installation of what are claimed to be the longest reinforcement anchors ever inserted into a structure—measuring an impressive 36.52 meters.
    Join our Additive Manufacturing Advantage event on July 10th, where AM leaders from Aerospace, Space, and Defense come together to share mission-critical insights. Online and free to attend. Secure your spot now.
    Who won the 2024 3D Printing Industry Awards?
    Subscribe to the 3D Printing Industry newsletter to keep up with the latest 3D printing news.
    You can also follow us on LinkedIn, and subscribe to the 3D Printing Industry YouTube channel to access more exclusive content.
    Featured image shows members of the UMass Amherst and MIT Department of Mechanical Engineering research team, led by Simos Gerasimidis. Photo via UMass Amherst.
    3DPRINTINGINDUSTRY.COM