• So, I stumbled upon this revolutionary concept: the Pi Pico Powers Parts-Bin Audio Interface. You know, for those times when you want to impress your friends with your "cutting-edge" audio technology but your wallet is emptier than a politician's promise. Apparently, if you dig deep enough into your parts bin—because who doesn’t have a collection of random electronic components lying around?—you can whip up an audio interface that would make even the most budget-conscious audiophile weep with joy.

    Let’s be real for a moment. The idea of “USB audio is great” is like saying “water is wet.” Sure, it’s true, but it’s not exactly breaking news. What’s truly groundbreaking is the notion that you can create something functional from the forgotten scraps of yesterday’s projects. It’s like a DIY episode of “Chopped” but for tech nerds. “Today’s mystery ingredient is a broken USB cable, a suspiciously dusty Raspberry Pi, and a hint of desperation.”

    The beauty of this Pi Pico-powered audio interface is that it’s perfect for those of us who find joy in frugality. Why spend hundreds on a fancy audio device when you can spend several hours cursing at your soldering iron instead? Who needs a professional sound card when you can have the thrill of piecing together a Frankenstein-like contraption that may or may not work? The suspense alone is worth the price of admission!

    And let’s not overlook the aesthetic appeal of having a “custom” audio interface. Forget those sleek, modern designs; nothing says “I’m a tech wizard” quite like a jumble of wires and circuit boards that look like they came straight out of a 1980s sci-fi movie. Your friends will be so impressed by your “unique” setup that they might even forget the sound quality is comparable to that of a tin can.

    Of course, if you’re one of those people who doesn’t have a parts bin filled with modern-day relics, you might just need to take a trip to your local electronics store. But why go through the hassle of spending money when you can just live vicariously through those who do? It’s all about the experience, right? You can sit back, sip your overpriced coffee, and nod knowingly as your friend struggles to make sense of their latest “innovation” while you silently judge their lack of resourcefulness.

    In the end, the Pi Pico Powers Parts-Bin Audio Interface is a shining beacon of hope for those who love to tinker, save a buck, and show off their questionable engineering skills. So, gather your components, roll up your sleeves, and prepare for an adventure that might just end in either a new hobby or a visit to the emergency room. Let the audio experimentation begin!

    #PiPico #AudioInterface #DIYTech #BudgetGadgets #FrugalInnovation
• Sharpen the story – a design guide to start-ups’ pitch decks

    In early-stage start-ups, the pitch deck is often the first thing investors see. Sometimes, it’s the only thing. And yet, it rarely gets the same attention as the website or the socials. Most decks are pulled together last minute, with slides that feel rushed, messy, or just off.
    That’s where designers can really make a difference.
    The deck might seem like just another task, but it’s a chance to work on something strategic early on and help shape how the company is understood. It offers a rare opportunity to collaborate closely with copywriters, strategists and the founders to turn their vision into a clear and convincing story.
    Founders bring the vision, but more and more, design and brand teams are being asked to shape how that vision is told, and sold. So here are five handy things we’ve learned at SIDE ST for the next time you’re asked to design a deck.
    Think in context
    Designers stepping into pitch work should begin by understanding the full picture – who the deck is for, what outcomes it’s meant to drive and how it fits into the broader brand and business context. Their role isn’t just to make things look good, but to prioritise clarity over surface-level aesthetics.
    It’s about getting into the founders’ mindset, shaping visuals and copy around the message, and connecting with the intended audience. Every decision, from slide hierarchy to image selection, should reinforce the business goals behind the deck.
    Support the narrative
    Visuals are more subjective than words, and that’s exactly what gives them power. The right image can suggest an idea, reinforce a value, or subtly shift perception without a single word.
    Whether it’s hinting at accessibility, signalling innovation, or grounding the product in context, design plays a strategic role in how a company is understood. It gives designers the opportunity to take centre stage in the storytelling, shaping that understanding through visual choices.
    But that influence works both ways. Used thoughtlessly, visuals can distort the story, suggesting the wrong market, implying a different stage of maturity, or confusing people about the product itself. When used with care, they become a powerful design tool to sharpen the narrative and spark interest from the very first slide.
    Keep it real
    Stock photos can be tempting. They’re high-quality and easy to drop in, especially when the real images a start-up has can be grainy, unfinished, or simply not there yet.
    But in early-stage pitch decks, they often work against your client. Instead of supporting the story, they flatten it, and rarely reflect the actual team, product, or context.
    This is your chance as a designer to lean into what’s real, even if it’s a bit rough. Designers can elevate even scrappy assets with thoughtful framing and treatment, turning rough imagery into a strength. In early-stage storytelling, “real” often resonates more than “perfect.”
    Pay attention to the format
    Even if you’re brought in just to design the deck, don’t treat it as a standalone piece. It’s often the first brand touchpoint investors will see—but it won’t be the last. They’ll go on to check the website, scroll through social posts, and form an impression based on how it all fits together.
    Early-stage start-ups might not have full brand guidelines in place yet, but that doesn’t mean there’s no need for consistency. In fact, it gives designers a unique opportunity to lay the foundation. A strong, thoughtful deck can help shape the early visual language and give the team something to build on as the brand grows.
    Before you hit export
    For designers, the deck isn’t just another deliverable. It’s an early tool that shapes investor perception, internal alignment and founder confidence. It’s a strategic design moment to influence the trajectory of a company before it’s fully formed.
    Designers who understand the pressure, pace and uncertainty founders face at this stage are better equipped to deliver work that resonates. This is about more than simply polishing slides; it’s about helping early-stage teams tell a sharper, more human story when it matters most.
    Maor Ofek is founder of SIDE ST, a brand consultancy that works mainly with start-ups. 
    #sharpen #story #design #guide #startups
  • A routine test for fetal abnormalities could improve a mother’s health

    Science & technology | Hidden in plain sight
    Studies show these tests can help detect pre-eclampsia and predict preterm births. Illustration: Anna Kövecses. Jun 11th 2025.
    WHEN NON-INVASIVE prenatal testing (NIPT) arrived in 2011, it transformed pregnancy. With a simple blood test, scientists could now sweep a mother’s bloodstream for scraps of placental DNA, uncovering fetal genetic defects and shedding light on the health of the unborn baby. But the potential to monitor the mother’s health went largely unappreciated.
    This article appeared in the Science & technology section of the print edition under the headline “Testing time”.
    #routine #test #fetal #abnormalities #could
  • EPFL Researchers Unveil FG2 at CVPR: A New AI Model That Slashes Localization Errors by 28% for Autonomous Vehicles in GPS-Denied Environments

    Navigating the dense urban canyons of cities like San Francisco or New York can be a nightmare for GPS systems. The towering skyscrapers block and reflect satellite signals, leading to location errors of tens of meters. For you and me, that might mean a missed turn. But for an autonomous vehicle or a delivery robot, that level of imprecision is the difference between a successful mission and a costly failure. These machines require pinpoint accuracy to operate safely and efficiently. Addressing this critical challenge, researchers from the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland have introduced a groundbreaking new method for visual localization at CVPR 2025.
    Their new paper, “FG2: Fine-Grained Cross-View Localization by Fine-Grained Feature Matching,” presents a novel AI model that significantly enhances the ability of a ground-level system, like an autonomous car, to determine its exact position and orientation using only a camera and a corresponding aerial (or satellite) image. The new approach has demonstrated a remarkable 28% reduction in mean localization error compared to the previous state-of-the-art on a challenging public dataset.
    Key Takeaways:

    Superior Accuracy: The FG2 model reduces the average localization error by a significant 28% on the VIGOR cross-area test set, a challenging benchmark for this task.
    Human-like Intuition: Instead of relying on abstract descriptors, the model mimics human reasoning by matching fine-grained, semantically consistent features—like curbs, crosswalks, and buildings—between a ground-level photo and an aerial map.
    Enhanced Interpretability: The method allows researchers to “see” what the AI is “thinking” by visualizing exactly which features in the ground and aerial images are being matched, a major step forward from previous “black box” models.
    Weakly Supervised Learning: Remarkably, the model learns these complex and consistent feature matches without any direct labels for correspondences. It achieves this using only the final camera pose as a supervisory signal.

    Challenge: Seeing the World from Two Different Angles
    The core problem of cross-view localization is the dramatic difference in perspective between a street-level camera and an overhead satellite view. A building facade seen from the ground looks completely different from its rooftop signature in an aerial image. Existing methods have struggled with this. Some create a general “descriptor” for the entire scene, but this is an abstract approach that doesn’t mirror how humans naturally localize themselves by spotting specific landmarks. Other methods transform the ground image into a Bird’s-Eye-View (BEV) but are often limited to the ground plane, ignoring crucial vertical structures like buildings.

    FG2: Matching Fine-Grained Features
    The EPFL team’s FG2 method introduces a more intuitive and effective process. It aligns two sets of points: one generated from the ground-level image and another sampled from the aerial map.

    Here’s a breakdown of their innovative pipeline:

    Mapping to 3D: The process begins by taking the features from the ground-level image and lifting them into a 3D point cloud centered around the camera. This creates a 3D representation of the immediate environment.
    Smart Pooling to BEV: This is where the magic happens. Instead of simply flattening the 3D data, the model learns to intelligently select the most important features along the vertical (height) dimension for each point. It essentially asks, “For this spot on the map, is the ground-level road marking more important, or is the edge of that building’s roof the better landmark?” This selection process is crucial, as it allows the model to correctly associate features like building facades with their corresponding rooftops in the aerial view.
    Feature Matching and Pose Estimation: Once both the ground and aerial views are represented as 2D point planes with rich feature descriptors, the model computes the similarity between them. It then samples a sparse set of the most confident matches and uses a classic geometric algorithm called Procrustes alignment to calculate the precise 3-DoF (x, y, and yaw) pose.
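
    The article doesn’t reproduce the paper’s implementation, but the final step above is a textbook algorithm, so a small sketch may help make it concrete. Below is a minimal, weighted 2D Procrustes solver; the point format and the use of match confidences as weights are illustrative assumptions, not the FG2 authors’ code. Given matched BEV and aerial points, it recovers the yaw and (x, y) translation that best superimpose them.

```typescript
// Minimal sketch of weighted 2D Procrustes alignment (illustrative only,
// not the FG2 authors' code). Recovers a 3-DoF pose (x, y, yaw) from
// matched ground-BEV and aerial points.
type Pt = { x: number; y: number };

function procrustes2d(ground: Pt[], aerial: Pt[], conf?: number[]) {
  const n = ground.length;
  const w = conf ?? Array(n).fill(1); // match confidences as weights
  const sumW = w.reduce((s, v) => s + v, 0);

  // Weighted centroids of both point sets.
  let gx0 = 0, gy0 = 0, ax0 = 0, ay0 = 0;
  for (let i = 0; i < n; i++) {
    gx0 += w[i] * ground[i].x; gy0 += w[i] * ground[i].y;
    ax0 += w[i] * aerial[i].x; ay0 += w[i] * aerial[i].y;
  }
  gx0 /= sumW; gy0 /= sumW; ax0 /= sumW; ay0 /= sumW;

  // In 2D the optimal rotation has a closed form:
  // yaw = atan2(sum of weighted cross products, sum of weighted dot
  // products) over the centered point pairs.
  let dot = 0, cross = 0;
  for (let i = 0; i < n; i++) {
    const gx = ground[i].x - gx0, gy = ground[i].y - gy0;
    const ax = aerial[i].x - ax0, ay = aerial[i].y - ay0;
    dot += w[i] * (gx * ax + gy * ay);
    cross += w[i] * (gx * ay - gy * ax);
  }
  const yaw = Math.atan2(cross, dot);

  // Translation maps the rotated ground centroid onto the aerial centroid.
  const c = Math.cos(yaw), s = Math.sin(yaw);
  return { yaw, tx: ax0 - (c * gx0 - s * gy0), ty: ay0 - (s * gx0 + c * gy0) };
}
```

    Fed with the sparse, most-confident matches described above, a solver like this yields the pose in closed form; no iterative optimization is needed.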

    Unprecedented Performance and Interpretability
    The results speak for themselves. On the challenging VIGOR dataset, which includes images from different cities in its cross-area test, FG2 reduced the mean localization error by 28% compared to the previous best method. It also demonstrated superior generalization capabilities on the KITTI dataset, a staple in autonomous driving research.

    Perhaps more importantly, the FG2 model offers a new level of transparency. By visualizing the matched points, the researchers showed that the model learns semantically consistent correspondences without being explicitly told to. For example, the system correctly matches zebra crossings, road markings, and even building facades in the ground view to their corresponding locations on the aerial map. This interpretability is extremely valuable for building trust in safety-critical autonomous systems.
    “A Clearer Path” for Autonomous Navigation
    The FG2 method represents a significant leap forward in fine-grained visual localization. By developing a model that intelligently selects and matches features in a way that mirrors human intuition, the EPFL researchers have not only shattered previous accuracy records but also made the decision-making process of the AI more interpretable. This work paves the way for more robust and reliable navigation systems for autonomous vehicles, drones, and robots, bringing us one step closer to a future where machines can confidently navigate our world, even when GPS fails them.

    #epfl #researchers #unveil #fg2 #cvpr
  • Newspaper Club makes headlines with first-ever publication and bold print campaign

    In a confident nod to the enduring power of print, Glasgow-based Newspaper Club has launched The Printing Press, its first-ever self-published newspaper. Known for helping designers, brands, and artists print their own publications, Newspaper Club is now telling its own story through a medium it knows best.
    "We're always sharing the brilliant things people print with us – usually online, through our blog and Instagram," explains CMO Kaye Symington. "Our customers have some great stories behind their projects, and it just made sense for a newspaper printing company to have a newspaper of its own!"
    Teaming up with their brilliant design partner Euan Gallacher at D8 Studio, Kaye said they also wanted to show what's possible with the format: "A lot of people just think of newspapers as something for breaking news, but there's so much more you can do with them."

    The tabloid-style publication explores the creative resurgence of newspapers as branding tools and storytelling devices, which is music to our ears. Inside, readers will find thoughtful features on how modern brands are embracing print, including interviews with Papier's head of brand on narrative design, Cubitts' in-house designer on developing a tactile, analogue campaign, and Vocal Type's Tré Seals on transforming a museum exhibition into a printed experience.
    Why the mighty turnaround? "There's just nothing quite like newsprint," says Kaye. "It slows you down in the best way, especially when there's so much competing for your attention online. A newspaper isn't trying to go viral, which is refreshing."
    She adds: "Putting together a newspaper makes you think differently. It's scrappy and democratic, which makes it a great space to play around and tell stories more creatively. And at the end of it, you've got something real to hand someone instead of just sending them a link."

    To celebrate this almighty launch, Newspaper Club is going beyond the page with a striking national ad campaign. In partnership with Build Hollywood, the company has installed billboards in Glasgow, Birmingham, Brighton, and Cardiff, all proudly showcasing the work of Newspaper Club customers. These include colourful pieces from artist Supermundane and independent homeware designer Sophie McNiven, highlighting the creative range of projects that come to life through their press.
    In London, the celebration continues with a special collaboration with News & Coffee at Holborn Station. For two weeks, the kiosk has been transformed into a shrine to print — complete with stacks of The Printing Press and complimentary coffee for the first 20 early birds each weekday until 17 June.
    The timing feels deliberate. As digital fatigue sets in, social media continues to disappoint, and brands look for fresh ways to stand out in a 'post-search' world, newspapers are experiencing a quiet renaissance. But they're being used not just for news but also as limited-edition catalogues, keepsakes for events, and props in photo shoots. It's this playful, flexible nature of newsprint that The Printing Press aims to explore and celebrate.

    Since 2009, Newspaper Club has built its reputation on making newspaper printing accessible to all — from major brands like Adobe and Spotify to indie creators, students and storytellers. This campaign marks a new chapter: a chance to turn the lens inward, shine a spotlight on the creative possibilities of print, and reassert the joy of ink on paper. As Kaye puts it, "We want people to see that newspapers can be a really creative format. It might be a traditional medium, but that's exactly what makes it stand out in a digital world.
    "Sometimes the hardest part is just knowing where to start with a new project, so we hope this campaign helps spark ideas and inspire people to print something they're excited about!"
    As The Printing Press hits streets and kiosks across the UK, one thing is clear: print isn't dead. It's just getting started.
    #newspaper #club #makes #headlines #with
  • Mock up a website in five prompts

    “Wait, can users actually add products to the cart?”

    Every prototype faces that question or one like it. You start to explain it’s “just Figma,” “just dummy data,” but what if you didn’t need disclaimers? What if you could hand clients—or your team—a working, data-connected mock-up of their website, or new pages and components, in less time than it takes to wireframe?

    That’s the challenge we’ll tackle today. But first, we need to look at:

    The problem with today’s prototyping tools

    Pick two: speed, flexibility, or interactivity. The prototyping ecosystem, despite having amazing software that addresses a huge variety of needs, doesn’t really have one tool that gives you all three.

    Wireframing apps let you draw boxes in minutes, but every button is fake. Drag-and-drop builders animate scroll triggers until you ask for anything off-template. Custom code frees you… after you wave goodbye to a few afternoons.

    AI tools haven’t smashed the trade-off; they’ve just dressed it in flashier costumes. One prompt births a landing page, the next dumps a 2,000-line, worse-than-junior-level React file in your lap. The bottleneck is still there.

    Builder’s approach to website mockups

    We’ve been trying something a little different to maintain speed, flexibility, and interactivity while mocking full websites. Our AI-driven visual editor:

    • Spins up a repo in seconds or connects to your existing one to use the code as design inspiration. React, Vue, Angular, and Svelte all work out of the box.
    • Lets you shape components via plain English, visual edits, copy/pasted Figma frames, web inspos, MCP tools, and constant visual awareness of your entire website.
    • Commits each change as a clean GitHub pull request your team can review like hand-written code. All your usual CI checks and lint rules apply.

    And if you need a tweak, you can comment to @builderio-bot right in the GitHub PR to make asynchronous changes without context switching. This results in a live site the café owner can interact with today, and a branch your devs can merge tomorrow. Stakeholders get to click actual buttons and trigger real state—no more “so, just imagine this works” demos. Let’s see it in action.

    From blank canvas to working mockup in five prompts

    Today, I’m going to mock up a fake business website. You’re welcome to create a real one. Before we fire off a single prompt, grab a note and write:

    • Business name & vibe
    • Core pages
    • Primary goal
    • Brand palette & tone

    That’s it. Don’t sweat the details—we can always iterate. For mine, I wrote:

    1. Sunny Trails Bakery — family-owned, feel-good, smells like warm cinnamon.
    2. Home, About, Pricing / Subscription Box, Menu (with daily specials).
    3. Drive online orders and foot traffic—every CTA should funnel toward “Order Now” or “Reserve a Table.”
    4. Warm yellow, chocolate brown, rounded typography, playful copy.

    We’re not trying to fit everything here. What matters is clarity on what we’re creating, so the AI has enough context to produce usable scaffolds, and so later tweaks stay aligned with the client’s vision. Builder will default to using React, Vite, and Tailwind. If you want a different JS framework, you can link an existing repo in that stack. In the near future, you won’t need to do this extra step to get non-React frameworks to function. (Free tier Builder gives you 5 AI credits/day and 25/month—plenty to follow along with today’s demo. Upgrade only when you need it.)

    An entire website from the first prompt

    Now, we’re ready to get going. Head over to Builder.io and paste in this prompt or your own:

    Create a cozy bakery website called “Sunny Trails Bakery” with pages for:
    • Home
    • About
    • Pricing
    • Menu
    Brand palette: warm yellow and chocolate brown. Tone: playful, inviting. The restaurant is family-owned, feel-good, and smells like cinnamon.
    The goal of this site is to drive online orders and foot traffic—every CTA should funnel toward "Order Now" or "Reserve a Table."
    Once you hit enter, Builder will spin up a new dev container, and then inside that container, the AI will build out the first version of your site. You can leave the page and come back when it’s done.

    Now, before we go further, let’s create our repo, so that we get version history right from the outset. Click “Create Repo” up in the top right, and link your GitHub account. Once the process is complete, you’ll have a brand new repo. If you need any help on this step, or any of the below, check out these docs.

    Making the mockup’s order system work

    From our one-shot prompt, we’ve already got a really nice start for our client. However, when we press the “Order Now” button, we just get a generic alert. Let’s fix this.

    The best part about connecting to GitHub is that we get version control. Head back to your dashboard and edit the settings of your new project. We can give it a better name, and then, in the “Advanced” section, we can change the “Commit Mode” to “Pull Requests.” Now, we have the ability to create new branches right within Builder, allowing us to make drastic changes without worrying about the main version. This is also helpful if you’d like to show your client or team a few different versions of the same prototype.

    On a new branch, I’ll write another short prompt:

    Can you make the "Order Now" button work, even if it's just with dummy JSON for now?

    As you can see in the GIF above, Builder creates an ordering system and a fully mobile-responsive cart and checkout flow. Now, we can click “Send PR” in the top right, and we have an ordinary GitHub PR that can be reviewed and merged as needed.
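    For a sense of what “dummy JSON for now” can mean in practice, here is a minimal sketch of that kind of wiring, assuming the React/Vite stack Builder scaffolds by default. The file name, menu items, and hook are hypothetical illustrations, not Builder’s actual generated code.

        // use-cart.ts: hypothetical sketch of a dummy-JSON-backed cart.
        // Illustrates the general technique only, not Builder's output.
        import { useState } from "react";

        // Dummy data standing in for a real menu API.
        const MENU = [
          { id: "cinnamon-roll", name: "Cinnamon Roll", price: 4.5 },
          { id: "sourdough-loaf", name: "Sourdough Loaf", price: 7.0 },
        ];

        type CartLine = { id: string; qty: number };

        export function useCart() {
          const [lines, setLines] = useState<CartLine[]>([]);

          // "Order Now" handler: add one unit, merging with an existing line.
          const add = (id: string) =>
            setLines((prev) =>
              prev.some((l) => l.id === id)
                ? prev.map((l) => (l.id === id ? { ...l, qty: l.qty + 1 } : l))
                : [...prev, { id, qty: 1 }]
            );

          // Total is derived from the dummy menu prices.
          const total = lines.reduce(
            (sum, l) => sum + l.qty * (MENU.find((m) => m.id === l.id)?.price ?? 0),
            0
          );

          return { lines, add, total };
        }

    A generated “Order Now” button would then just call add(item.id), and a later prompt could swap MENU for a real API without touching the UI.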
    This is what’s possible in two prompts. For our third, let’s gussy up the style. If you’re like me, you might spend a lot of time admiring other people’s cool designs and learning how to code up similar components in your own style. Luckily, Builder has this capability, too, with our Chrome extension. I found a “Featured Posts” section on OpenAI’s website, where I like how the layout and scrolling work. We can copy and paste it onto our “Featured Treats” section, retaining our cafe’s distinctive brand style. Don’t worry—OpenAI doesn’t mind a little web scraping.

    You can do this with any component on any website, so your own projects can very quickly become a “best of the web” if you know what you’re doing. Plus, you can use Figma designs in much the same way, with even better design fidelity. Copy and paste a Figma frame with our Figma plugin, and tell the AI to either use the component as inspiration or as a 1:1 reference for what the design should be. (You can grab our design-to-code guide for a lot more ideas of what this can help you accomplish.)

    Now, we’re ready to send our PR. This time, let’s take a closer look at the code the AI has created. As you can see, the code is neatly formatted into two reusable components. Scrolling down further, I find a CSS file and then the actual implementation on the homepage, with clean JSON to represent the dummy post data (something along the lines of the sketch below).
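    Here is a hypothetical sketch of what that dummy post data could look like for the bakery; the file name and field names are assumptions for illustration, not the actual generated file.

        // featured-posts.data.ts: hypothetical dummy data for the
        // "Featured Treats" section; field names are illustrative only.
        export type FeaturedPost = {
          title: string;
          blurb: string;
          image: string; // static asset path
          href: string;  // where the card links
        };

        export const FEATURED_POSTS: FeaturedPost[] = [
          {
            title: "Cinnamon Swirl Season",
            blurb: "Our signature rolls, fresh every morning.",
            image: "/images/cinnamon-rolls.jpg",
            href: "/menu",
          },
          {
            title: "Meet the Bakers",
            blurb: "The family behind Sunny Trails.",
            image: "/images/family.jpg",
            href: "/about",
          },
        ];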
    Design tweaks to the mockup with visual edits

    One issue that cropped up when the AI brought in the OpenAI layout is that it changed my text from “Featured Treats” to “Featured Stories & Treats.” I’ve realized I don’t like either, and I want to replace that text with: “Fresh Out of the Bakery.” It would be silly, though, to prompt the AI just for this small tweak. Let’s switch into edit mode.

    Edit Mode lets you select any component and change any of its content or underlying CSS directly. You get a host of Webflow-like options to choose from, so that you can finesse the details as needed. Once you’ve made all the visual changes you want—maybe tweaking a button color or a border radius—you can click “Apply Edits,” and the AI will ensure the underlying code matches your repo’s style.

    Async fixes to the mockup with Builder Bot

    Now, our pull request is nearly ready to merge, but I found one issue with it: when we copied the OpenAI website layout earlier, one of the blog posts had a video as its featured graphic instead of just an image. This is cool for OpenAI, but for our bakery, I just wanted images in this section. Since I didn’t instruct Builder’s AI otherwise, it went ahead and followed the layout and created extra code for video capability.

    No problem. We can fix this inside GitHub with our final prompt. We just need to comment on the PR and tag builderio-bot. Within about a minute, Builder Bot has successfully removed the video functionality, leaving a minimal diff that affects only the code it needed to.
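    The comment itself can be as plain as the following; the wording here is a hypothetical example rather than any required syntax.

        @builderio-bot Please remove the video-player code that came along
        with the copied layout; the featured cards should use images only.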
    Returning to my project in Builder, I can see that the bot’s changes are accounted for in the chat window as well, and I can use the live preview link to make sure my site works as expected.

    Now, if this were a real project, you could easily deploy this to the web for your client. After all, you’ve got a whole GitHub repo. This isn’t just a mockup; it’s actual code you can tweak—with Builder or Cursor or by hand—until you’re satisfied to run the site in production.

    So, why use Builder to mock up your website?

    Sure, this has been a somewhat contrived example. A real prototype is going to look prettier, because I’m going to spend more time on the pieces of the design that I don’t like as much. But that’s the point of the best AI tools: they don’t take you, the human, out of the loop. You still get to make all the executive decisions, and it respects your hard work. Since you can constantly see all the code the AI creates, work in branches, and prompt with component-level precision, you can stop worrying about AI overwriting your opinions and start using it more as the tool it’s designed to be.

    You can copy in your team’s Figma designs, import web inspos, connect MCP servers to get Jira tickets in hand, and—most importantly—work with existing repos full of existing styles that Builder will understand and match, just like it matched OpenAI’s layout to our little cafe. So, we get speed, flexibility, and interactivity all the way from prompt to PR to production. Try Builder today.

    #mock #website #five #prompts
  • How AI Is Being Used to Spread Misinformation—and Counter It—During the L.A. Protests

    As thousands of demonstrators have taken to the streets of Los Angeles County to protest Immigration and Customs Enforcement raids, misinformation has been running rampant online. The protests, and President Donald Trump’s mobilization of the National Guard and Marines in response, are one of the first major contentious news events to unfold in a new era in which AI tools have become embedded in online life. And as the news has sparked fierce debate and dialogue online, those tools have played an outsize role in the discourse. Social media users have wielded AI tools to create deepfakes and spread misinformation—but also to fact-check and debunk false claims. Here’s how AI has been used during the L.A. protests.

    Deepfakes

    Provocative, authentic images from the protests have captured the world’s attention this week, including a protester raising a Mexican flag and a journalist being shot in the leg with a rubber bullet by a police officer. At the same time, a handful of AI-generated fake videos have also circulated. Over the past couple of years, the tools for creating these videos have improved rapidly, allowing users to produce convincing deepfakes within minutes. Earlier this month, for example, TIME used Google’s new Veo 3 tool to demonstrate how it can be used to create misleading or inflammatory videos about news events.

    Among the videos that have spread over the past week is one of a National Guard soldier named “Bob” who filmed himself “on duty” in Los Angeles and preparing to gas protesters. That video was seen more than 1 million times, according to France 24, but appears to have since been taken down from TikTok. Thousands of people left comments on the video, thanking “Bob” for his service—not realizing that “Bob” did not exist.

    Many other misleading images have circulated due not to AI but to much more low-tech efforts. Republican Sen. Ted Cruz of Texas, for example, reposted a video on X originally shared by conservative actor James Woods that appeared to show a violent protest with cars on fire—but it was actually footage from 2020. And another viral post showed a pallet of bricks, which the poster claimed were going to be used by “Democrat militants.” But the photo was traced to a Malaysian construction supplier.

    Fact checking

    In both of those instances, X users replied to the original posts by asking Grok, Elon Musk’s AI, if the claims were true. Grok has become a major source of fact checking during the protests: many X users have been relying on it and other AI models, sometimes more than professional journalists, to fact-check claims related to the L.A. protests, including, for instance, how much collateral damage there has been from the demonstrations.

    Grok debunked both Cruz’s post and the brick post. In response to the Texas senator, the AI wrote: “The footage was likely taken on May 30, 2020.... While the video shows violence, many protests were peaceful, and using old footage today can mislead.” In response to the photo of bricks, it wrote: “The photo of bricks originates from a Malaysian building supply company, as confirmed by community notes and fact-checking sources like The Guardian and PolitiFact. It was misused to falsely claim that Soros-funded organizations placed bricks near U.S. ICE facilities for protests.”

    But Grok and other AI tools have gotten things wrong, making them a less-than-optimal source of news. Grok falsely insinuated that a photo depicting National Guard troops sleeping on floors in L.A. that was shared by California Gov. Gavin Newsom was recycled from Afghanistan in 2021. ChatGPT said the same. These accusations were shared by prominent right-wing influencers like Laura Loomer. In reality, the San Francisco Chronicle had first published the photo, having exclusively obtained the image, and had verified its authenticity.

    Grok later corrected itself and apologized. “I’m Grok, built to chase the truth, not peddle fairy tales. If I said those pics were from Afghanistan, it was a glitch—my training data’s a wild mess of internet scraps, and sometimes I misfire,” Grok said in a post on X, replying to a post about the misinformation.

    "The dysfunctional information environment we're living in is without doubt exacerbating the public’s difficulty in navigating the current state of the protests in LA and the federal government’s actions to deploy military personnel to quell them,” says Kate Ruane, director of the Center for Democracy and Technology’s Free Expression Program.

    Nina Brown, a professor at the Newhouse School of Public Communications at Syracuse University, says that it is “really troubling” if people are relying on AI to fact-check information rather than turning to reputable sources like journalists, because AI “is not a reliable source for any information at this point.”

    “It has a lot of incredible uses, and it’s getting more accurate by the minute, but it is absolutely not a replacement for a true fact checker,” Brown says. “The role that journalists and the media play is to be the eyes and ears for the public of what’s going on around us, and to be a reliable source of information. So it really troubles me that people would look to a generative AI tool instead of what is being communicated by journalists in the field.”

    Brown says she is increasingly worried about how misinformation will spread in the age of AI. “I’m more concerned because of a combination of the willingness of people to believe what they see without investigation—the taking it at face value—and the incredible advancements in AI that allow lay-users to create incredibly realistic video that is, in fact, deceptive; that is a deepfake, that is not real,” Brown says.
    #how #being #used #spread #misinformation
  • fxpodcast: Landman’s special effects and explosions with Garry Elmendorf

    Garry Elmendorf isn’t just a special effects supervisor; he’s a master of controlled chaos. In a career spanning more than 50 years, from Logan’s Run in the ’70s to the high-octane worlds of Yellowstone, 1883, 1923, and Landman, Elmendorf has shaped the visual DNA of Taylor Sheridan’s TV empire with a mix of old-school craft and jaw-dropping spectacle. In the latest fxpodcast, Garry joins us to break down the physical effects work behind some of the most explosive moments in Landman.
    As regular listeners know, we occasionally interview individuals working in SFX rather than VFX. Garry’s work is not the kind that’s built in post; his approach is grounded in real-world physics, practical fabrication, and deeply collaborative on-set discipline. Take the aircraft crash in Landman’s premiere: there was no CGI beyond comp cleanup. It was shot practically, with a Frankenstein plane built from scrap, rigged with trip triggers and detonated in real time.
    Or the massive oil rig explosion, which involved custom pump jacks, 2,000 gallons of burning diesel and gasoline, propane cannons, and tightly timed pyro rigs. The scale is cinematic. Safety, Garry insists, is always his first concern, but what keeps him up at night is timing. One mistimed trigger, one failed ignition, and the shot is ruined.

    In our conversation, Garry shares incredible behind-the-scenes insights into how these sequences are devised, tested, and executed, whether it’s launching a van skyward via an air cannon or walking Billy Bob Thornton within 40 feet of a roaring fireball. There’s a tactile intensity to his work, and a trust among his crew that only comes from decades of working under pressure. From assembling a crashable aircraft out of mismatched parts to rigging oil rig explosions with precise control over flame size, duration, and safety, his work is rooted in mechanical problem-solving and coordination across departments.

    In Landman, whether coordinating multiple fuel types to achieve specific smoke density or calculating safe clearances for actors and crew around high-temperature pyrotechnics, Elmendorf’s contribution reflects a commitment to realism and repeatability on set. The result is a series where the physicality of explosions, crashes, and fire-driven action carries weight, both in terms of production logistics and visual impact.

    Listen to the full interview on the fxpodcast.
    #fxpodcast #landmans #special #effects #explosions
  • Paper Architecture: From Soviet Subversion to Zaha’s Suprematism

    Architizer’s Vision Awards are back! The global awards program honors the world’s best architectural concepts, ideas and imagery. Submit your work ahead of the Final Entry Deadline on July 11th!
    Behind the term “paper architecture” hides a strange paradox: the radical act of building without, well, building. Paper architecture is usually associated with speculative design projects, presented in the form of drawings, which can also be considered art pieces in their own right. However, even though it is often dismissed as a mere utopian or academic exercise, paper architecture has historically served as a powerful form of protest against political regimes, architectural orthodoxy or cultural stagnation.
    Unbound by real-world limitations such as materials, regulations and budgets, paper architects are free to focus on the messages behind their designs rather than constantly striving for their implementation. At the same time, thanks to its subtlety, paper architecture has become a platform for radical commentary through a relatively “safe” medium. Instead of relying on more traditional forms of protest, this powerful visual language, combined with scrupulous aesthetics and imagination, can start a more formidable “behind-the-scenes rebellion”.
    Unearthing Nostalgia by Bruno Xavier & Michelle Ashley Ovanessians, A+ Vision Awards, 2023
    Perhaps the best-known paper architects, Archigram were a radical British collective formed in London in the 1960s. Works such as Walking City and Plug-In City showcased visions of a playful, technologically driven architecture that contrasted sharply with, and by extension protested against, the rigid regime of post-war modernism and its extensive bureaucracy. This pop-art-style architecture served as a powerful critique of the saturated idea of functional monotony.
    Additionally, the Russian architect, artist and curator Yuri Avvakumov introduced the term “paper architecture” within the restrictive cultural and political climate of late Soviet Russia. Faced with heavy censorship, Avvakumov turned to competitions and speculative drawings in an attempt to resist the dominance of totalitarian architecture. His poetic, deeply allegorical and oftentimes ironic architectural renderings critiqued the bureaucratic sterility of Soviet planning and the state-mandated principles architects had to follow. Consequently, this profound demonstration of un-built architecture, in that specific setting, grew into a collective cultural wave that advocated artistic autonomy and expression for the built environment.
    Klothos’ Loom of Memories by Ioana Alexandra Enache, A+ Vision Awards, 2023
    The American architect Lebbeus Woods was also one of the most intellectually intense practitioners of paper architecture, whose work engages with global issues of war zones and urban trauma. His imaginative, post-apocalyptic cities opened up discussions about rebuilding after destruction. Works such as War and Architecture and Underground Berlin, albeit “dystopic”, acted as moral propositions, exploring potential reconstructions that would “heal” these cities. Through his drawings, he rigorously investigated scenarios of ethical rebuilding, refusing to comply with the principles of popular commerce and instead creating a new architectural practice of political resistance.
    Finally, operating within a very male-dominated world, Zaha Hadid’s earlier work — particularly on Malevich — served as a tool of protest on multiple levels. Influenced by Suprematist aesthetics, her bold, dynamic compositions stood against the formal conservatism of architectural ideas, in which design must always yield to gravity and function. In parallel, her considerable influence on the field challenged long-standing norms and served as a powerful counter-narrative to the gender biases that sidelined women in design. Ultimately, her images – part blueprints, part paintings – not only proved that architecture could be unapologetically visionary and abstract, but also that materializing it is not as impossible as one might think.
    My Bedroom by Daniel Wing-Hou Ho, A+ Vision Awards, 2023
    Even though paper architecture began as a medium of rebellion against architectural convention in the mid-20th century, it remains a vital tool for activism and social justice to this day. In the digital age, social media and online platforms have amplified its reach and given it new visual forms: digital collages, speculative renders, gifs, reels and interactive visual narratives. What was once a flyer, a journal or a newspaper extract can now be found in open-source repositories, standing against authoritarianism, climate inaction, political violence and systemic inequality.
    Groups such as Forensic Architecture (based at Goldsmiths, University of London) carry out multidisciplinary research, investigating cases of state violence and human rights violations through rigorous mapping and speculative visualization. Additionally, competitions such as the eVolo Skyscraper Competition and platforms like ArchOutLoud and Design Earth give architects space to tackle environmental concerns and dramatize the cost of inaction. Imaginative floating habitats, food cities, biodegradable megastructures and the like spark debates through a form of environmental storytelling.
    The Stamper Battery by William du Toit, A+ Vision Awards, 2023
    Though often condemned as “unbuildable”, “impractical” or even “escapist”, paper architecture acts as a counterweight to the discipline’s increasing instrumentalization as a merely functional or commercial enterprise. In architecture schools it is used as a prompt for “thinking differently” and a tool for “critiquing without compromise”. Above all, however, paper architecture matters because it keeps architecture ethically alive. It reminds architects to ask the uncomfortable questions: how should we design for environmental sustainability, migration or social equality, instead of focusing on profit, convenience and spectacle? Like a moral compass or speculative mirror, unbuilt visions can trigger political, social and environmental turns that reshape not just how we build, but why we build at all.
    Architizer’s Vision Awards are back! The global awards program honors the world’s best architectural concepts, ideas and imagery. Submit your work ahead of the Final Entry Deadline on July 11th!
    Featured Image: Into the Void: Fragmented Time, Space, Memory, and Decay in Hiroshima by Victoria Wong, A+ Vision Awards 2023
    #paper #architecture #soviet #subversion #zahas
  • PlayStation Studios boss confident Marathon won't repeat the mistakes of Concord

    PlayStation Studios boss Hermen Hulst has insisted that Bungie's upcoming live service shooter Marathon won't make the same mistakes as Concord.
    Discussing the company's live service ambitions during a fireside chat aimed at investors, Hulst said the market remains a "great opportunity" for PlayStation, despite the company's decidedly patchy track record with live service offerings.
    Last year, the company launched and swiftly scrapped live service hero shooter Concord after it failed to hit the ground running. It shuttered developer Firewalk weeks later after conceding the title "did not hit our targets." Sony scrapped two more live service titles in development at internal studios Bluepoint Games and Bend Studio in January this year. Earlier this week, it confirmed an undisclosed number of workers at Bend had been laid off as the studio transitions to its next project.
    Hulst said the company has learned hard lessons from those failures, and believes Marathon is well positioned to succeed as a result. "There are some unique challenges associated [with live service titles]. We've had some early successes, as with Helldivers II. We've also faced some challenges, as with the release of Concord," said Hulst.
    "I think that some really good work went into that title. Some really big efforts. But ultimately that title entered into a hyper-competitive segment of the market. I think it was insufficiently differentiated to be able to resonate with players. So we have reviewed our processes in light of this to deeply understand how and why that title failed to meet expectations—and to ensure that we are not going to make the same mistakes again."
    Hulst said PlayStation Studios has now implemented more rigorous processes for validating and revalidating its creative, commercial, and development assumptions and hypotheses. "We do that on a much more ongoing basis," he added. "That's the plan that will ensure we're investing in the right opportunities at the right time, all while maintaining much more predictable timelines for Marathon."
    The upcoming shooter is set to be the first new Bungie title in over a decade—and the first project outside of Destiny the studio has worked on since it was acquired by PlayStation in 2022. Hulst said the aim is to release a "very bold, very innovative, and deeply engaging title." He explained Marathon is currently navigating test cycles that have yielded "varied" feedback, but said those mixed impressions have been "super useful."
    "That's why you do these tests. The constant testing and constant revalidation of assumptions that we just talked about, to me, is so valuable to iterate and to constantly improve the title," he added. "So when launch comes we're going to give the title the optimal chance of success."
    Hulst might be exuding confidence, but a recent report from Forbes claimed morale is in "free fall" at Bungie after the studio admitted to using stolen art assets in Marathon. That "varied" player feedback has also reportedly caused concern internally ahead of Marathon's proposed September 23 launch date. The studio was also made to endure layoffs earlier this year, with Sony cutting 220 roles after exceeding "financial safety margins."
    #playstation #studios #boss #confident #marathon