• How Do I Make A Small Space Look Bigger Without Renovating

    Living in a small space doesn’t mean you have to feel cramped or boxed in. With the right design tricks, you can make even the tiniest room feel open, airy, and inviting, with no renovation required. Whether you’re in a compact apartment, a small home, or just trying to make the most of a single room, smart styling and layout choices can dramatically shift how the space looks and feels. From strategic lighting and paint colors to furniture swaps and clever storage solutions, there are plenty of easy, affordable ways to stretch your square footage visually. Ready to transform your space? Here are some practical, design-savvy ideas to make your home feel bigger without tearing down a single wall.

    1. Opt for Multi-Functional Furniture

    Image Source: House Beautiful

    In a small space, every piece of furniture should earn its keep. Look for multi-functional items: ottomans that open up for storage, beds with drawers underneath, or coffee tables that can extend or lift to become a desk. Not only do these pieces help reduce clutter, but they also free up floor space, making the room look more open. Bonus points for furniture that can be folded away when not in use. By choosing versatile pieces, you’re making the most of every inch without sacrificing style or comfort.

    2. Keep Pathways Clear

    Image Source: The Spruce

    One of the simplest yet most effective ways to make a small space feel bigger is to keep pathways and walkways clear. When furniture or clutter blocks natural movement through a room, it can make the space feel cramped and chaotic. Take a walk through your home and notice where you’re dodging corners or squeezing between pieces; those are areas to rethink. Opt for smaller furniture with slim profiles, or rearrange what you have to create an easy, natural flow. Open walkways help your eyes move freely through the room, making everything feel more spacious, breathable, and intentional. It’s all about giving yourself room to move, literally and visually.

    3. Use Glass and Lucite Furniture

    Image Source: The Spruce

    Transparent furniture made from glass or Lucite (acrylic) takes up less visual space because you can see right through it. A glass coffee table or clear dining chairs can provide functionality without cluttering up the view. These pieces practically disappear into the background, which helps the room feel more open. They also add a touch of modern sophistication. When you need furniture but don’t want it to dominate the room, going clear is a clever design choice.

    4. Don’t Over-Clutter Your Space

    Image Source: House Beautiful

    In small spaces, clutter accumulates fast, and it visually shrinks your environment. The more items scattered around, the more cramped the room feels. Start by taking a critical look at what you own and asking: do I really need this here? Use storage bins, under-bed containers, or floating shelves to hide away what you don’t use daily. Keep surfaces like countertops, desks, and coffee tables as clear as possible. A minimal, clean setup allows the eye to rest and makes the space feel open and intentional. Remember: less stuff equals more space, both physically and mentally.

    5. Utilize Your Windows

    Image Source: House Beautiful

    Windows are like built-in art that can also dramatically affect how big or small your space feels. Don’t cover them with heavy drapes or clutter them with too many objects on the sill. Keep window treatments light and minimal; sheer curtains or roller blinds are perfect. If privacy isn’t a big concern, consider leaving them bare. Letting natural light flood in through your windows instantly opens up your space and makes it feel brighter and more expansive. You can also place mirrors or shiny surfaces near windows to reflect more light into the room and maximize their impact.

    6. Downsize Your Dining Table

    Image Source: House Beautiful

    A large dining table can dominate a small room, leaving little space to move or breathe. If you rarely entertain a big crowd, consider downsizing to a smaller round or drop-leaf table. These take up less visual and physical space and still offer enough room for daily meals. You can always keep a folding table or stackable chairs nearby for when guests do come over. Round tables are especially great for small spaces because they allow smoother traffic flow and eliminate awkward corners. Plus, a smaller table encourages intimacy during meals and helps the whole area feel more open and functional.

    7. Use Mirrors Strategically

    Image Source: The Tiny Cottage

    Mirrors can work magic in a small room. They reflect both natural and artificial light, which can instantly make a space feel larger and brighter. A large mirror on a wall opposite a window can double the amount of light in your room. Mirrored furniture or decor elements like trays and picture frames also help. Think about using mirrored closet doors or even creating a mirror gallery wall. It’s not just about brightness; mirrors also create a sense of depth, tricking the eye into seeing more space than there actually is.

    8. Install a Murphy Bed

    Image Source: House Beautiful

    A Murphy bed (also known as a wall bed) is a game-changer for anyone living in a tight space. It folds up into the wall or a cabinet when not in use, instantly transforming your bedroom into a living room, office, or workout area. This setup gives you the flexibility to have a multi-purpose room without sacrificing comfort. Modern Murphy beds often come with built-in shelves or desks, offering even more function without taking up extra space. If you want to reclaim your floor during the day and still get a good night’s sleep, this is one smart solution.

    9. Paint It White

    Image Source: House Beautiful

    Painting your walls white is one of the easiest and most effective tricks to make a space feel bigger. White reflects light, helping the room feel open, clean, and fresh. It creates a seamless look, making walls seem to recede and ceilings feel higher. You can still have fun with the space: layer in texture, subtle patterns, or neutral accessories to keep it from feeling sterile. White also acts as a blank canvas, letting your furniture and art stand out. Whether you’re decorating a studio apartment or a small home office, a fresh coat of white paint can work wonders.

    10. Prioritize Natural Light

    Image Source: The Spruce

    Natural light has an incredible ability to make any room feel more spacious and welcoming. To make the most of it, avoid blocking windows with bulky furniture or dark curtains. Consider using light-filtering shades or sheer curtains to let sunlight pour in while maintaining some privacy. Arrange mirrors or reflective surfaces like glossy tables and metallic decor to bounce the light around the room. Even placing furniture in a way that lets light flow freely can change how open your home feels. Natural light not only brightens your space but also boosts your mood, making it a double win.

    11. Maximize Shelving

    Image Source: House Beautiful

    When floor space is limited, vertical storage becomes your best ally. Floating shelves, wall-mounted units, or tall bookcases draw the eye upward, creating a sense of height and maximizing every inch. They’re perfect for books, plants, artwork, or even kitchen supplies if you’re short on cabinets. You can also install corner shelves to use often-overlooked spots. Keep them tidy and curated; group items by color, size, or theme for a visually pleasing look. Shelving helps reduce clutter on the floor and tabletops, keeping your home organized and visually open without requiring any extra square footage.

    12. Keep It Neutral

    Image Source: House Beautiful

    Neutral tones, like soft whites, light grays, warm beiges, and pale taupes, can make a space feel calm and cohesive. These colors reflect light well and reduce visual clutter, making your room appear larger. A neutral palette doesn’t mean boring; you can still play with textures, patterns, and accents within that color family. Add throw pillows, rugs, or wall art in layered neutrals for interest without overwhelming the space. When everything flows in similar tones, it creates continuity, which tricks the eye into seeing a more expansive area. It’s an effortless way to open up your home without lifting a hammer.

    13. Choose Benches, Not Chairs

    Image Source: House Beautiful

    When space is tight, traditional dining chairs or bulky accent seats can eat up more room than they’re worth. Benches, on the other hand, are a sleek, versatile alternative. They tuck neatly under tables when not in use, saving valuable floor space and keeping walkways open. In entryways, living rooms, or at the foot of a bed, a bench offers seating and can double as storage or display. Some come with built-in compartments or open space beneath for baskets. Plus, benches visually declutter the room with their simple, low-profile design.

    14. Use Vertical Spaces

    Image Source: The Spruce

    When you’re short on square footage, think vertical. Use tall bookshelves, wall-mounted shelves, and hanging storage to keep things off the floor. Vertical lines naturally draw the eye upward, which creates a feeling of height and openness. Consider mounting floating shelves for books, plants, or decorative items. Hooks and pegboards can add function without taking up space. Making use of your wall space not only maximizes storage but also frees up floor area, which visually enlarges the room.

    15. Add a Gallery Wall

    Image Source: House Beautiful

    It might seem counterintuitive, but adding a gallery wall can actually make a small space feel bigger, if done right. A curated display of art, photos, or prints draws the eye upward and outward, giving the illusion of a larger area. Stick to cohesive frames and colors to maintain a clean, intentional look. You can go symmetrical for a polished feel or get creative with an organic, freeform layout. Position the gallery higher on the wall to elongate the space visually. Just be sure not to overcrowd; balance is key. A thoughtful gallery wall adds personality without cluttering the room.

    Finishing Notes:

    Creating a spacious feel in a small home doesn’t require a sledgehammer or a major remodel; it just takes a bit of strategy and smart design. From downsizing your dining table to letting natural light pour in, each tip we’ve shared is an easy, budget-friendly way to visually open up your space.

    If you’re looking for even more inspiration, layout ideas, or style guides, be sure to explore Home Designing. It’s packed with expert advice, modern interior trends, and visual walkthroughs to help you transform your space, big or small, into something that truly feels like home.
    www.home-designing.com
    Living in a small space doesn’t mean you have to feel cramped or boxed in. With the right design tricks, you can make even the tiniest room feel open, airy, and inviting, no renovation required. Whether you’re in a compact apartment, a small home, or just trying to make the most of a single room, smart styling and layout choices can dramatically shift how the space looks and feels. From strategic lighting and paint colors to furniture swaps and clever storage solutions, there are plenty of easy, affordable ways to stretch your square footage visually. Ready to transform your space? Here are some practical, design-savvy ideas to make your home feel bigger without tearing down a single wall. 1. Opt for Multi-Functional Furniture Image Source: House Beautiful In a small space, every piece of furniture should earn its keep. Look for multi-functional items: ottomans that open up for storage, beds with drawers underneath, or coffee tables that can extend or lift to become a desk. Not only do these pieces help reduce clutter, but they also free up floor space, making the room look more open. Bonus points for furniture that can be folded away when not in use. By choosing versatile pieces, you’re making the most of every inch without sacrificing style or comfort. 2. Keep Pathways Clear Image Source: The Spruce One of the simplest yet most effective ways to make a small space feel bigger is to keep pathways and walkways clear. When furniture or clutter blocks natural movement through a room, it can make the space feel cramped and chaotic. Take a walk through your home and notice where you’re dodging corners or squeezing between pieces,those are areas to rethink. Opt for smaller furniture with slim profiles, or rearrange what you have to create an easy, natural flow. Open walkways help your eyes move freely through the room, making everything feel more spacious, breathable, and intentional. It’s all about giving yourself room to move,literally and visually. 3. 
Use Glass and Lucite Furniture Image Source: The Spruce Transparent furniture made from glass or Lucite (acrylic) takes up less visual space because you can see right through it. A glass coffee table or clear dining chairs can provide functionality without cluttering up the view. These pieces practically disappear into the background, which helps the room feel more open. They also add a touch of modern sophistication. When you need furniture but don’t want it to dominate the room, going clear is a clever design choice. 4. Don’t Over-Clutter Your Space Image Source: House Beautiful In small spaces, clutter accumulates fast,and it visually shrinks your environment. The more items scattered around, the more cramped the room feels. Start by taking a critical look at what you own and asking: do I really need this here? Use storage bins, under-bed containers, or floating shelves to hide away what you don’t use daily. Keep surfaces like countertops, desks, and coffee tables as clear as possible. A minimal, clean setup allows the eye to rest and makes the space feel open and intentional. Remember: less stuff equals more space,both physically and mentally. 5. Utilize Your Windows Image Source: House Beautiful Windows are like built-in art that can also dramatically affect how big or small your space feels. Don’t cover them with heavy drapes or clutter them with too many objects on the sill. Keep window treatments light and minimal,sheer curtains or roller blinds are perfect. If privacy isn’t a big concern, consider leaving them bare. Letting natural light flood in through your windows instantly opens up your space and makes it feel brighter and more expansive. You can also place mirrors or shiny surfaces near windows to reflect more light into the room and maximize their impact. 6. Downsize Your Dining Table Image Source: House Beautiful A large dining table can dominate a small room, leaving little space to move or breathe. 
If you rarely entertain a big crowd, consider downsizing to a smaller round or drop-leaf table. These take up less visual and physical space and still offer enough room for daily meals. You can always keep a folding table or stackable chairs nearby for when guests do come over. Round tables are especially great for small spaces because they allow smoother traffic flow and eliminate awkward corners. Plus, a smaller table encourages intimacy during meals and helps the whole area feel more open and functional. 7. Use Mirrors Strategically Image Source: The Tiny Cottage Mirrors can work magic in a small room. They reflect both natural and artificial light, which can instantly make a space feel larger and brighter. A large mirror on a wall opposite a window can double the amount of light in your room. Mirrored furniture or decor elements like trays and picture frames also help. Think about using mirrored closet doors or even creating a mirror gallery wall. It’s not just about brightness; mirrors also create a sense of depth, tricking the eye into seeing more space than there actually is. 8. Install a Murphy Bed Image Source: House Beautiful A Murphy bed (also known as a wall bed) is a game-changer for anyone living in a tight space. It folds up into the wall or a cabinet when not in use, instantly transforming your bedroom into a living room, office, or workout area. This setup gives you the flexibility to have a multi-purpose room without sacrificing comfort. Modern Murphy beds often come with built-in shelves or desks, offering even more function without taking up extra space. If you want to reclaim your floor during the day and still get a good night’s sleep, this is one smart solution. 9. Paint It White Image Source: House Beautiful Painting your walls white is one of the easiest and most effective tricks to make a space feel bigger. White reflects light, helping the room feel open, clean, and fresh. 
It creates a seamless look, making walls seem to recede and ceilings feel higher. You can still have fun with the space, layer in texture, subtle patterns, or neutral accessories to keep it from feeling sterile. White also acts as a blank canvas, letting your furniture and art stand out. Whether you’re decorating a studio apartment or a small home office, a fresh coat of white paint can work wonders. 10. Prioritize Natural Light Image Source: The Spruce Natural light has an incredible ability to make any room feel more spacious and welcoming. To make the most of it, avoid blocking windows with bulky furniture or dark curtains. Consider using light-filtering shades or sheer curtains to let sunlight pour in while maintaining some privacy. Arrange mirrors or reflective surfaces like glossy tables and metallic decor to bounce the light around the room. Even placing furniture in a way that lets light flow freely can change how open your home feels. Natural light not only brightens your space but also boosts your mood, making it a double win. 11. Maximize Shelving Image Source: House Beautiful When floor space is limited, vertical storage becomes your best ally. Floating shelves, wall-mounted units, or tall bookcases draw the eye upward, creating a sense of height and maximizing every inch. They’re perfect for books, plants, artwork, or even kitchen supplies if you’re short on cabinets. You can also install corner shelves to use often-overlooked spots. Keep them tidy and curated,group items by color, size, or theme for a visually pleasing look. Shelving helps reduce clutter on the floor and tabletops, keeping your home organized and visually open without requiring any extra square footage. 12. Keep It Neutral Image Source: House Beautiful Neutral tones, like soft whites, light grays, warm beiges, and pale taupes,can make a space feel calm and cohesive. These colors reflect light well and reduce visual clutter, making your room appear larger. 
A neutral palette doesn’t mean boring; you can still play with textures, patterns, and accents within that color family. Add throw pillows, rugs, or wall art in layered neutrals for interest without overwhelming the space. When everything flows in similar tones, it creates continuity, which tricks the eye into seeing a more expansive area. It’s an effortless way to open up your home without lifting a hammer. 13. Choose Benches, Not Chairs Image Source: House Beautiful When space is tight, traditional dining chairs or bulky accent seats can eat up more room than they’re worth. Benches, on the other hand, are a sleek, versatile alternative. They tuck neatly under tables when not in use, saving valuable floor space and keeping walkways open. In entryways, living rooms, or at the foot of a bed, a bench offers seating and can double as storage or display. Some come with built-in compartments or open space beneath for baskets. Plus, benches visually declutter the room with their simple, low-profile design. 14. Use Vertical Spaces Image Source: The Spruce When you’re short on square footage, think vertical. Use tall bookshelves, wall-mounted shelves, and hanging storage to keep things off the floor. Vertical lines naturally draw the eye upward, which creates a feeling of height and openness. Consider mounting floating shelves for books, plants, or decorative items. Hooks and pegboards can add function without taking up space. Making use of your wall space not only maximizes storage but also frees up floor area, which visually enlarges the room. 15. Add a Gallery Wall Image Source: House Beautiful It might seem counterintuitive, but adding a gallery wall can actually make a small space feel bigger,if done right. A curated display of art, photos, or prints draws the eye upward and outward, giving the illusion of a larger area. Stick to cohesive frames and colors to maintain a clean, intentional look. 
You can go symmetrical for a polished feel or get creative with an organic, freeform layout. Position the gallery higher on the wall to elongate the space visually. Just be sure not to overcrowd,balance is key. A thoughtful gallery wall adds personality without cluttering the room. Finishing Notes: Creating a spacious feel in a small home doesn’t require a sledgehammer or a major remodel, it just takes a bit of strategy and smart design. From downsizing your dining table to letting natural light pour in, each tip we’ve shared is an easy, budget-friendly way to visually open up your space. If you’re looking for even more inspiration, layout ideas, or style guides, be sure to explore Home Designing. It’s packed with expert advice, modern interior trends, and visual walkthroughs to help you transform your space, big or small, into something that truly feels like home.
  • Elden Ring Nightreign may be co-op, but I’m having a blast solo

    Imagine playing Fortnite, but instead of fighting other players, all you want to do is break into houses to look for caches of slurp juice. Yes, the storm is closing in on you, and there’s a bunch of enemies waiting to kill you, but all you want to do is take a walking tour of Tilted Towers. Then when the match is over, instead of queueing again, you start reading the in-game lore for Peely and Sabrina Carpenter. You can count your number of player kills on one hand, while your number of deaths is in the hundreds. You’ve never achieved a victory royale, but you’ve never had more fun.

    That’s how I play Elden Ring Nightreign.

    Nightreign is FromSoftware’s first Elden Ring spinoff, and it’s unlike any Souls game the developer has done before. Nightreign has the conceit of so many battle royale games — multiplayer combat focused on acquiring resources across a large map that slowly shrinks over time — wrapped in the narrative, visual aesthetics, and combat of Elden Ring. Instead of the Tarnished, you are a Nightfarer. Instead of the expansive Lands Between, you are sent to Limveld, an island with an ever-shifting landscape. And instead of becoming the Elden Lord, your goal is to defeat the Night Lord and end the destructive storm that scours the land.

    Elden Ring Nightreign aura-farming exhibit A.

    In Nightreign, gameplay sessions are broken up into expeditions, each of which is divided into three day-night cycles. During the day, you — either solo or with two other players — explore the world looking for weapon upgrades and fighting bosses for the enhancements they reward. You’ll be forced to move as the deadly Night’s Tide slowly consumes the map, whittling your health to nothing if you’re caught in it. When the map is at its smallest, you face a tough midboss. Defeat it to commence day two of the expedition, or die and start it all over. Then, on the third day, you face the expedition’s final boss.
    There are several expeditions to conquer, each with different bosses, mid-bosses, weapons to collect, and all kinds of events that make each run unique.

    I had the opportunity to play Nightreign once before earlier this year (and during a more recent network test), and it wasn’t the best preview, as the game was plagued with all kinds of issues that didn’t allow me to experience it the way the developers intended. Those technical issues have been ironed out, but I still haven’t completed the game’s most basic objective: beat the first expedition. This isn’t because of any technical or gameplay issues I had. For the times I wanted to play as intended, my colleague Jay Peters stepped in to help me, and I had no problem finding party members to tackle expeditions with on my own… I just never really wanted to. And part of the reason why I’m enjoying Nightreign so much is that the game lets me play it in a way that’s completely counterintuitive – slowly and alone.

    Collaborative gaming doesn’t always feel good to me. I want to take things at my own pace, and that’s hard to do when there’s a group of people frustrated with me because they need my help to kill a boss while I’m still delving into a dungeon a mile away. But the ability to solo queue does come with a significant catch – you’re not gonna get very far. I died often, and to everything from random enemies to bosses. It’s not often that I even make it to that first boss fight without dying to the warm-up battles that precede it. This should frustrate me, but I don’t care in the slightest. I’m just so pleased that I can go at my own pace to explore more of Elden Ring’s visually gorgeous and narratively sumptuous world.

    You get by with a little help from your friends. I, however, am built different. Image: FromSoftware

    Which brings me to my favorite part: its characters. Nightreign has eight new classes, each with their own unique abilities. The classes can still use every weapon you find (with some locked behind level requirements), so there’s an option to tailor a character to fit your playstyle.
    There are certain kinds of classes I gravitate toward, specifically ranged combat, but for the first time in a class-based game, I love every one of them. It is so much fun shredding enemies to ribbons with the Duchess, using her Restage ability to replay the attacks done to an enemy, essentially doubling the damage they receive. I love the Raider’s powers of just being a big fuckin’ dude, slamming things with big-ass great weapons. And true to my ranged-combat-loving heart, Ironeye’s specialty with bows makes it so nice when I wanna kill things without putting myself in danger.

    Then there’s the Guardian. Look at him. He’s a giant armored bird-person with the busted wing and the huge-ass halberd and shield. His story involves being a protector who failed his flock and has found a new one in the other Nightfarers. I fell to my knees reading one of his codex entries and seeing how the Recluse, the mage character, helped him with his damaged wing. Every character has a codex that updates with their personal story the more expeditions you attempt. This is the shit I get out of bed for.

    The Guardian is the coolest FromSoftware character since Patches, and I have a crush on him. Image: FromSoftware

    I thought I was going to hate the concept of Nightreign. I want more Elden Ring: I love that world, so any chance I can have to go back, I’ll take, but… I just don’t like multiplayer games. Describing Nightreign makes it sound like the reason it exists is that an out-of-touch CEO looked at the popularity of Elden Ring and at all the money Fortnite prints and went, “Yeah, let’s do that.” Even if that’s the case, Nightreign has been constructed so that it still appeals to lore freaks like me, and I can ignore the less savory bits around multiplayer with relative ease.
    If I can take a moment and borrow a pair of words from my Gen Z niblings to describe Nightreign, it’d be “aura” and “aura farming.” Aura describes a person’s general coolness or badassery, while aura farming is the set of activities one engages in to increase one’s aura. John Wick has aura. In the first movie, when he performs his monologue about getting back in the assassin business, spitting and screaming – that’s aura farming.

    And between the cooperative nature of the game, its rapid-paced combat, and the new characters, abilities, and story, Elden Ring Nightreign has a ton of aura that I’m having a lot of fun farming – just not in the way I expected.

    Elden Ring Nightreign is out now on Xbox, PlayStation, and PC.
    www.theverge.com
  • A trip to the farm where loofahs grow on vines

    If you've ever wondered where loofahs come from, take a trip with us.
     
    Image: Penpak Ngamsathain / Getty Images


    If you’ve spent most of your life under the impression that loofahs are some type of sea sponge and that these scratchy natural scrubbers are the last thing you’d want to use on your body on a daily basis, you’re not alone. In fact, Luffa aegyptiaca (often known as loofah in the U.S.) is the taxonomic name of a species of gourd that grows on land, and it’s a genetic descendant of the wild cucumber. What’s more, if it’s locally grown with minimal processing, it’s plenty soft enough not just for your skin, but for plenty of other applications, too.
    What is a luffa?
    In the States, you’d be excused for not being familiar with this unique plant, as luffa is far more popular in Asia and tropical regions. In fact, very few farmers grow the plant commercially for the American market—there are just two farms in the country and, according to Brooklynn Gamble, farm supervisor at The Luffa Farm in Nipomo, California, both are located in that West Coast state. But the plant isn’t endemic to countries this far north, so cultivating it requires lots of care and attention.
    Luffa plants growing on vines at The Luffa Farm. Image: Courtesy of The Luffa Farm
    Fortunately, luffa farmer Deanne Coon was willing to offer both, which is how The Luffa Farm was born in 2000, after Coon grew the plant as part of a friend’s biology class experiment and then spent nearly two decades experimenting. Thanks to Nipomo’s location in a decidedly non-tropical climate, she had to account for things like cooler seasons (she grows in greenhouses), coastal winds (also greenhouses), and gophers (she grows plants in pots instead of directly in the ground).
    Now semi-retired, she and a team run the small farm peppered with avocado and citrus trees and decorated with quirky custom yard art. They also offer tours during open hours so visitors can learn a little something about luffa.
    Guests saunter through a steamy greenhouse where long green gourds that resemble zucchini hang from trellises in impressive quantities. They learn that while some Asian cultures raise smaller varieties that are green, tender, and edible when young, luffa isn’t popular as a culinary ingredient in the U.S. And when they inquire about why crispy brown gourds are still hanging on the vine, they learn that luffa isn’t harvested until well after you think it’s dead. “When it’s completely brown and dry we cut it off the vine,” Gamble explains.
    Only then, and after it is peeled, will it finally be recognizable as the fibrous exfoliating sponge many know and love.
    In areas of Asia, the luffa fruit is used in culinary dishes. Image: Courtesy of The Luffa Farm
    It’s what’s on the inside that matters
    Getting to that point, however, takes time and unique biological functions that aren’t visible to the naked eye. It takes six to nine months after planting luffa seeds for the gourds to be ready to harvest, Gamble explains. It takes three to four months just for slim green baby gourds to start sprouting from the reaching vines and for the male flowers, which are necessary for pollination, to bloom.
    Once that happens and pollination is complete, the squash are technically edible and ripe for picking. The inner fruit tends to be slimy like okra, so it’s a bit of an acquired taste. However, there are certainly recipes from around the world that incorporate this nutritional veggie.
    But The Luffa Farm isn’t in the business of unpopular produce, so the fruit is left on the vine where it can grow as large and heavy as the trellised vines can handle, Gamble continues. As that happens, the interior plant fibers act as the veins that feed water and nutrients to the seeds, the care of which is the plant’s number one directive. Those veins get thicker and denser to nourish the seeds as the gourd grows.

    When the gourd gets too big—about the size of an oversized zucchini—the vine, which can grow 30 to 40 feet in any direction, cuts off the water supply to the whole fruit in order to redistribute resources to other plants on the vine that are still growing. As the vine sucks the water out and recycles it, the gourd “dries up,” Gamble describes. When that happens, instead of rotting like most other produce, the luffa turns from deep green to yellow to brown and hard.
    At that point, the gourd feels light as air because all the liquid and vegetable matter has dried up, leaving only a fibrous network of cellulose inside the now-hard, shell-like skin. That’s when it’s time to harvest. The skin is cracked open and the seeds, which can be replanted, are shaken out. Harvesters soak the whole gourd in water for five minutes, which rehydrates the thin layer of vegetable residue on the underside and loosens the skin “so it slides right off,” Gamble says.
    What’s left over is an airy, light, spidery network of plant fibers that makes an excellent natural multi-purpose sponge—pliable when dry and even softer when wet. That’s what makes it such an attractive option among skincare enthusiasts.
    Not all luffa are created equal
    If that doesn’t sound at all like the rigid, compressed luffa you see for sale at your local health food store, you’re not wrong. Most luffa are imported, and since they’re a plant, they must be treated beforehand to ensure they won’t transport bugs, disease, or other agricultural blights, Gamble explains. 
    “Those heat treatments in particular are what damage the fibers,” she states. The treatment shrinks the otherwise light and loose cellulose structure and makes the luffa hard, compact, and less pliable. Compromising the structure also makes them more prone to bacterial growth, because they don’t dry out as easily or completely between uses.
    Luffas grown and sold at The Luffa Farm. Image: Courtesy of The Luffa Farm
    Luffa grown in the U.S., like the ones from The Luffa Farm, don’t have to be treated with anything since they’re not imported from overseas. They just get a quick rinse before they’re sold. As a result, they’re softer, more pleasant on skin, more versatile, and longer lasting. One might last up to a year of regular use. Plus, because they’re highly porous, “they don’t create the same breeding ground for bacteria,” Gamble offers.
    A plant with unlimited uses
    But exfoliating isn’t all these plants are good for. On the contrary, Gamble says there are many uses for luffa. Softer varieties can be used as a facial sponge in place of a washcloth. They can even be tossed in the washer for a deep clean, though you should avoid putting them in the dryer. They make excellent dish sponges and pot scrubbers. Gamble uses one on her stainless steel stove. 
    A wet luffa makes quick work of washing your car, too, especially when it comes to scrubbing bugs off your grill, Gamble recommends. The fibers won’t even scratch the finish. They’ve even been used as insulation in mud brick houses and as industrial filters and may have inspired a sunlight-powered porous hydrogel that could potentially purify water. The best part: untreated luffa sponges are compostable, making them an eco-friendly alternative to synthetic sponges.
    “They are so unique as a plant,” Gamble says: a truly multifunctional and sustainable natural product whose uses go far beyond bath-time exfoliation. And yes, it’s one that grows on land, not underwater.
    www.popsci.com
    If you've ever wondered where loofahs come from, take a trip with us.

    Image: Penpak Ngamsathain / Getty Images

    If you’ve spent most of your life under the impression that loofahs are some type of sea sponge, and that these scratchy natural scrubbers are the last thing you’d want to use on your body on a daily basis, you’re not alone. In fact, Luffa aegyptiaca (often known as loofah in the U.S.) is a species of gourd that grows on land, and it’s a genetic descendant of the wild cucumber. What’s more, if it’s locally grown with minimal processing, it’s plenty soft enough not just for your skin, but for plenty of other applications, too.

    What is a luffa?

    In the States, you’d be excused for not being familiar with this unique plant, as luffa is far more popular in Asia and tropical regions. In fact, very few farmers grow the plant commercially for the American market—there are just two farms in the country and, according to Brooklynn Gamble, farm supervisor at The Luffa Farm in Nipomo, California, both are located in that West Coast state. But the plant isn’t endemic to countries this far north, so cultivating it requires lots of care and attention.

    Luffa plants growing on vines at The Luffa Farm. Image: Courtesy of The Luffa Farm

    Fortunately, luffa farmer Deanne Coon was willing to offer both, which is how The Luffa Farm was born in 2000: Coon first grew the plant as part of a friend’s biology class experiment, then spent nearly two decades experimenting. Thanks to Nipomo’s location in a decidedly non-tropical climate, she had to account for things like cooler seasons (she grows in greenhouses), coastal winds (also greenhouses), and gophers (she grows plants in pots instead of directly in the ground). Now semi-retired, she and a team run the small farm, peppered with avocado and citrus trees and decorated with quirky custom yard art.
    They also offer tours during open hours so visitors can learn a little something about luffa. Guests saunter through a steamy greenhouse where long green gourds that resemble zucchini hang from trellises in impressive quantities. They learn that while some Asian cultures raise smaller varieties that are green, tender, and edible when young, the gourd isn’t popular as a culinary ingredient in the U.S. And when they ask why crispy brown gourds are still hanging on the vine, they learn that luffa isn’t harvested until well after you’d think it’s dead. “When it’s completely brown and dry we cut it off the vine,” Gamble explains. Only then, and after it is peeled, will it finally be recognizable as the fibrous exfoliating sponge many know and love.

    In areas of Asia, the luffa fruit is used in culinary dishes. Image: Courtesy of The Luffa Farm

    It’s what’s on the inside that matters

    Getting to that point, however, takes time and unique biological functions that aren’t visible to the naked eye. It takes six to nine months after planting luffa seeds for the gourds to be ready to harvest, Gamble explains (longer in winter, shorter in summer). It takes three to four months just for slim green baby gourds to start sprouting from the reaching vines, and for the male flowers, which are necessary for pollination, to bloom. Once that happens and pollination is complete, the squash are technically edible and ripe for picking. The inner fruit tends to be slimy like okra, so it’s a bit of an acquired taste, though there are certainly recipes from around the world that incorporate this nutritious veggie. But The Luffa Farm isn’t in the business of unpopular produce, so the fruit is left on the vine, where it can grow as large and heavy as the trellised vines can handle, Gamble continues. As that happens, the interior plant fibers act as the veins that feed water and nutrients to the seeds, whose care is the plant’s number one directive.
    Those veins get thicker and denser to nourish the seeds as the gourd grows. When the gourd gets too big—about the size of an oversized zucchini—the vine, which can grow 30 to 40 feet in any direction, cuts off the water supply to the fruit in order to redistribute resources to other gourds on the vine that are still growing. “As the vine sucks the water out and recycles it, [the gourd] dries up,” Gamble describes. When that happens, instead of rotting like most other produce, the luffa turns from deep green to yellow to brown and hard, and the gourd comes to feel light as air because all the liquid and vegetable matter has dried up, leaving only a fibrous network of cellulose inside the now-hard, shell-like skin.

    That’s when it’s time to harvest. The skin is cracked open and the seeds, which can be replanted, are shaken out. Harvesters soak the whole gourd in water for five minutes, which rehydrates the thin layer of vegetable residue on the underside of the skin “so it slides right off,” Gamble says. What’s left over is an airy, light, spidery network of plant fibers that makes an excellent natural multi-purpose sponge—pliable when dry and even softer when wet. That’s what makes it such an attractive option among skincare enthusiasts.

    Not all luffa are created equal

    If that doesn’t sound at all like the rigid, compressed luffa you see for sale at your local health food store, you’re not wrong. Most luffa are imported, and since they’re a plant, they must be treated beforehand to ensure they won’t transport bugs, disease, or other agricultural blights, Gamble explains. “Those heat treatments in particular are what damage the fibers,” she states. The treatment shrinks the otherwise light and loose cellulose structure and makes the luffa hard, compact, and less pliable. Compromising the structure also makes the sponges more prone to bacterial growth, because they don’t dry out as easily or completely between uses.
    Luffas grown and sold at The Luffa Farm. Image: Courtesy of The Luffa Farm

    Luffa grown in the U.S., like the ones from The Luffa Farm, don’t have to be treated with anything, since they’re not imported from overseas. They just get a quick rinse before they’re sold. As a result, they’re softer, more pleasant on skin, more versatile, and longer lasting—one might last through a year of regular use. Plus, because they’re highly porous, “they don’t create the same breeding ground for bacteria,” Gamble offers.

    A plant with unlimited uses

    But exfoliating isn’t all these plants are good for. On the contrary, Gamble says there are many uses for luffa. Softer varieties can be used as a facial sponge in place of a washcloth. They can even be tossed in the washer for a deep clean, though you should avoid putting them in the dryer. They make excellent dish sponges and pot scrubbers; Gamble uses one on her stainless steel stove. A wet luffa makes quick work of washing your car, too, especially when it comes to scrubbing bugs off your grille, Gamble recommends—the fibers won’t even scratch the finish. Luffa have even been used as insulation in mud-brick houses and as industrial filters, and may have inspired a sunlight-powered porous hydrogel that could potentially purify water. The best part: untreated luffa sponges are compostable, making them an eco-friendly alternative to synthetic sponges.

    “They are so unique as a plant,” Gamble says—a truly multifunctional and sustainable natural product whose uses go far beyond bath-time exfoliation. And yes, it’s one that grows on land, not underwater.
  • Google is shrinking Pixel phones’ At a Glance widget

    Google’s new Material 3 Expressive design language includes a welcome surprise for Pixel owners: the mandatory At a Glance home screen widget has shrunk, leaving space for an extra row of apps.

    The new look is included in the latest version of the Android 16 beta. Upon installation, Pixel owners are greeted with a pop-up message on the home screen:

    Enjoy more space for apps

    Good news! Your home screen has a new layout, which means there’s space for more apps & widgets

    The new design both shrinks the At a Glance widget and removes some of the dead space between the other rows, compressing the entire screen. It leaves room for a full extra row of apps below the redesigned widget.

    The bad news is that Google still won’t let you turn off the widget, which is a mandatory part of the Pixel home screen, just like the Google search bar at the bottom. Making it smaller will at least go some way to appeasing Pixel owners who’ve long hoped for the same home screen flexibility as other Android phones.

    Material 3 Expressive is a colorful, bouncy new aesthetic for Android that Google unveiled last week. It was made available in the new Android 16 beta yesterday, and should roll out widely later this year, after the OS update launches in full next month.
    www.theverge.com
  • Link Tank: Lena Dunham Brings ‘Too Much’ to Netflix This Summer

    Lena Dunham–following the success of her 2012 comedy-drama series Girls–is my Roman Empire. Everyone knows the story: the published essays dissecting the sudden end of her long-term relationship with producer Jack Antonoff; the widespread criticisms of Girls for its lack of diversity; the poorly received comments about abortion. We witnessed the public turn on Dunham, branding her one of the most controversial writers of her time. But what was she feeling during that wave of backlash? 
    On July 10, Dunham and her husband, Luis Felber, will offer a loose interpretation of what life was like when they first met in 2021. If it’s anything like her past work, the series will be packed with comedy, emotional honesty and sharp introspection. As Dunham put it in a Variety interview about meeting Felber: “When I met my husband, I was dazzled by just how much baggage two people could bring to the table.”

    The highly anticipated series, Too Much, co-written by Dunham and Felber, premieres on Netflix on July 10. 
    “Co-created by Dunham and Luis Felber, Too Much centers on Jessica (Meg Stalter), a New York workaholic in her mid-thirties, reeling from a broken relationship that she thought would last forever and slowly isolating everyone she knows. When every block in New York tells a story of her own bad behaviour, the only solution is to take a job in London, where she plans to live a life of solitude like a Brontë sister. But when she meets Felix (Will Sharpe) – a walking series of red flags – she finds that their unusual connection is impossible to ignore, even as it creates more problems than it solves. Now they have to ask themselves: do Americans and Brits actually speak the same language?”

    Read more at Deadline
    A period romance about two young artists who fall in love in Paris just before becoming recognized as cultural icons – yes, please; plus, the Bill Pohlad-directed project, Miles and Juliette, will be produced by none other than Mick Jagger – double yes!
    Just in time for the recent wave of buzz surrounding new movies during the Cannes Film Festival, Damson Idris, who is currently filming opposite Brad Pitt in Paramount’s upcoming fantasy film Children of Blood and Bone, has been set to play Miles Davis, alongside Anamaria Vartolomei as Juliette Gréco in Pohlad’s latest project. 
    “According to the project’s description, Miles & Juliette follows ‘22-year-old Miles Davis (Idris) on a transformative trip to Paris in 1949, where he falls into a passionate romance with Juliette Gréco (Vartolomei), the French singer, actress, and Left Bank icon. What begins as an intimate affair blossoms into a profound connection between two young artists — just before they became cultural legends.’”
    Read more at The Hollywood Reporter
    On Sunday, Sony Pictures Television secured global distribution rights to Jennifer Ames and Steve Turner’s relationship comedy series, The Miniature Wife.

    The series features a star-studded cast of regulars and recurring characters, with Elizabeth Banks and Emmy winner Matthew Macfadyen in the leading roles. You’re probably thinking: What do Ames and Turner mean by the miniature wife? The Media Res-produced comedy series is actually based on author Manuel Gonzalez’s short story of the same name, about a man who accidentally shrinks his wife.


    “The relationship comedy series explores the shifting power dynamics between spouses as a technological mishap triggers the ultimate marital crisis, pitting them against each other in a battle for dominance. It will air on Peacock in the U.S.” 
    Read more at Variety
    Tuesday marked the start of the 2025 Cannes Film Festival. Long-awaited projects from Wes Anderson and Ari Aster are set to premiere, as well as six female-directed films in the Official Competition.
    Last year, Sean Baker snagged the Palme d’Or, the highest prize, awarded to the director of the Best Feature Film of the Official Competition, for Anora, which later won the Academy Award for Best Picture on March 2. Who will win this year’s Palme d’Or, and will the films featured at Cannes be an early indicator for the 2026 awards season, as they were last year? We’ll have to see, but for now, even if you can’t attend the festival, keep a close eye on the daily happenings in Cannes over the next two weeks.
    “Pundits complained last year’s Cannes was a light affair, but that had to do with the bottleneck created by the strikes, for one. Many auteurs return to the Croisette this year to make for a highly anticipated festival.”

    Read more at IndieWire
    The film that I’m the most excited to hear about following its premiere at the 2025 Cannes Film Festival is by far Ethan Coen’s upcoming dark comedy, Honey Don’t! The film follows Margaret Qualley as Honey O’Donahue, a small-town private investigator, who looks into strange deaths tied to a mysterious church. Who’s the priest of that deadly church? Captain America, of course! 
    Though it’s not in competition, the film will premiere in Cannes on the final day of the festival, as part of the midnight screenings. Though I’ll have to wait until Aug. 22 to see the dark comedy in theaters, I have a feeling there will be a lot of buzz surrounding the film following its premiere. 
    “Once again starring Margaret Qualley, this time as a lesbian private investigator on the trail of a cultist played by former Captain America Chris Evans, Ethan Coen’s latest, co-written alongside longtime collaborator and wife Tricia Cooke, looks to be a dark comedy through and through, firmly of a piece with Drive-Away Dolls.” 
    Read more at Empire
    Watch the trailer of HONEY, DON’T! below:
    www.denofgeek.com
  • AI for grown-ups

    On an unremarkable Monday last February, Andrej Karpathy fired off a tweet that gave the internet its new favorite buzz-phrase: vibe coding. Within hours, people watched tools like Bolt, v0, and Lovable conjure apps from mock-ups that were never formally designed or developed. The internet cheered—speed looks spectacular in demo reels—but more senior teams were quietly wincing as AI began to add technical debt to large codebases at previously impossible rates.

    Why demo-first AI fails mature teams

    Here’s how senior designers, developers, and marketers feel the pain.

    Designers: Prefab design systems ≠ your design system

    Today’s one-click generators choose their own colors, border-radii, and fonts. The instant they collide with a house style they don’t recognize, they hard-code new hex values that overwrite your brand tokens, throw off the grid, and leave designers sending screenshots of mismatched styles to the bug tracker. When you draw a picture, you spend your time carefully crafting the shapes and lines; great design comes from consistency of intention. Today’s AI designers give you a page full of chicken scratches, an eraser, and a “good luck.”

    Developers: One-shot codegen ≠ production code

    A “fully working” React page arrives as a single 1,200-line component—no tests, no accessibility tags, no separation of concerns. The diff looks like a CVS receipt. The dev who merges it now basically owns a tarball that will resist every future refactor. The code feels like inheriting a junior’s side project, except the junior in this case is an LLM that never sleeps and can’t be taught. Clean, maintainable code is measured by how many lines you can still cut; without context from your existing codebase or a lot of fine-tuning, AI’s tendency is to be too verbose.

    Marketers: Fake data ≠ live experiments

    Marketing can now crank out a landing page in fifteen minutes, but every testimonial, price, and CTA is still lorem ipsum. Wiring the page to the CMS—and to analytics—means rewriting half the markup by hand.
    Every 10x sprint lands on an engineering backlog. You can generate pages, sure, but you’re gonna have to wait on the revenue.

    What’s left for us?

    Call it the 80/20 hangover: AI generates the first 80% of the job in 20% of the time… then teams grind through the next 80% of the time patching the final 20% of the work. It’s the perfect inversion of healthy collaboration: humans drown in drudgery and plead with the machine to handle the craft.

    But here’s the thing: the lesson isn’t that AI is snake oil; it’s that big-team problems aren’t solved by fast AI alone. Sure, “make me a meal-planning app” is a fun toy, but as we’ve seen, speed without fidelity just moves the bottleneck downstream. What we need is to respect the craft already living in your components, tokens, data, and tests. We need AI for grown-ups.

    The two-sided contract of AI for grown-ups

    Here’s the proposal: what if we started building AI with competent teams in mind? For that kind of tooling, two mindset shifts need to happen—one from the builders of AI software and one from the professionals who use it.

    Builders need to…

    Respect the stack. Every pixel, prop, token, schema, and test that already lives in a repo becomes the rails, not a suggestion. Professionals should be able to easily predict what AI outputs will look like.
    Stay in their lane. Instead of inventing Yet-Another-Canvas, the tooling embeds itself in the software the best teams already use: Figma, IDEs, headless CMSes, etc. Professionals work hard to choose their tools, and AI alone is not a compelling enough reason to switch.
    Expose control. Codegen happens in daylight—via CLI hooks, readable diffs, and human review gates—so senior engineers can keep the quality bar where it belongs. Professionals are smart enough to handle the machine, so long as it shows them what it’s doing.

    It’s not up to the professional world to keep perfect pace with the runaway train of AI. It’s the tool builders’ responsibility to stop breaking so much stuff.

    Professionals need to…

    Feed the context. Put in the work to document processes, map prototypes to real components, hand over design tokens, and write the first round of tests. Give the model a fighting chance at fidelity. Builders will use this to make AI generations far more deterministic.

    Stay accountable. A merged PR still carries a human name on the line for the most intricate parts of the craft: UX polish, performance budgets, sparkling copy, etc. Builders don’t have to design all-star AI; they can focus on making it consistent at the grunt work.

    Recognize AI’s limits. Designers, developers, and marketers use AI primarily as the translation layer between deeply skillful fields. Builders don’t need to make AI that replaces teams, but rather AI that fosters communication in the handoffs.

    It’s not up to the builders to anticipate every team’s exact use case. Instead, they can trust that professionals will rise to the challenge of adapting AI to their needs.

    How AI for grown-ups works on a given Monday

    Well, that’s all great: people focus on people things; AI focuses on AI things.
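    The "expose control" half of the contract can be sketched as a pre-merge gate: generated diffs pass through readable, budget-enforcing checks before a human ever reviews them. Everything below (the `Diff` shape, thresholds, and rules) is invented for illustration, not any tool's real API:

```typescript
// Hypothetical review gate for AI-generated diffs: enforce budgets in daylight,
// before human review. All shapes, thresholds, and rule names are invented.

interface Diff {
  filesTouched: string[];
  linesAdded: number;
  hasTests: boolean;
}

interface GateResult {
  passed: boolean;
  reasons: string[];
}

function reviewGate(diff: Diff, maxLines = 300): GateResult {
  const reasons: string[] = [];
  if (diff.linesAdded > maxLines) {
    reasons.push(`diff too large: ${diff.linesAdded} > ${maxLines} lines`);
  }
  if (!diff.hasTests) {
    reasons.push("no test files in diff");
  }
  if (diff.filesTouched.some((f) => f.includes("tokens"))) {
    reasons.push("design tokens modified: needs design sign-off");
  }
  return { passed: reasons.length === 0, reasons };
}

// A 1,200-line one-shot component with no tests never reaches a human reviewer.
const oneShot = { filesTouched: ["src/Page.tsx"], linesAdded: 1200, hasTests: false };
console.log(reviewGate(oneShot)); // fails with two readable reasons

// A small, tested, scoped diff sails through to review.
const scoped = {
  filesTouched: ["src/Hero.tsx", "src/Hero.test.tsx"],
  linesAdded: 120,
  hasTests: true,
};
console.log(reviewGate(scoped)); // passes
```

    The gate doesn't make the AI smarter; it just guarantees that whatever the machine produces arrives in reviewable, bounded pieces.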
    But what does it look like in practice? Let’s take the perspective of our three personas again—designers, developers, and marketers—and see what an ideal world looks like for them.

    Designers, still in Figma

    A designer adjusts the corner radius on the hero card and watches the live hand-off panel flag the exact React prop it will touch. A token audit bot pings him only if the value breaks the design system—no more screenshot spreadsheets. Freed from red-lining, he spends the afternoon nudging micro-interactions and pairing with motion design, really making everything sing. Way more time on polish; no more token-drift bugs filed.

    Developers, still in the IDE

    A developer pulls the latest “design-to-code” PR: it’s already split into sensible components, its test stubs are green-lit, and the diff is small enough to review over coffee. She runs the performance benchmark—the numbers hold, thanks to preset budgets the generator never crosses. With the boilerplate gone, she dives into accessible keyboard flows and edge-case logic that actually moves the product forward. Review cycles on layout bugs drop by half with AI taking over scaffolding.

    Marketers, still in the CMS

    A marketer duplicates last week’s landing page variant, swaps the headline copy, and clicks “Stage Experiment.” The AI wires up analytics, snapshots the control, and opens a PR tagged for growth-team review—no Jira ticket, no dev backlog delay. They schedule the A/B test in the same dashboard and spend the saved hour crafting social hooks for the launch. Campaign velocity doubles, while engineering time spent on “tiny copy changes” shrinks to near zero.

    Same tools, less tedium, deeper ownership. Each role feeds context to the system, stays accountable, and lets the AI do the grunt work—that’s the contract in action. Keep your Cursor, keep your ChatGPT—the glue-layer AI just has to play nice with them while sitting squarely in the team’s shared stack.
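    The determinism these personas rely on typically comes from an explicit mapping between design elements and real components. A hypothetical sketch of that idea (the map format, node names, and emit logic are invented here, not Builder's actual format):

```typescript
// Hypothetical design-to-code component map: each design node name resolves to a
// real, version-controlled component, so codegen emits imports from the repo
// instead of look-alike markup. All names and the format are invented.

const componentMap: Record<string, { importPath: string; component: string }> = {
  "Button/Primary": { importPath: "@acme/ui", component: "PrimaryButton" },
  "Card/Hero": { importPath: "@acme/ui", component: "HeroCard" },
};

// Emit code for one design node: a mapped node becomes the real component;
// an unmapped one falls back to generated markup that is flagged for review.
function emitNode(designNodeName: string): string {
  const mapped = componentMap[designNodeName];
  if (mapped) {
    return `<${mapped.component} />  // from ${mapped.importPath}`;
  }
  return `<div data-unmapped="${designNodeName}" />  // TODO: map or review`;
}

console.log(emitNode("Button/Primary")); // real component from the repo
console.log(emitNode("Banner/Promo"));   // unmapped: flagged for human review
```

    Because the map is data, every export that touches a mapped node is predictable: same input node, same emitted component, every time.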
    How we’re building toward AI for grown-ups

    So, how close are we to that perfect Monday? At Builder, we find ourselves in a unique position to help, and we’ve been pivoting to meet the demand, especially for large teams.

    A tiny history lesson

    Builder launched in 2019 as a headless CMS plus visual editor whose goal was simple: let developers surface their own JavaScript framework components, and then let non-developers arrange those components without touching code. To pull that off, we built our editor on Mitosis, an open-source layer that describes components once and compiles them to whatever JS framework a team already runs.

    Because of that early direction, three “grown-up” parts of our product were firmly in place before the generative-AI wave arrived:

    Fully featured visual editing: Our Webflow-like editor creates space for designers and marketers to tweak components and pages that compile to any JS framework.
    Deterministic component mapping: Every Figma frame and its design tokens can round-trip with a real, version-controlled component—not a look-alike snippet.
    A data source with real content: The CMS holds marketing content, sure, but also all the data engineers need for accurate UI states.

    So when large language models became practical, we didn’t bolt AI onto a blank slate; we layered it onto an editor, a mapper, and a CMS that were already respecting the stack. That head start is why our current AI features can focus on removing drudgery instead of reinventing fundamentals.

    What we’ve made so far

    Our product works as a set of incrementally adoptable layers—no big-bang rewrites needed.

    Visual Editor → Code. Think Bolt or v0, but with Webflow-level tweakability. Prompt the AI to draft a page—or paste any URL for an instant, editable clone—then fine-tune any class, token, or breakpoint by hand. Drop in production-ready components from 21st.dev (or your own repo) and ship designs that respect your CSS, not ours.

    Figma → Visual Editor (and Code). Install the free Figma Visual Copilot plug-in, draw your frame, and click Export. Copilot converts the frame’s exact Auto-Layout geometry into clean, responsive framework code that can drop straight into the repo or open in Builder’s visual editor for tweaks. Designers still hand over a Figma link; developers run one CLI command to scaffold production-ready code—no guessing, no style drift.

    Repo → Visual Editor → Repo PR [coming soon]. We’ll shortly announce a product that lets anyone import any GitHub repo into the Visual Editor and make changes that get auto-sent as a PR. Marketers and designers can file engineering tickets with the code already in good shape.

    Component Mapping. Tell Builder once which Figma button matches which code button, click “Publish,” and you’re set. From then on, every design export that uses that component drops the real one from your repo into the generated diff—props, tokens, everything—so code reviews focus on ideas, not mismatched styles.

    Builder Publish. It’s not just a patch for lorem ipsum; it’s your entire headless CMS.
    Wire any page to real-time content and analytics so marketing can run A/B tests without tapping devs—and get the full schema, versioning, and single-source-of-truth perks that come with a modern CMS, all under the same roof.

    Where we’re still working

    We’re close, not finished. Next on our list:

    Less manual context. Auto-detect more tokens and components so mapping feels like autocomplete, not data entry.
    Deeper control. Let power users crack open every AI draft—props, tests, perf budgets—without leaving the editor.
    Broader design-system support. Shadcn today, your bespoke kit tomorrow. Mapping should be one click, not a weekend.

    That “perfect Monday” vision is where we’re headed, and our roadmap will get us there one feature at a time. We’re definitely interested in your feedback: what gaps do you still feel between design, code, and marketing, and where could an “AI for grown-ups” save you the most pain? Drop your thoughts in the comments or tag us on our socials.

    Turning the 80/20 right-side-up

    If Software 2.0 taught us that networks can write code, and the Software 3.0 hype reminds us they can write whole interfaces, then the middle road we’ve walked today—call it Software 2.5—insists that learned programs still deserve linting, budgeting, and daylight. Boring constraints create extraordinary freedom.

    Speed is intoxicating, but alone it flips the 80/20 rule on its head, leaving humans elbow-deep in cleanup. AI for grown-ups reverses that trade. Tool builders promise rails, not rabbit holes. Professionals promise context and accountability. Every mapped component, every token lock, every human review makes the next generation more predictable—and predictability is what compounds into real velocity.

    We don’t need to be dragged kicking and screaming into the new world. We’re perfectly capable of walking there at our own pace, spending our hours on craft no model can fake.

    Introducing Visual Copilot: convert Figma designs to high-quality code in a single click. Try Visual Copilot · Get a demo
    #grownups
    AI for grown-ups
    On an unremarkable Monday last February, Andrej Karpathy fired off a tweet that gave the internet its new favorite buzz-phrase: vibe coding.Within hours, people watched tools like Bolt, v0, and Lovable conjure apps from mock-ups never designed or developed.The internet cheered—speed looks spectacular in demo reels—but more senior groups were quietly wincing as AI began to add technical debt to large codebases at previously impossible rates.Why demo-first AI fails mature teamsHere’s how senior designers, developers, and marketers feel the pain.Designers: Prefab design systems ≠ your design systemToday’s one-click generators choose their own colors, border-radii, and fonts. The instant they collide with a house style they don’t recognize, they hard-code new hex values that overwrite your brand tokens, throw off the grid, and leave designers sending screenshots of mismatched styles to the bug tracker.When you draw a picture, you spend your time carefully crafting the shapes and lines. Great design comes from consistency of intention.Today’s AI designers give you a page full of chicken scratches, an eraser, and a “good luck.”Developers: One-shot codegen ≠ production code.A “fully-working” React page arrives as a single 1,200-line component—no tests, no accessibility tags, no separation of concerns. The diff looks like a CVS receipt.The dev who merges it now basically owns a tarball that will resist every future refactor. Code feels like inheriting a junior’s side project, but the junior in this case is an LLM that never sleeps and can’t be taught.Clean, maintainable code is measured by how many lines you can still cut. Without context of your existing codebase or a lot of fine-tuning, AI’s tendency is to be too verbose.Marketers: Fake data ≠ live experiments.Marketing can now crank out a landing page in fifteen minutes, but every testimonial, price, and CTA is still lorem ipsum.Wiring the page to the CMS—and to analytics—means rewriting half the markup by hand. 
Every 10x sprint lands on an engineering backlog.You can generate pages, sure, but you’re gonna have to wait on the revenue.What’s left for us?Call it all the 80/20 hangover: AI generates the first 80% of the job in 20% of the time… then teams grind through the next 80% of the time patching the final 20% of the work.It’s the perfect inversion of healthy collaboration. Humans drown in drudgery and plead with the machine to handle the craft. But here’s the thing: The lesson isn’t that AI is snake oil; it’s that big team problems aren’t solved by fast AI alone.Sure, “Make me a meal-planning app” is a fun toy, but as we’ve seen, speed without fidelity just moves the bottleneck downstream.What we need is to respect the craft already living in your components, tokens, data, and tests. We need AI for grown-ups.The two-sided contract of AI for grown-upsHere’s the proposal: what if we started building AI with competent teams in mind?For that kind of tooling, two mindset shifts need to happen, both from the builders of AI software and the professionals that use it.Builders need to…Respect the stack. Every pixel, prop, token, schema, and test that already lives in a repo becomes the rails, not a suggestion. Professionals should be able to easily predict what AI outputs will look like. Stay in their lane. Instead of inventing Yet-Another-Canvas, the tooling embeds itself in the software the best teams already use: Figma, IDEs, headless CMSes, etc. Professionals work hard to choose their tools, and AI alone is not a compelling enough reason to switch. Expose control. Codegen happens in daylight—via CLI hooks, readable diffs, and human review gates—so senior engineers can keep the quality bar where it belongs. Professionals are smart enough to handle the machine, so long as it shows them what it’s doing.It’s not up to the professional world to keep perfect pace with the runaway train of AI. 
It’s the tool builders’ responsibility to stop breaking so much stuff.Professionals need to…Feed the context. Put in the work to document processes, map prototypes to real components, hand over design tokens, and write the first round of tests. Give the model a fighting chance at fidelity. Builders will use this to make AI generations far more deterministic.Stay accountable. A merged PR is still a human name on the line for the most intricate of crafts: UX polish, performance budgets, sparkling copy, etc. Builders don’t have to design all-star AI; they can focus on making it consistent at the grunt-work.Recognize AI limits. Designers, developers, and marketers use AI primarily as the translation layer between deeply skillful fields. Builders don’t need to make AI that replaces teams, but rather AI that fosters communication in the handoffs.It’s not up to the Builders to anticipate every team’s exact use case. Instead, they can trust that professionals will rise to the challenge of adapting AI to their needs. How AI for grown-ups works on a given MondayWell, that’s all great: People focus on people things; AI focuses on AI things. But what does it look like in practice?Let’s take the perspective of our three personas again—designers, developers, and marketers—and see what an ideal world looks like for them.Designers, still in FigmaA designer adjusts the corner radius on the hero card and watches the live hand-off panel flag the exact React prop it will touch.A token audit bot pings him only if the value breaks the design system—no more screenshot spreadsheets.Freed from red-lining, he spends the afternoon nudging micro-interactions and pairing with motion design, really making everything sing.Way more time on polish. 
No more token-drift bugs filed.Developers, still in the IDEA developer pulls the latest “design-to-code” PR: it’s already split into sensible components, test stubs green-lit, and the diff is small enough to review over coffee.She runs the performance benchmark—numbers hold, thanks to preset budgets the generator never crosses.With boilerplate gone, she dives into accessible keyboard flows and edge-case logic that actually moves the product forward.Review cycles on layout bugs drop by half with AI taking over scaffolding.Marketers, still in the CMSA marketer duplicates last week’s landing page variant, swaps headline copy, and clicks “Stage Experiment.”The AI wires analytics, snapshots the control, and opens a PR tagged for growth-team review—no Jira ticket, no dev backlog delay.They schedule the A/B test in the same dashboard and spend their saved hour crafting social hooks for the launch.Campaign velocity doubles, while engineering time spent on “tiny copy changes” shrinks to near-zero.Same tools, less tedium, deeper ownership. Each role feeds context to the system, stays accountable, and lets the AI do the grunt work—that’s the contract in action.Keep your Cursor, keep your ChatGPT—the glue-layer AI just has to play nice with them while sitting squarely in the team’s shared stack. 
How we’re building toward AI for grown-upsSo, how close are we to that perfect Monday?At Builder, we find ourselves in a unique position to help, and we’ve been pivoting to meet the demand, especially for large teams.A tiny history lessonBuilder launched in 2019 as a headless CMS plus visual editor whose goal was simple: let developers surface their own JavaScript framework components, and then let non-developers arrange those components without touching code.To pull that off, we built our editor on Mitosis, an open-source layer that describes components once and compiles them to whatever JS framework a team already runs.Because of that early direction, three “grown-up parts” of our product were firmly in place before the generative-AI wave arrived:Fully-featured visual editing: Our Webflow-like editor creates space for designers and marketers to tweak components and pages that compile to any JS framework. Deterministic component mapping: Every Figma frame and its design tokens can round-trip with a real, version-controlled component—not a look-alike snippet. A data source with real content: The CMS holds marketing content, sure, but also all the data engineers need for accurate UI states.So when large language models became practical, we didn’t bolt AI onto a blank slate; we layered it onto an editor, a mapper, and a CMS that were already respecting the stack.That head-start is why our current AI features can focus on removing drudgery instead of reinventing fundamentals.What we’ve made so farOur product works as a bunch of incrementally adoptable layers. No big bang rewrites needed.Visual Editor → Code. Think Bolt or v0, but with Webflow-level tweakability. Prompt the AI to draft a page—or paste any URL for an instant, editable clone—then fine-tune any class, token, or breakpoint by hand. Drop in production-ready components from 21st.devand ship designs that respect your CSS, not ours.Figma → Visual Editor. 
Install the free Figma Visual Copilot plug-in, draw your frame, and click Export. Copilot converts the frame’s exact Auto-Layout geometry into clean, responsive framework code that can drop straight into the repo or open in Builder’s visual editor for tweaks. Designers still hand over a Figma link; developers run one CLI command to scaffold production-ready code—no guessing, no style drift.Repo → Visual Editor → Repo PR. We’ll be announcing a product shortly that allows anyone to import any GitHub repo to the Visual Editor and make changes that get auto-sent as a PR. Marketers and designers can file engineering tickets with the code already in nice shape.Component Mapping. Tell Builder which Figma button matches which code button once, click “Publish,” and you’re set. From then on, every design export that uses that component drops the real one from your repo into the generated diff—props, tokens, everything—so code reviews focus on ideas, not mismatched styles.Builder Publish: It’s not just a patch for lorem ipsum; it’s your entire headless CMS. Wire any page to real-time content and analytics so marketing can run A/B tests without tapping devs—and get the full schema, versioning, and single-source-of-truth perks that come with a modern CMS, all under the same roof.Where we’re still workingWe’re close, not finished. Next on our list:Less manual context. Auto-detect more tokens and components so mapping feels like autocomplete, not data entry. Deeper control. Let power users crack open every AI draft—props, tests, perf budgets—without leaving the editor. Broader design-system support. Shadcn today, your bespoke kit tomorrow. 
Mapping should be one click, not a weekend.That “perfect Monday” vision is where we’re headed, and our roadmap will get us there one feature at a time.We’re definitely interested in your feedback: What gaps do you still feel between design, code, and marketing, and where could an “AI for grown-ups” save you the most pain?Drop your thoughts in the comments or tag us on our socials.Turning the 80/20 right-side-upIf Software 2.0 taught us that networks can write code, and the Software 3.0 hype reminds us they can write whole interfaces, then the middle road we’ve walked today—call it Software 2.5—insists that learned programs still deserve linting, budgeting, and daylight.Boring constraints create extraordinary freedom.Speed is intoxicating, but alone it flips the 80/20 rule on its head, leaving humans elbow-deep in cleanup. AI for grown-ups reverses that trade.Tool builders promise rails, not rabbit holes. Professionals promise context and accountability. Every mapped component, every token lock, every human review makes the next generation more predictable—and predictability is what compounds into real velocity.We don’t need to be dragged kicking and screaming into the new world. We’re perfectly capable of walking there at our own pace, spending our hours on craft no model can fake. Introducing Visual Copilot: convert Figma designs to high quality code in a single click.Try Visual CopilotGet a demo #grownups
    AI for grown-ups
    www.builder.io
    On an unremarkable Monday last February, Andrej Karpathy fired off a tweet that gave the internet its new favorite buzz-phrase: vibe coding.Within hours, people watched tools like Bolt, v0, and Lovable conjure apps from mock-ups never designed or developed.The internet cheered—speed looks spectacular in demo reels—but more senior groups were quietly wincing as AI began to add technical debt to large codebases at previously impossible rates.Why demo-first AI fails mature teamsHere’s how senior designers, developers, and marketers feel the pain.Designers: Prefab design systems ≠ your design systemToday’s one-click generators choose their own colors, border-radii, and fonts. The instant they collide with a house style they don’t recognize, they hard-code new hex values that overwrite your brand tokens, throw off the grid, and leave designers sending screenshots of mismatched styles to the bug tracker.When you draw a picture, you spend your time carefully crafting the shapes and lines. Great design comes from consistency of intention.Today’s AI designers give you a page full of chicken scratches, an eraser, and a “good luck.”Developers: One-shot codegen ≠ production code.A “fully-working” React page arrives as a single 1,200-line component—no tests, no accessibility tags, no separation of concerns. The diff looks like a CVS receipt.The dev who merges it now basically owns a tarball that will resist every future refactor. Code feels like inheriting a junior’s side project, but the junior in this case is an LLM that never sleeps and can’t be taught.Clean, maintainable code is measured by how many lines you can still cut. Without context of your existing codebase or a lot of fine-tuning, AI’s tendency is to be too verbose.Marketers: Fake data ≠ live experiments.Marketing can now crank out a landing page in fifteen minutes, but every testimonial, price, and CTA is still lorem ipsum.Wiring the page to the CMS—and to analytics—means rewriting half the markup by hand. 
Every 10x sprint lands on an engineering backlog.You can generate pages, sure, but you’re gonna have to wait on the revenue.What’s left for us?Call it all the 80/20 hangover: AI generates the first 80% of the job in 20% of the time… then teams grind through the next 80% of the time patching the final 20% of the work.It’s the perfect inversion of healthy collaboration. Humans drown in drudgery and plead with the machine to handle the craft. But here’s the thing: The lesson isn’t that AI is snake oil; it’s that big team problems aren’t solved by fast AI alone.Sure, “Make me a meal-planning app” is a fun toy, but as we’ve seen, speed without fidelity just moves the bottleneck downstream.What we need is to respect the craft already living in your components, tokens, data, and tests. We need AI for grown-ups.The two-sided contract of AI for grown-upsHere’s the proposal: what if we started building AI with competent teams in mind?For that kind of tooling, two mindset shifts need to happen, both from the builders of AI software and the professionals that use it.Builders need to…Respect the stack. Every pixel, prop, token, schema, and test that already lives in a repo becomes the rails, not a suggestion. Professionals should be able to easily predict what AI outputs will look like. Stay in their lane. Instead of inventing Yet-Another-Canvas, the tooling embeds itself in the software the best teams already use: Figma, IDEs, headless CMSes, etc. Professionals work hard to choose their tools, and AI alone is not a compelling enough reason to switch. Expose control. Codegen happens in daylight—via CLI hooks, readable diffs, and human review gates—so senior engineers can keep the quality bar where it belongs. Professionals are smart enough to handle the machine, so long as it shows them what it’s doing.It’s not up to the professional world to keep perfect pace with the runaway train of AI. 
It’s the tool builders’ responsibility to stop breaking so much stuff.Professionals need to…Feed the context. Put in the work to document processes, map prototypes to real components, hand over design tokens, and write the first round of tests. Give the model a fighting chance at fidelity. Builders will use this to make AI generations far more deterministic.Stay accountable. A merged PR is still a human name on the line for the most intricate of crafts: UX polish, performance budgets, sparkling copy, etc. Builders don’t have to design all-star AI; they can focus on making it consistent at the grunt-work.Recognize AI limits. Designers, developers, and marketers use AI primarily as the translation layer between deeply skillful fields. Builders don’t need to make AI that replaces teams, but rather AI that fosters communication in the handoffs.It’s not up to the Builders to anticipate every team’s exact use case. Instead, they can trust that professionals will rise to the challenge of adapting AI to their needs. How AI for grown-ups works on a given MondayWell, that’s all great: People focus on people things; AI focuses on AI things. But what does it look like in practice?Let’s take the perspective of our three personas again—designers, developers, and marketers—and see what an ideal world looks like for them.Designers, still in FigmaA designer adjusts the corner radius on the hero card and watches the live hand-off panel flag the exact React prop it will touch.A token audit bot pings him only if the value breaks the design system—no more screenshot spreadsheets.Freed from red-lining, he spends the afternoon nudging micro-interactions and pairing with motion design, really making everything sing.Way more time on polish. 
No more token-drift bugs filed.Developers, still in the IDEA developer pulls the latest “design-to-code” PR: it’s already split into sensible components, test stubs green-lit, and the diff is small enough to review over coffee.She runs the performance benchmark—numbers hold, thanks to preset budgets the generator never crosses.With boilerplate gone, she dives into accessible keyboard flows and edge-case logic that actually moves the product forward.Review cycles on layout bugs drop by half with AI taking over scaffolding.Marketers, still in the CMSA marketer duplicates last week’s landing page variant, swaps headline copy, and clicks “Stage Experiment.”The AI wires analytics, snapshots the control, and opens a PR tagged for growth-team review—no Jira ticket, no dev backlog delay.They schedule the A/B test in the same dashboard and spend their saved hour crafting social hooks for the launch.Campaign velocity doubles, while engineering time spent on “tiny copy changes” shrinks to near-zero.Same tools, less tedium, deeper ownership. Each role feeds context to the system, stays accountable, and lets the AI do the grunt work—that’s the contract in action.Keep your Cursor, keep your ChatGPT—the glue-layer AI just has to play nice with them while sitting squarely in the team’s shared stack. 
How we’re building toward AI for grown-upsSo, how close are we to that perfect Monday?At Builder, we find ourselves in a unique position to help, and we’ve been pivoting to meet the demand, especially for large teams.A tiny history lessonBuilder launched in 2019 as a headless CMS plus visual editor whose goal was simple: let developers surface their own JavaScript framework components, and then let non-developers arrange those components without touching code.To pull that off, we built our editor on Mitosis, an open-source layer that describes components once and compiles them to whatever JS framework a team already runs.Because of that early direction, three “grown-up parts” of our product were firmly in place before the generative-AI wave arrived:Fully-featured visual editing: Our Webflow-like editor creates space for designers and marketers to tweak components and pages that compile to any JS framework. Deterministic component mapping: Every Figma frame and its design tokens can round-trip with a real, version-controlled component—not a look-alike snippet. A data source with real content: The CMS holds marketing content, sure, but also all the data engineers need for accurate UI states.So when large language models became practical, we didn’t bolt AI onto a blank slate; we layered it onto an editor, a mapper, and a CMS that were already respecting the stack.That head-start is why our current AI features can focus on removing drudgery instead of reinventing fundamentals.What we’ve made so farOur product works as a bunch of incrementally adoptable layers. No big bang rewrites needed.Visual Editor → Code. Think Bolt or v0, but with Webflow-level tweakability. Prompt the AI to draft a page—or paste any URL for an instant, editable clone—then fine-tune any class, token, or breakpoint by hand. Drop in production-ready components from 21st.dev (or your own repo) and ship designs that respect your CSS, not ours.Figma → Visual Editor (and Code). 
Install the free Figma Visual Copilot plug-in, draw your frame, and click Export. Copilot converts the frame’s exact Auto-Layout geometry into clean, responsive framework code that can drop straight into the repo or open in Builder’s visual editor for tweaks. Designers still hand over a Figma link; developers run one CLI command to scaffold production-ready code—no guessing, no style drift.Repo → Visual Editor → Repo PR [coming soon]. We’ll be announcing a product shortly that allows anyone to import any GitHub repo to the Visual Editor and make changes that get auto-sent as a PR. Marketers and designers can file engineering tickets with the code already in nice shape.Component Mapping. Tell Builder which Figma button matches which code button once, click “Publish,” and you’re set. From then on, every design export that uses that component drops the real one from your repo into the generated diff—props, tokens, everything—so code reviews focus on ideas, not mismatched styles.Builder Publish: It’s not just a patch for lorem ipsum; it’s your entire headless CMS. Wire any page to real-time content and analytics so marketing can run A/B tests without tapping devs—and get the full schema, versioning, and single-source-of-truth perks that come with a modern CMS, all under the same roof.Where we’re still workingWe’re close, not finished. Next on our list:Less manual context. Auto-detect more tokens and components so mapping feels like autocomplete, not data entry. Deeper control. Let power users crack open every AI draft—props, tests, perf budgets—without leaving the editor. Broader design-system support. Shadcn today, your bespoke kit tomorrow. 
Mapping should be one click, not a weekend.

That “perfect Monday” vision is where we’re headed, and our roadmap will get us there one feature at a time. We’re definitely interested in your feedback: what gaps do you still feel between design, code, and marketing, and where could an “AI for grown-ups” save you the most pain? Drop your thoughts in the comments or tag us on our socials (see footer for links).

Turning the 80/20 right-side-up

If Software 2.0 taught us that networks can write code, and the Software 3.0 hype reminds us they can write whole interfaces, then the middle road we’ve walked today – call it Software 2.5 – insists that learned programs still deserve linting, budgeting, and daylight. Boring constraints create extraordinary freedom.

Speed is intoxicating, but alone it flips the 80/20 rule on its head, leaving humans elbow-deep in cleanup. AI for grown-ups reverses that trade. Tool builders promise rails, not rabbit holes. Professionals promise context and accountability. Every mapped component, every token lock, every human review makes the next generation more predictable – and predictability is what compounds into real velocity.

We don’t need to be dragged kicking and screaming into the new world. We’re perfectly capable of walking there at our own pace, spending our hours on craft no model can fake.

Introducing Visual Copilot: convert Figma designs to high quality code in a single click. Try Visual Copilot · Get a demo
  • Strength in Numbers: Ensembling Models with Bagging and Boosting

    Bagging and boosting are two powerful ensemble techniques in machine learning – they are must-knows for data scientists! After reading this article, you are going to have a solid understanding of how bagging and boosting work and when to use them. We’ll cover the following topics, relying heavily on examples to give hands-on illustration of the key concepts:

    How Ensembling helps create powerful models

    Bagging: Adding stability to ML models

    Boosting: Reducing bias in weak learners

    Bagging vs. Boosting – when to use each and why

    Creating powerful models with ensembling

    In Machine Learning, ensembling is a broad term that refers to any technique that creates predictions by combining the predictions from multiple models. If there is more than one model involved in making a prediction, the technique is using ensembling!

    Ensembling approaches can often improve the performance of a single model. Ensembling can help reduce:

    Variance by averaging multiple models

    Bias by iteratively improving on errors

    Overfitting because using multiple models can increase robustness to spurious relationships
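    The averaging idea at the heart of ensembling can be sketched in a couple of lines. This is a toy illustration with invented numbers, not code from the article:

```python
import numpy as np

# predictions from two hypothetical models for the same three observations
preds_model_a = np.array([10.0, 12.0, 14.0])
preds_model_b = np.array([12.0, 10.0, 18.0])

# the simplest possible ensemble: average the models' predictions element-wise
ensemble_preds = (preds_model_a + preds_model_b) / 2
print(ensemble_preds)  # [11. 11. 16.]
```

    Even in this tiny example, an error unique to one model gets diluted by the other model's prediction – the same mechanism that drives the variance reduction discussed below.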

    Bagging and boosting are both ensemble methods that can perform much better than their single-model counterparts. Let’s get into the details of these now!

    Bagging: Adding stability to ML models

    Bagging is a specific ensembling technique that is used to reduce the variance of a predictive model. Here, I’m talking about variance in the machine learning sense – i.e., how much a model varies with changes to the training dataset – not variance in the statistical sense, which measures the spread of a distribution. Because bagging helps reduce an ML model’s variance, it will often improve models that are high variance but won’t do much good for models that are low variance.

    Now that we understand when bagging helps, let’s get into the details of the inner workings to understand how it helps! The bagging algorithm is iterative in nature – it builds multiple models by repeating the following three steps:

    Bootstrap a dataset from the original training data

    Train a model on the bootstrapped dataset

    Save the trained model

    The collection of models created in this process is called an ensemble. When it is time to make a prediction, each model in the ensemble makes its own prediction – the final bagged prediction is the average (for regression problems) or majority vote (for classification problems) of all of the ensemble’s predictions.
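    The bootstrap step above samples rows with replacement from the training data. The `bootstrap` helper itself isn't shown in this excerpt, but a minimal sketch (assuming the data lives in a pandas DataFrame) could look like:

```python
import pandas as pd

def bootstrap(df):
    # sample rows with replacement, keeping the dataset size the same
    return df.sample(n=len(df), replace=True)
```

    Because we sample with replacement, each bootstrapped dataset duplicates some rows and leaves out roughly a third of the others – which is exactly why each tree in the ensemble sees slightly different data.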

    Now that we understand how bagging works, let’s take a few minutes to build an intuition for why it works. We’ll borrow a familiar idea from traditional statistics: sampling to estimate a population mean.

    In statistics, each sample drawn from a distribution is a random variable. Small sample sizes tend to have high variance and may provide poor estimates of the true mean. But as we collect more samples, the average of those samples becomes a much better approximation of the population mean.

    Similarly, we can think of each of our individual decision trees as a random variable — after all, each tree is trained on a different random sample of the data! By averaging predictions from many trees, bagging reduces variance and produces an ensemble model that better captures the true relationships in the data.
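    We can check this sampling intuition numerically. The sketch below is illustrative (not from the article): it compares the spread of single small-sample means against averages of many such means:

```python
import numpy as np

rng = np.random.default_rng(42)
population = rng.normal(loc=50, scale=10, size=100_000)

def sample_mean(n=10):
    # mean of one small random sample - a high-variance estimate of the true mean
    return population[rng.integers(0, population.size, n)].mean()

# 1000 single-sample means vs. 1000 averages of 25 sample means each
single = [sample_mean() for _ in range(1000)]
averaged = [np.mean([sample_mean() for _ in range(25)]) for _ in range(1000)]

# averaging many noisy estimates shrinks the spread dramatically
print(np.std(single), np.std(averaged))
```

    The averaged estimates cluster far more tightly around the population mean – the same effect bagging produces by averaging many noisy trees.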

    Bagging Example

    We will be using the load_diabetes dataset from the scikit-learn Python package to illustrate a simple bagging example. The dataset has 10 input variables – Age, Sex, BMI, Blood Pressure and 6 blood serum levels – and a single output variable that is a measurement of disease progression. The code below pulls in our data and does some very simple cleaning. With our dataset established, let’s start modeling!

    # pull in and format data
    import pandas as pd
    from sklearn.datasets import load_diabetes

    diabetes = load_diabetes()
    df = pd.DataFrame(diabetes.data, columns=diabetes.feature_names)
    df.loc[:, 'target'] = diabetes.target
    df = df.dropna()

    For our example, we will use basic decision trees as our base models for bagging. Let’s first verify that our decision trees are indeed high variance. We will do this by training three decision trees on different bootstrapped datasets and observing the variance of the predictions for a test dataset. The graph below shows the predictions of three different decision trees on the same test dataset. Each dotted vertical line is an individual observation from the test dataset. The three dots on each line are the predictions from the three different decision trees.

    Variance of decision trees on test data points – image by author

    In the chart above, we see that individual trees can give very different predictions when trained on bootstrapped datasets. This is the variance we have been talking about!

    Now that we see that our trees aren’t very robust to training samples – let’s average the predictions to see how bagging can help! The chart below shows the average of the three trees. The diagonal line represents perfect predictions. As you can see, with bagging, our points are tighter and more centered around the diagonal.

    image by author

    We’ve already seen significant improvement in our model with the average of just three trees. Let’s beef up our bagging algorithm with more trees!

    Here is the code to bag as many trees as we want:

    def train_bagging_trees(df, target_col, pred_cols, n_trees):

    '''
    Creates a decision tree bagging model by training multiple
    decision trees on bootstrapped data.

    inputs
    df: training data with both target and input columns
    target_col: name of target column
    pred_cols: list of predictor column names
    n_trees: number of trees to be trained in the ensemble

    output:
    train_trees: list of trained trees

    '''

        train_trees = []
        for i in range(n_trees):
            # bootstrap training data
            temp_boot = bootstrap(df)
            # train tree
            temp_tree = plain_vanilla_tree(temp_boot, target_col, pred_cols)
            # save trained tree in list
            train_trees.append(temp_tree)
        return train_trees

    def bagging_trees_pred(df, train_trees, target_col, pred_cols):

    '''
    Takes a list of bagged trees and creates predictions by averaging
    the predictions of each individual tree.

    inputs
    df: training data with both target and input columns
    train_trees: ensemble model - which is a list of trained decision trees
    target_col: name of target column
    pred_cols: list of predictor column names

    output:
    avg_preds: list of predictions from the ensembled trees

    '''

        x = df[pred_cols]
        y = df[target_col]
        preds = []
        # make predictions on data with each decision tree
        for tree in train_trees:
            temp_pred = tree.predict(x)
            preds.append(temp_pred)
        # get average of the trees' predictions
        sum_preds = [sum(obs) for obs in zip(*preds)]
        avg_preds = [pred / len(train_trees) for pred in sum_preds]
        return avg_preds

    The functions above are very simple: the first trains the bagging ensemble model; the second takes the ensemble (the list of trained trees) and makes predictions given a dataset.

    With our code established, let’s run multiple ensemble models and see how our out-of-bag predictions change as we increase the number of trees.

    Out-of-bag predictions vs. actuals colored by number of bagged trees – image by author

    Admittedly, this chart looks a little crazy. Don’t get too bogged down with all of the individual data points; the dashed lines tell the main story! Here we have 1 basic decision tree model and 3 bagged decision tree models – with 3, 50 and 150 trees. The color-coded dotted lines mark the upper and lower ranges for each model’s residuals. There are two main takeaways here: (1) as we add more trees, the range of the residuals shrinks, and (2) there are diminishing returns to adding more trees – when we go from 1 to 3 trees, we see the range shrink a lot; when we go from 50 to 150 trees, the range tightens just a little.

    Now that we’ve successfully gone through a full bagging example, we are about ready to move onto boosting! Let’s do a quick overview of what we covered in this section:

    Bagging reduces variance of ML models by averaging the predictions of multiple individual models

    Bagging is most helpful with high-variance models

    The more models we bag, the lower the variance of the ensemble – but there are diminishing returns to the variance reduction benefit

    Okay, let’s move on to boosting!

    Boosting: Reducing bias in weak learners

    With bagging, we create multiple independent models – the independence of the models helps average out the noise of individual models. Boosting is also an ensembling technique; similar to bagging, we will be training multiple models…. But very different from bagging, the models we train will be dependent. Boosting is a modeling technique that trains an initial model and then sequentially trains additional models to improve the predictions of prior models. The primary target of boosting is to reduce bias – though it can also help reduce variance.

    We’ve established that boosting iteratively improves predictions – let’s go deeper into how. Boosting algorithms can iteratively improve model predictions in two ways:

    Directly predicting the residuals of the last model and adding them to the prior predictions – think of it as residual corrections

    Adding more weight to the observations that the prior model predicted poorly

    Because boosting’s main goal is to reduce bias, it works well with base models that typically have more bias. For our examples, we are going to use shallow decision trees as our base model – we will only cover the residual prediction approach in this article for brevity. Let’s jump into the boosting example!

    Predicting prior residuals

    The residuals prediction approach starts off with an initial model and we calculate the residuals of that initial prediction. The second model in the ensemble predicts the residuals of the first model. With our residual predictions in hand, we add the residual predictions to our initial prediction and recalculate the updated residuals… we continue this process until we have created the number of base models we specified. This process is pretty simple, but is a little hard to explain with just words – the flowchart below shows a simple, 4-model boosting algorithm.

    Flowchart of simple, 4 model boosting algorithm – image by author
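    To make the flowchart concrete, here is a toy walk-through of two residual-correction steps for a single observation. The numbers are invented for illustration, not taken from the article:

```python
actual = 100.0

# model 1: initial prediction
pred = 70.0
resid = actual - pred            # 30.0 left to explain

# model 2 predicts the residual (imperfectly)
resid_pred = 20.0
pred += resid_pred               # prediction is now 90.0
resid = actual - pred            # 10.0 left to explain

# model 3 predicts the new residual
resid_pred = 8.0
pred += resid_pred               # prediction is now 98.0
print(pred)  # 98.0
```

    Each step chips away at whatever error the previous models left behind – that is the whole trick of residual boosting.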

    When boosting, we need to set three main parameters: (1) the number of trees, (2) the tree depth and (3) the learning rate. I’ll spend a little time discussing these inputs now.

    Number of Trees

    For boosting, the number of trees means the same thing as in bagging – i.e., the total number of trees that will be trained for the ensemble. But, unlike bagging, we should not err on the side of more trees! The chart below shows the test RMSE against the number of trees for the diabetes dataset.

    Unlike with bagging, too many trees in boosting leads to overfitting! – image by author

    This shows that the test RMSE drops quickly with the number of trees up until about 200 trees, then it starts to creep back up. It looks like a classic ‘overfitting’ chart – we reach a point where more trees becomes worse for the model. This is a key difference between bagging and boosting – with bagging, more trees eventually stop helping, with boosting more trees eventually start hurting!

    With bagging, more trees eventually stops helping, with boosting more trees eventually starts hurting!

    We now know that too many trees are bad, and too few trees are bad as well. We will use hyperparameter tuning to select the number of trees. Note – hyperparameter tuning is a huge subject and way outside of the scope of this article. I’ll demonstrate a simple grid search with a train and test dataset for our example a little later.

    Tree Depth

    This is the maximum depth for each tree in the ensemble. With bagging, trees are often allowed to grow as deep as they want because we are looking for low-bias, high-variance models. With boosting, however, we use sequential models to address the bias in the base learners – so we aren’t as concerned about generating low-bias trees. How do we decide the maximum depth? The same technique that we’ll use with the number of trees: hyperparameter tuning.

    Learning Rate

    The number of trees and the tree depth are familiar parameters from bagging – but this ‘learning rate’ character is a new face! Let’s take a moment to get familiar. The learning rate is a number between 0 and 1 that is multiplied by the current model’s residual predictions before they are added to the overall predictions.

    Here’s a simple example of the prediction calculations with a learning rate of 0.5. Once we understand the mechanics of how the learning rate works, we will discuss why the learning rate is important.

    The learning rate discounts the residual prediction before updating the actual target prediction – image by author

    So, why would we want to ‘discount’ our residual predictions – wouldn’t that make our predictions worse? Well, yes and no. For a single iteration, it will likely make our predictions worse – but we are doing multiple iterations, and across those iterations the learning rate keeps the model from overreacting to any single tree’s predictions. Ultimately, the learning rate helps mitigate overfitting in our boosting model by lowering the influence of any single tree in the ensemble. You can think of it as slowly turning the steering wheel to correct your driving rather than jerking it. In practice, the number of trees and the learning rate have an opposite relationship, i.e., as the learning rate goes down, the number of trees goes up. This is intuitive, because if we only allow a small amount of each tree’s residual prediction to be added to the overall prediction, we are going to need a lot more trees before our overall prediction will start looking good.

    Ultimately, the learning rate helps mitigate overfitting in our boosting model by lowering the influence of any single tree in the ensemble.
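    The learning-rate arithmetic boils down to a one-line update. A tiny sketch with a 0.5 learning rate and invented numbers (not the figures from the image above):

```python
learning_rate = 0.5
pred = 70.0                       # current overall prediction
resid_pred = 20.0                 # the next tree's residual prediction

# only half of the residual correction is applied to the running prediction
pred = pred + learning_rate * resid_pred
print(pred)  # 80.0 rather than the 90.0 we'd get with no discount
```

    Each tree nudges the prediction part of the way toward the target instead of jumping all the way, which is what keeps any one tree from dominating the ensemble.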

    Alright, now that we’ve covered the main inputs in boosting, let’s get into the Python coding! We need a couple of functions to create our boosting algorithm:

    Base decision tree function – a simple function to create and train a single decision tree. We will use the same function from the last section called ‘plain_vanilla_tree.’

    Boosting training function – this function sequentially trains and updates residuals for as many decision trees as the user specifies. In our code, this function is called ‘boost_resid_correction.’

    Boosting prediction function – this function takes a series of boosted models and makes final ensemble predictions. We call this function ‘boost_resid_correction_predict.’

    Here are the functions written in Python:

    # same base tree function as in prior section
    from sklearn.tree import DecisionTreeRegressor

    def plain_vanilla_tree(df_train, target_col, pred_cols, max_depth=3, weights=None):

        X_train = df_train[pred_cols]
        y_train = df_train[target_col]
        tree = DecisionTreeRegressor(max_depth=max_depth)
        if weights:
            tree.fit(X_train, y_train, sample_weight=weights)
        else:
            tree.fit(X_train, y_train)
        return tree

    # residual predictions
    def boost_resid_correction(df_train, target_col, pred_cols, num_models,
                               learning_rate=1, max_depth=3):
    '''
    Creates boosted decision tree ensemble model.
    Inputs:
    df_train: contains training data
    target_col: name of target column
    pred_cols: predictor column names
    num_models: number of models to use in boosting
    learning_rate: discount given to residual predictions,
    takes values between 0 and 1
    max_depth: max depth of each tree model

    Outputs:
    boosting_model: contains everything needed to use model
    to make predictions - includes list of all
    trees in the ensemble
    '''

        # create initial predictions
        model1 = plain_vanilla_tree(df_train, target_col, pred_cols,
                                    max_depth=max_depth)
        initial_preds = model1.predict(df_train[pred_cols])
        df_train['resids'] = df_train[target_col] - initial_preds

        # create multiple models, each predicting the updated residuals
        models = []
        for i in range(num_models):
            temp_model = plain_vanilla_tree(df_train, 'resids', pred_cols,
                                            max_depth=max_depth)
            models.append(temp_model)
            temp_pred_resids = temp_model.predict(df_train[pred_cols])
            df_train['resids'] = df_train['resids'] - learning_rate*temp_pred_resids

        boosting_model = {'initial_model' : model1,
                          'models' : models,
                          'learning_rate' : learning_rate,
                          'pred_cols' : pred_cols}

        return boosting_model

    # This function takes the residual boosted model and scores data
    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.metrics import mean_squared_error

    def boost_resid_correction_predict(df, boosting_model, chart=False):

    '''
    Creates predictions on a dataset given a boosted model.

    Inputs:
    df: data to make predictions
    boosting_model: dictionary containing all pertinent
    boosted model data
    chart: indicates if performance chart should
    be created
    Outputs:
    pred: predictions from boosted model
    rmse: RMSE of predictions
    '''

        # get initial predictions
        initial_model = boosting_model['initial_model']
        pred_cols = boosting_model['pred_cols']
        pred = initial_model.predict(df[pred_cols])

        # calculate residual predictions from each model and add
        models = boosting_model['models']
        learning_rate = boosting_model['learning_rate']
        for model in models:
            temp_resid_preds = model.predict(df[pred_cols])
            pred += learning_rate*temp_resid_preds

        if chart:
            plt.scatter(df['target'], pred)
            plt.show()
        rmse = np.sqrt(mean_squared_error(df['target'], pred))

        return pred, rmse

    Sweet, let’s make a model on the same diabetes dataset that we used in the bagging section. We’ll do a quick grid search to tune our three parameters and then we’ll train the final model using the boost_resid_correction function.

    # tune parameters with grid search
    # note: the exact grid values were lost in extraction - these are
    # representative grids that include the winning combination below
    n_trees = [5, 10, 30, 50, 100, 125, 150, 200, 250, 300]
    learning_rates = [0.001, 0.01, 0.1, 0.25, 0.5, 0.75, 0.95, 1]
    max_depths = list(range(1, 16))

    # Create a dictionary to hold test RMSE for each 'square' in grid
    # (df_train / df_test assumed from an earlier train/test split)
    perf_dict = {}
    for tree in n_trees:
        for learning_rate in learning_rates:
            for max_depth in max_depths:
                temp_boosted_model = boost_resid_correction(df_train,
                                                            'target',
                                                            pred_cols,
                                                            tree,
                                                            learning_rate=learning_rate,
                                                            max_depth=max_depth)
                temp_boosted_model['target_col'] = 'target'
                preds, rmse = boost_resid_correction_predict(df_test, temp_boosted_model)
                dict_key = '_'.join(str(x) for x in [tree, learning_rate, max_depth])
                perf_dict[dict_key] = rmse

    min_key = min(perf_dict, key=perf_dict.get)
    print(min_key)

    And our winner is – 50 trees, a learning rate of 0.1 and a max depth of 1! Let’s take a look and see how our predictions did.

    Tuned boosting actuals vs. residuals – image by author

    While our boosting ensemble model seems to capture the trend reasonably well, we can see off the bat that it isn’t predicting as well as the bagging model. We could probably spend more time tuning – but it could also be the case that the bagging approach fits this specific data better. With that said, we’ve now earned an understanding of bagging and boosting – let’s compare them in the next section!

    Bagging vs. Boosting – understanding the differences

    We’ve covered bagging and boosting separately; the table below brings together all the information we’ve covered to concisely compare the approaches:

    image by author

    Note: In this article, we wrote our own bagging and boosting code for educational purposes. In practice you will just use the excellent code that is available in Python packages or other software. Also, people rarely use ‘pure’ bagging or boosting – it is much more common to use more advanced algorithms that modify the plain vanilla bagging and boosting to improve performance.
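    For reference, here is what the same two ideas look like with scikit-learn's built-in ensembles. This is a sketch; the boosting hyperparameters simply echo the values tuned earlier in the article and are not a general recommendation:

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import BaggingRegressor, GradientBoostingRegressor
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# bagged decision trees (BaggingRegressor uses a decision tree by default)
bag = BaggingRegressor(n_estimators=50, random_state=42).fit(X_train, y_train)

# gradient boosting with shallow trees and a discounted learning rate
boost = GradientBoostingRegressor(n_estimators=50, learning_rate=0.1,
                                  max_depth=1, random_state=42).fit(X_train, y_train)

print(bag.score(X_test, y_test), boost.score(X_test, y_test))
```

    These classes handle the bootstrapping, residual fitting, and prediction averaging internally, which is why hand-rolled versions like ours are best kept for learning.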

    Wrapping it up

    Bagging and boosting are powerful and practical ways to improve weak learners like the humble but flexible decision tree. Both approaches use the power of ensembling to address different problems – bagging for variance, boosting for bias. In practice, pre-packaged code is almost always used to train more advanced machine learning models that use the main ideas of bagging and boosting but expand on them with multiple improvements.

    I hope that this has been helpful and interesting – happy modeling!

    Dataset is originally from the National Institute of Diabetes and Digestive and Kidney Diseases and is distributed under the public domain license for use without restriction.

    The post Strength in Numbers: Ensembling Models with Bagging and Boosting appeared first on Towards Data Science.
Inputs: df: data to make predictions boosting_models: dictionary containing all pertinent boosted model data chart: indicates if performance chart should be created Outputs: pred: predictions from boosted model rmse: RMSE of predictions ''' # get initial predictions initial_model = boosting_modelspred_cols = boosting_modelspred = initial_model.predict# calculate residual predictions from each model and add models = boosting_modelslearning_rate = boosting_modelsfor model in models: temp_resid_preds = model.predictpred += learning_rate*temp_resid_preds if chart: plt.scatterplt.showrmse = np.sqrt) return pred, rmse Sweet, let’s make a model on the same diabetes dataset that we used in the bagging section. We’ll do a quick grid searchto tune our three parameters and then we’ll train the final model using the boost_resid_correction function. # tune parameters with grid search n_trees =learning_rates =max_depths = my_list = list) # Create a dictionary to hold test RMSE for each 'square' in grid perf_dict = {} for tree in n_trees: for learning_rate in learning_rates: for max_depth in max_depths: temp_boosted_model = boost_resid_correctiontemp_boosted_model= 'target' preds, rmse = boost_resid_correction_predictdict_key = '_'.joinfor x in) perf_dict= rmse min_key = minprintAnd our winner is — 50 trees, a learning rate of 0.1 and a max depth of 1! Let’s take a look and see how our predictions did. Tuned boosting actuals vs. residuals – image by author While our boosting ensemble model seems to capture the trend reasonably well, we can see off the bat that it isn’t predicting as well as the bagging model. We could probably spend more time tuning – but it could also be the case that the bagging approach fits this specific data better. With that said, we’ve now earned an understanding of bagging and boosting – let’s compare them in the next section! Bagging vs. 
Boosting – understanding the differences We’ve covered bagging and boosting separately, the table below brings all the information we’ve covered to concisely compare the approaches: image by author Note: In this article, we wrote our own bagging and boosting code for educational purposes. In practice you will just use the excellent code that is available in Python packages or other software. Also, people rarely use ‘pure’ bagging or boosting – it is much more common to use more advanced algorithms that modify the plain vanilla bagging and boosting to improve performance. Wrapping it up Bagging and boosting are powerful and practical ways to improve weak learners like the humble but flexible decision tree. Both approaches use the power of ensembling to address different problems – bagging for variance, boosting for bias. In practice, pre-packaged code is almost always used to train more advanced machine learning models that use the main ideas of bagging and boosting but, expand on them with multiple improvements. I hope that this has been helpful and interesting – happy modeling! Dataset is originally from the National Institute of Diabetes and Digestive and Kidney Diseases and is distributed under the public domain license for use without restriction. The post Strength in Numbers: Ensembling Models with Bagging and Boosting appeared first on Towards Data Science. #strength #numbers #ensembling #models #with
  • Strength in Numbers: Ensembling Models with Bagging and Boosting
    towardsdatascience.com
    Bagging and boosting are two powerful ensemble techniques in machine learning – they are must-knows for data scientists! After reading this article, you are going to have a solid understanding of how bagging and boosting work and when to use them. We’ll cover the following topics, relying heavily on examples to give hands-on illustration of the key concepts: How Ensembling helps create powerful models Bagging: Adding stability to ML models Boosting: Reducing bias in weak learners Bagging vs. Boosting – when to use each and why Creating powerful models with ensembling In Machine Learning, ensembling is a broad term that refers to any technique that creates predictions by combining the predictions from multiple models. If there is more than one model involved in making a prediction, the technique is using ensembling! Ensembling approaches can often improve the performance of a single model. Ensembling can help reduce: Variance by averaging multiple models Bias by iteratively improving on errors Overfitting because using multiple models can increase robustness to spurious relationships Bagging and boosting are both ensemble methods that can perform much better than their single-model counterparts. Let’s get into the details of these now! Bagging: Adding stability to ML models Bagging is a specific ensembling technique that is used to reduce the variance of a predictive model. Here, I’m talking about variance in the machine learning sense – i.e., how much a model varies with changes to the training dataset – not variance in the statistical sense which measures the spread of a distribution. Because bagging helps reduce an ML model’s variance, it will often improve models that are high variance (e.g., decision trees and KNN) but won’t do much good for models that are low variance (e.g., linear regression). Now that we understand when bagging helps (high variance models), let’s get into the details of the inner workings to understand how it helps! 
The bagging algorithm is iterative in nature – it builds multiple models by repeating the following three steps: (1) bootstrap a dataset from the original training data, (2) train a model on the bootstrapped dataset and (3) save the trained model. The collection of models created in this process is called an ensemble. When it is time to make a prediction, each model in the ensemble makes its own prediction – the final bagged prediction is the average (for regression) or majority vote (for classification) of all of the ensemble’s predictions. Now that we understand how bagging works, let’s take a few minutes to build an intuition for why it works. We’ll borrow a familiar idea from traditional statistics: sampling to estimate a population mean. In statistics, each sample drawn from a distribution is a random variable. Small sample sizes tend to have high variance and may provide poor estimates of the true mean. But as we collect more samples, the average of those samples becomes a much better approximation of the population mean. Similarly, we can think of each of our individual decision trees as a random variable – after all, each tree is trained on a different random sample of the data! By averaging predictions from many trees, bagging reduces variance and produces an ensemble model that better captures the true relationships in the data. Bagging Example We will be using the load_diabetes dataset from the scikit-learn Python package to illustrate a simple bagging example. The dataset has 10 input variables – Age, Sex, BMI, Blood Pressure and 6 blood serum levels (S1-S6) – and a single output variable that is a measurement of disease progression. The code below pulls in our data and does some very simple cleaning. With our dataset established, let’s start modeling!
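Before the modeling code, the sampling intuition above can be demonstrated numerically. In this standalone sketch (not part of the article's code, toy numbers of my own), averaging many small-sample estimates of a population mean is far more stable than any single small sample – the same mechanism that makes bagged predictions more stable than a single tree's:

```python
import numpy as np

rng = np.random.default_rng(42)
# a 'population' with true mean 100
population = rng.normal(loc=100, scale=25, size=100_000)

# estimate the mean with a single small sample, many times over
single_estimates = [rng.choice(population, size=10).mean() for _ in range(200)]

# estimate the mean by averaging 50 small-sample estimates, many times over
averaged_estimates = [
    np.mean([rng.choice(population, size=10).mean() for _ in range(50)])
    for _ in range(200)
]

# the averaged estimator is far less variable - same idea as bagging
print(round(np.std(single_estimates), 2))
print(round(np.std(averaged_estimates), 2))
```

The spread of the averaged estimates shrinks by roughly the square root of the number of samples being averaged – bagged trees behave analogously, up to the correlation between trees.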
# pull in and format data
import pandas as pd
from sklearn.datasets import load_diabetes

diabetes = load_diabetes(as_frame=True)
df = pd.DataFrame(diabetes.data, columns=diabetes.feature_names)
df.loc[:, 'target'] = diabetes.target
df = df.dropna()

For our example, we will use basic decision trees as our base models for bagging. Let’s first verify that our decision trees are indeed high variance. We will do this by training three decision trees on different bootstrapped datasets and observing the variance of the predictions for a test dataset. The graph below shows the predictions of three different decision trees on the same test dataset. Each dotted vertical line is an individual observation from the test dataset. The three dots on each line are the predictions from the three different decision trees. Variance of decision trees on test data points – image by author In the chart above, we see that individual trees can give very different predictions (the spread of the three dots on each vertical line) when trained on bootstrapped datasets. This is the variance we have been talking about! Now that we see that our trees aren’t very robust to training samples, let’s average the predictions to see how bagging can help! The chart below shows the average of the three trees. The diagonal line represents perfect predictions. As you can see, with bagging, our points are tighter and more centered around the diagonal. image by author We’ve already seen significant improvement in our model with the average of just three trees. Let’s beef up our bagging algorithm with more trees! Here is the code to bag as many trees as we want:

def train_bagging_trees(df, target_col, pred_cols, n_trees):
    '''
    Creates a decision tree bagging model by training multiple
    decision trees on bootstrapped data.

    inputs
        df (pandas DataFrame) : training data with both target and input columns
        target_col (str)      : name of target column
        pred_cols (list)      : list of predictor column names
        n_trees (int)         : number of trees to be trained in the ensemble

    output:
        train_trees (list) : list of trained trees
    '''
    train_trees = []
    for i in range(n_trees):
        # bootstrap training data
        temp_boot = bootstrap(df)
        # train tree
        temp_tree = plain_vanilla_tree(temp_boot, target_col, pred_cols)
        # save trained tree in list
        train_trees.append(temp_tree)
    return train_trees

def bagging_trees_pred(df, train_trees, target_col, pred_cols):
    '''
    Takes a list of bagged trees and creates predictions by averaging
    the predictions of each individual tree.

    inputs
        df (pandas DataFrame) : training data with both target and input columns
        train_trees (list)    : ensemble model - which is a list of trained decision trees
        target_col (str)      : name of target column
        pred_cols (list)      : list of predictor column names

    output:
        avg_preds (list) : list of predictions from the ensembled trees
    '''
    x = df[pred_cols]
    y = df[target_col]
    preds = []
    # make predictions on data with each decision tree
    for tree in train_trees:
        temp_pred = tree.predict(x)
        preds.append(temp_pred)
    # get average of the trees' predictions
    sum_preds = [sum(x) for x in zip(*preds)]
    avg_preds = [x / len(train_trees) for x in sum_preds]
    return avg_preds

The functions above are very simple: the first trains the bagging ensemble model, the second takes the ensemble (simply a list of trained trees) and makes predictions given a dataset. With our code established, let’s run multiple ensemble models and see how our out-of-bag predictions change as we increase the number of trees. Out-of-bag predictions vs. actuals colored by number of bagged trees – image by author Admittedly, this chart looks a little crazy. Don’t get too bogged down with all of the individual data points, the dashed lines tell the main story!
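Note that `train_bagging_trees` relies on a `bootstrap` helper that isn't shown in this excerpt. A minimal version (my sketch, assuming it simply resamples rows with replacement) could look like:

```python
import pandas as pd

def bootstrap(df):
    '''Returns a bootstrapped dataset: the same number of rows as df,
    sampled from df with replacement.'''
    return df.sample(n=len(df), replace=True).reset_index(drop=True)

# quick check: the bootstrapped frame has the same shape as the original,
# but rows may repeat and some original rows may be missing
example_df = pd.DataFrame({'bmi': [0.1, 0.2, 0.3, 0.4],
                           'target': [90, 120, 150, 180]})
boot_df = bootstrap(example_df)
print(boot_df.shape)  # (4, 2)
```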
Here we have 1 basic decision tree model and 3 bagged decision tree models – with 3, 50 and 150 trees. The color-coded dotted lines mark the upper and lower ranges for each model’s residuals. There are two main takeaways here: (1) as we add more trees, the range of the residuals shrinks and (2) there are diminishing returns to adding more trees – when we go from 1 to 3 trees, we see the range shrink a lot; when we go from 50 to 150 trees, the range tightens just a little. Now that we’ve successfully gone through a full bagging example, we are about ready to move on to boosting! Let’s do a quick overview of what we covered in this section: Bagging reduces the variance of ML models by averaging the predictions of multiple individual models. Bagging is most helpful with high-variance models. The more models we bag, the lower the variance of the ensemble – but there are diminishing returns to the variance reduction benefit. Okay, let’s move on to boosting! Boosting: Reducing bias in weak learners With bagging, we create multiple independent models – the independence of the models helps average out the noise of individual models. Boosting is also an ensembling technique; similar to bagging, we will be training multiple models, but, very different from bagging, the models we train will be dependent. Boosting is a modeling technique that trains an initial model and then sequentially trains additional models to improve the predictions of prior models. The primary target of boosting is to reduce bias – though it can also help reduce variance. We’ve established that boosting iteratively improves predictions – let’s go deeper into how.
Boosting algorithms can iteratively improve model predictions in two ways: (1) directly predicting the residuals of the last model and adding them to the prior predictions – think of it as residual corrections – or (2) adding more weight to the observations that the prior model predicted poorly. Because boosting’s main goal is to reduce bias, it works well with base models that typically have more bias (e.g., shallow decision trees). For our examples, we are going to use shallow decision trees as our base model – we will only cover the residual prediction approach in this article for brevity. Let’s jump into the boosting example! Predicting prior residuals The residuals prediction approach starts off with an initial model (some algorithms provide a constant, others use one iteration of the base model) and we calculate the residuals of that initial prediction. The second model in the ensemble predicts the residuals of the first model. With our residual predictions in-hand, we add the residual predictions to our initial prediction (this gives us residual corrected predictions) and recalculate the updated residuals. We continue this process until we have created the number of base models we specified. This process is pretty simple, but is a little hard to explain with just words – the flowchart below shows a simple, 4-model boosting algorithm. Flowchart of simple, 4-model boosting algorithm – image by author When boosting, we need to set three main parameters: (1) the number of trees, (2) the tree depth and (3) the learning rate. I’ll spend a little time discussing these inputs now. Number of Trees For boosting, the number of trees means the same thing as in bagging – i.e., the total number of trees that will be trained for the ensemble. But, unlike bagging, we should not err on the side of more trees! The chart below shows the test RMSE against the number of trees for the diabetes dataset. Unlike with bagging, too many trees in boosting leads to overfitting!
– image by author This shows that the test RMSE drops quickly with the number of trees up until about 200 trees, then it starts to creep back up. It looks like a classic ‘overfitting’ chart – we reach a point where more trees become worse for the model. This is a key difference between bagging and boosting – with bagging, more trees eventually stop helping, with boosting more trees eventually start hurting! We now know that too many trees are bad, and too few trees are bad as well. We will use hyperparameter tuning to select the number of trees. Note – hyperparameter tuning is a huge subject and way outside of the scope of this article. I’ll demonstrate a simple grid search with a train and test dataset for our example a little later. Tree Depth This is the maximum depth for each tree in the ensemble. With bagging, trees are often allowed to go as deep as they want because we are looking for low bias, high variance models. With boosting, however, we use sequential models to address the bias in the base learners – so we aren’t as concerned about generating low-bias trees. How do we decide the maximum depth? The same technique that we’ll use with the number of trees: hyperparameter tuning. Learning Rate The number of trees and the tree depth are familiar parameters from bagging (although in bagging we often didn’t put a limit on the tree depth) – but this ‘learning rate’ character is a new face! Let’s take a moment to get familiar. The learning rate is a number between 0 and 1 that is multiplied by the current model’s residual predictions before it is added to the overall predictions. Here’s a simple example of the prediction calculations with a learning rate of 0.5. Once we understand the mechanics of how the learning rate works, we will discuss why the learning rate is important.
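To make the mechanics concrete, here is the update step with a learning rate of 0.5 on toy numbers (illustrative values of my own, not from the article):

```python
import numpy as np

learning_rate = 0.5
current_pred = np.array([10.0, 20.0])  # ensemble's predictions so far
resid_pred = np.array([4.0, -2.0])     # current tree's residual predictions

# only half of each residual prediction is added to the running prediction
updated_pred = current_pred + learning_rate * resid_pred
print(updated_pred)  # [12. 19.]
```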
The learning rate discounts the residual prediction before updating the actual target prediction – image by author So, why would we want to ‘discount’ our residual predictions – wouldn’t that make our predictions worse? Well, yes and no. For a single iteration, it will likely make our predictions worse – but, we are doing multiple iterations. Over multiple iterations, the learning rate keeps the model from overreacting to a single tree’s predictions. Ultimately, the learning rate helps mitigate overfitting in our boosting model by lowering the influence of any single tree in the ensemble. You can think of it as slowly turning the steering wheel to correct your driving rather than jerking it. In practice, the number of trees and the learning rate have an inverse relationship, i.e., as the learning rate goes down, the number of trees goes up. This is intuitive, because if we only allow a small amount of each tree’s residual prediction to be added to the overall prediction, we are going to need a lot more trees before our overall prediction will start looking good. Alright, now that we’ve covered the main inputs in boosting, let’s get into the Python coding! We need a couple of functions to create our boosting algorithm: Base decision tree function – a simple function to create and train a single decision tree. We will use the same function from the last section called ‘plain_vanilla_tree.’ Boosting training function – this function sequentially trains and updates residuals for as many decision trees as the user specifies. In our code, this function is called ‘boost_resid_correction.’ Boosting prediction function – this function takes a series of boosted models and makes final ensemble predictions.
We call this function ‘boost_resid_correction_predict.’ Here are the functions written in Python:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

# same base tree function as in prior section
def plain_vanilla_tree(df_train, target_col, pred_cols, max_depth=3, weights=[]):
    X_train = df_train[pred_cols]
    y_train = df_train[target_col]
    tree = DecisionTreeRegressor(max_depth=max_depth, random_state=42)
    if weights:
        tree.fit(X_train, y_train, sample_weight=weights)
    else:
        tree.fit(X_train, y_train)
    return tree

# residual predictions
def boost_resid_correction(df_train, target_col, pred_cols, num_models,
                           learning_rate=1, max_depth=3):
    '''
    Creates boosted decision tree ensemble model.

    Inputs:
        df_train (pd.DataFrame)        : contains training data
        target_col (str)               : name of target column
        pred_cols (list)               : list of predictor column names
        num_models (int)               : number of models to use in boosting
        learning_rate (float, def = 1) : discount given to residual predictions,
                                         takes values between (0, 1]
        max_depth (int, def = 3)       : max depth of each tree model

    Outputs:
        boosting_model (dict) : contains everything needed to use model
                                to make predictions - includes list of all
                                trees in the ensemble
    '''
    # create initial predictions
    model1 = plain_vanilla_tree(df_train, target_col, pred_cols, max_depth=max_depth)
    initial_preds = model1.predict(df_train[pred_cols])
    df_train['resids'] = df_train[target_col] - initial_preds

    # create multiple models, each predicting the updated residuals
    models = []
    for i in range(num_models):
        temp_model = plain_vanilla_tree(df_train, 'resids', pred_cols)
        models.append(temp_model)
        temp_pred_resids = temp_model.predict(df_train[pred_cols])
        df_train['resids'] = df_train['resids'] - (learning_rate*temp_pred_resids)

    boosting_model = {'initial_model' : model1,
                      'models' : models,
                      'learning_rate' : learning_rate,
                      'pred_cols' : pred_cols}
    return boosting_model

# This function takes the residual boosted model and scores data
def boost_resid_correction_predict(df, boosting_models, chart=False):
    '''
    Creates predictions on a dataset given a boosted model.

    Inputs:
        df (pd.DataFrame)         : data to make predictions
        boosting_models (dict)    : dictionary containing all pertinent
                                    boosted model data
        chart (bool, def = False) : indicates if performance chart should
                                    be created

    Outputs:
        pred (np.array) : predictions from boosted model
        rmse (float)    : RMSE of predictions
    '''
    # get initial predictions
    initial_model = boosting_models['initial_model']
    pred_cols = boosting_models['pred_cols']
    pred = initial_model.predict(df[pred_cols])

    # calculate residual predictions from each model and add
    models = boosting_models['models']
    learning_rate = boosting_models['learning_rate']
    for model in models:
        temp_resid_preds = model.predict(df[pred_cols])
        pred += learning_rate*temp_resid_preds

    if chart:
        plt.scatter(df['target'], pred)
        plt.show()

    rmse = np.sqrt(mean_squared_error(df['target'], pred))
    return pred, rmse

Sweet, let’s make a model on the same diabetes dataset that we used in the bagging section. We’ll do a quick grid search (again, not doing anything fancy with the tuning here) to tune our three parameters and then we’ll train the final model using the boost_resid_correction function.

# tune parameters with grid search
n_trees = [5,10,30,50,100,125,150,200,250,300]
learning_rates = [0.001, 0.01, 0.1, 0.25, 0.50, 0.75, 0.95, 1]
max_depths = list(range(1, 16))

# Create a dictionary to hold test RMSE for each 'square' in grid
perf_dict = {}
for tree in n_trees:
    for learning_rate in learning_rates:
        for max_depth in max_depths:
            temp_boosted_model = boost_resid_correction(train_df,
                                                        'target',
                                                        pred_cols,
                                                        tree,
                                                        learning_rate=learning_rate,
                                                        max_depth=max_depth)
            temp_boosted_model['target_col'] = 'target'
            preds, rmse = boost_resid_correction_predict(test_df, temp_boosted_model)
            dict_key = '_'.join(str(x) for x in [tree, learning_rate, max_depth])
            perf_dict[dict_key] = rmse

min_key = min(perf_dict, key=perf_dict.get)
print(perf_dict[min_key])

And our winner is — 50 trees, a learning rate of 0.1 and a max depth of 1!
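As an aside, in practice you wouldn't hand-roll this loop; scikit-learn's GradientBoostingRegressor implements the same residual-correction idea, and a packaged grid search over the same three parameters might look like this (a sketch with an illustrative, smaller grid – not the article's code, and not guaranteed to pick the same winner):

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

diabetes = load_diabetes(as_frame=True)
X, y = diabetes.data, diabetes.target

# same three knobs as the hand-rolled version: trees, learning rate, depth
param_grid = {
    'n_estimators': [50, 100, 200],
    'learning_rate': [0.01, 0.1, 0.25],
    'max_depth': [1, 2, 3],
}

search = GridSearchCV(
    GradientBoostingRegressor(random_state=42),
    param_grid,
    scoring='neg_root_mean_squared_error',
    cv=5,
)
search.fit(X, y)
print(search.best_params_)
```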
Let’s take a look and see how our predictions did. Tuned boosting actuals vs. residuals – image by author While our boosting ensemble model seems to capture the trend reasonably well, we can see off the bat that it isn’t predicting as well as the bagging model. We could probably spend more time tuning – but it could also be the case that the bagging approach fits this specific data better. With that said, we’ve now earned an understanding of bagging and boosting – let’s compare them in the next section! Bagging vs. Boosting – understanding the differences We’ve covered bagging and boosting separately; the table below brings together all the information we’ve covered to concisely compare the approaches: image by author Note: In this article, we wrote our own bagging and boosting code for educational purposes. In practice you will just use the excellent code that is available in Python packages or other software. Also, people rarely use ‘pure’ bagging or boosting – it is much more common to use more advanced algorithms that modify plain vanilla bagging and boosting to improve performance. Wrapping it up Bagging and boosting are powerful and practical ways to improve weak learners like the humble but flexible decision tree. Both approaches use the power of ensembling to address different problems – bagging for variance, boosting for bias. In practice, pre-packaged code is almost always used to train more advanced machine learning models that use the main ideas of bagging and boosting but expand on them with multiple improvements. I hope that this has been helpful and interesting – happy modeling! Dataset is originally from the National Institute of Diabetes and Digestive and Kidney Diseases and is distributed under the public domain license for use without restriction. The post Strength in Numbers: Ensembling Models with Bagging and Boosting appeared first on Towards Data Science.
  • iPhone Shipments Crash 50% In China As Local Brands Dominate

    Apple's smartphone shipments in China plunged nearly 50% year-over-year in March 2025, as domestic brands like Huawei and Vivo surged ahead -- now controlling 92% of the market. MacRumors reports: The steep decline saw shipments fall to just 1.89 million units, down from 3.75 million during the same period last year. That shrinks Apple's share of the Chinese market to approximately 8%, while domestic brands now control 92% of smartphone shipments. For the entire first quarter, non-Chinese brand shipments declined over 25%, while total smartphone shipments in China actually increased by 3.3%.

    Apple's struggles come as domestic competitors have gained ground. Counterpoint Research reports Huawei now leads with a 19.4% share, followed by Vivo (17%), Xiaomi (16.6%), and Oppo (14.6%). Apple has slipped to fifth place with 14.1%. Several factors are driving Apple's declining fortunes. The company faces competition from rejuvenated local brands like Huawei, which has rebounded with proprietary chips and its HarmonyOS Next software. Chinese government policies appear to be playing a role too. Under government subsidies, consumers of electronics get a 15% refund on products that are priced under 6,000 yuan ($820). Apple's standard iPhone 16 starts at 5,999 yuan.

    Read more of this story at Slashdot.
    #iphone #shipments #crash #china #local
    iPhone Shipments Crash 50% In China As Local Brands Dominate
    Apple's smartphone shipments in China plunged nearly 50% year-over-year in March 2025, as domestic brands like Huawei and Vivo surged ahead -- now controlling 92% of the market. MacRumors reports: The steep decline saw shipments fall to just 1.89 million units, down from 3.75 million during the same period last year. That shrinks Apple's share of the Chinese market to approximately 8%, while domestic brands now control 92% of smartphone shipments. For the entire first quarter, non-Chinese brand shipments declined over 25%, while total smartphone shipments in China actually increased by 3.3%. Apple's struggles come as domestic competitors have gained ground. Counterpoint Research reports Huawei now leads with a 19.4% share, followed by Vivo, Xiaomi, and Oppo. Apple has slipped to fifth place with 14.1%. Several factors are driving Apple's declining fortunes. The company faces competition from rejuvenated local brands like Huawei, which has rebounded with proprietary chips and its HarmonyOS Next software. Chinese government policies appear to be playing a role too. Under government subsidies, consumers of electronics get a 15% refund of products that are priced under 6,000 yuan. Apple's standard iPhone 16 starts at 5,999 yuan. of this story at Slashdot. #iphone #shipments #crash #china #local
    iPhone Shipments Crash 50% In China As Local Brands Dominate
    mobile.slashdot.org
    Apple's smartphone shipments in China plunged nearly 50% year-over-year in March 2025, as domestic brands like Huawei and Vivo surged ahead -- now controlling 92% of the market. MacRumors reports: The steep decline saw shipments fall to just 1.89 million units, down from 3.75 million during the same period last year. That shrinks Apple's share of the Chinese market to approximately 8%, while domestic brands now control 92% of smartphone shipments. For the entire first quarter, non-Chinese brand shipments declined over 25%, while total smartphone shipments in China actually increased by 3.3%. Apple's struggles come as domestic competitors have gained ground. Counterpoint Research reports Huawei now leads with a 19.4% share, followed by Vivo (17%), Xiaomi (16.6%), and Oppo (14.6%). Apple has slipped to fifth place with 14.1%. Several factors are driving Apple's declining fortunes. The company faces competition from rejuvenated local brands like Huawei, which has rebounded with proprietary chips and its HarmonyOS Next software. Chinese government policies appear to be playing a role too. Under government subsidies, consumers of electronics get a 15% refund of products that are priced under 6,000 yuan ($820). Apple's standard iPhone 16 starts at 5,999 yuan. Read more of this story at Slashdot.
  • The iPhone 18’s edgeless curved display seems like a certainty now

    Macworld

    2027 is the 20th anniversary of the iPhone, and it appears that Apple has some big plans in store. The Information and Bloomberg’s Mark Gurman reported that Apple is going to dramatically redesign the iPhone with a mostly glass, curved iPhone (which will follow the release of Apple’s first folding phone). On Wednesday, a report by Electronic Times shed some light on the display that will be used.

    Apple plans to use “four-sided bending display technologies” that would allow the display to wrap around the edges of the iPhone so that the phone would have no bezel. Apple also plans to implement an OLED display driver chip based on a 16nm fin field-effect transistor (FinFET) process, a change from the 28nm planar process currently used. FinFET will improve power efficiency, but it is not clear whether this will translate into longer battery life or whether the power demands of a four-sided bending display and AI processing will offset any gains.

    One way Apple may address the power demands of the new phone is by adopting new battery technology, according to the Electronic Times report. Current lithium-ion batteries use a graphite anode, but the 2027 iPhone could use a battery with a silicon anode instead. Silicon anodes store more lithium ions, resulting in higher energy density and greater battery capacity.

    Previous reports have stated that Apple is making an effort to design this iPhone without any screen cutouts, or at least to minimize them. The Information and analyst Ross Young have previously reported that Apple plans to place most of the Face ID sensors underneath the display. The Information has also reported that the phone would have a small cutout for the front-facing camera, smaller than the pill-shaped one in current iPhones. It’s not clear whether Apple would still offer the Dynamic Island feature if the camera cutout shrinks.
CGShares https://cgshares.com