• 50 Preppy Fonts with Rich & Fancy Vibes

    Preppy fonts capture that quintessential East Coast elite vibe – think Nantucket summers, yacht clubs, and monogrammed everything. These typefaces embody the perfect balance of tradition and refinement that makes preppy design so timeless and aspirational.
    But here’s the thing: not all fonts can pull off that coveted preppy aesthetic. The best preppy fonts have a certain je ne sais quoi – they’re classic without being stuffy, elegant without being pretentious, and refined without being inaccessible.
    In this comprehensive guide, we’ll explore the most gorgeous preppy fonts that’ll have your designs looking like they belong in the pages of Town & Country magazine. So grab your pearls and let’s dive into this typographic treasure trove!
    The Preppiest Fonts That Define 2025
    Let’s start with the crème de la crème – the fonts that truly embody that preppy spirit. I’ve curated this list based on their ability to channel that classic New England charm while remaining versatile enough for modern design needs.

    Gatsby Prelude

    Gatsby Prelude is an elegant and modern Art Deco font duo. It combines sans-serif characters with decorative elements, perfect for creating sophisticated designs with a touch of vintage glamour.

    Burtuqol

    Burtuqol is a vintage slab serif font that exudes a retro charm. Its bold, chunky serifs and aged appearance make it ideal for projects requiring a nostalgic or timeworn aesthetic.

    Gafler

    Gafler is a classy vintage serif font with decorative elements. It combines elegance with a touch of old-world charm, making it perfect for high-end branding and classic design projects.

    Kagnue

    Kagnue is a modern and classy serif font. It offers a fresh take on traditional serif typefaces, blending contemporary design with timeless elegance for versatile use in various design contexts.

    The Blendinroom

    The Blendinroom is a retro serif typeface featuring luxurious ligatures. Its vintage-inspired design and intricate details make it ideal for creating sophisticated, old-world aesthetics in design projects.

    MODER BULES

    MODER BULES is a playful sans-serif font with a fun, childlike appeal. Its quirky design makes it perfect for kids-oriented projects or Halloween-themed designs, adding a touch of whimsy to typography.

    Nickey Vintage

    Nickey Vintage is a decorative display font with a strong vintage flair. Its bold, eye-catching characters make it ideal for headlines, logos, and designs that require a striking retro aesthetic.

    Ladger

    Ladger is a casual script font that exudes luxury and elegance. Its flowing lines and graceful curves make it perfect for logo designs, high-end branding, and projects requiring a touch of sophistication.

    Hadnich

    Hadnich is a modern script font with a brush-like quality. Its versatile design makes it suitable for various applications, from signage to branding, offering a contemporary take on handwritten typography.

    Belly and Park

    Belly and Park is a condensed beauty classic font family featuring both serif and sans-serif styles. Its vintage-inspired design and narrow characters make it ideal for creating elegant, space-efficient layouts.

    Loubag

    Loubag is a modern retro font family encompassing sans-serif, serif, and decorative styles. Its bold, fashion-forward design makes it perfect for creating eye-catching headlines and trendy branding materials.

    Petter And Sons

    Petter And Sons is a romantic beauty script font with decorative elements. Its elegant, flowing design makes it ideal for wedding invitations, luxury branding, and projects requiring a touch of refined beauty.

    Preteoria

    Preteoria is a modern cursive font with a sleek, contemporary feel. Its smooth curves and clean lines make it versatile for various design applications, from branding to digital media projects.

    Delauney

    Delauney is an Art Deco-inspired sans-serif font that captures the essence of the roaring twenties. Its geometric shapes and sleek lines make it perfect for creating designs with a bold, metropolitan flair.

    Amadi Vintage

    Amadi Vintage is a chic and beautiful serif font with a timeless appeal. Its elegant design and vintage-inspired details make it ideal for creating sophisticated, classic-looking designs and branding materials.

    LEDERSON

    LEDERSON is a vintage-inspired shadow font. Its weathered look and strong character make it perfect for designs requiring an authentic, aged aesthetic.

    Fancyou

    Fancyou is a versatile serif font with alternate characters. Its elegant design and customizable options make it suitable for a wide range of projects, from formal invitations to modern branding materials.

    Catterpie Font

    Catterpie is a handwritten script font that mimics natural handlettering. Its fluid, signature-like style makes it perfect for creating personal, authentic-looking designs and branding materials.

    Jemmy Wonder

    Jemmy Wonder is a Victorian-inspired serif font with a strong vintage character. Its ornate details and old-world charm make it ideal for creating designs with a classic, nostalgic feel.

    Monthey

    Monthey is a bold, elegant vintage display serif font. Its chunky characters and 70s-inspired design make it perfect for creating eye-catching headlines and retro-themed branding materials.

    Madville

    Madville is a classy script font with a versatile design. Its elegant curves and smooth transitions make it suitable for a wide range of projects, from formal invitations to modern branding materials.

    Crowk

    Crowk is a luxury serif font with a timeless, elegant appeal. Its refined design and classic proportions make it ideal for high-end branding, editorial layouts, and sophisticated design projects.

    Peachy Fantasy

    Peachy Fantasy is an Art Nouveau-inspired display font with decorative elements. Its vintage charm and unique character make it perfect for creating eye-catching headlines and artistic design projects.

    Cormier

    Cormier is a decorative sans-serif font with a strong artistic flair. Its unique design and fashion-forward aesthetic make it ideal for creating bold, attention-grabbing headlines and branding materials.

    Syntage

    Syntage is a decorative modern luxury font with both serif and ornamental elements. Its retro-inspired design and luxurious details make it perfect for high-end branding and sophisticated design projects.

    Jeniffer Selfies

    Jeniffer Selfies is a retro-inspired bold font combining sans-serif and script styles. Its playful design and vintage feel make it ideal for creating nostalgic, fun-loving designs and branding materials.

    The Rilman

    The Rilman is a ligature-rich rounded sans-serif font with a 90s-inspired design. Its retro charm and smooth edges make it perfect for creating playful, nostalgic designs and branding materials.

    Milky Croffle

    Milky Croffle is a classic beauty elegant serif font. Its refined design and timeless appeal make it ideal for creating sophisticated layouts, high-end branding, and projects requiring a touch of traditional elegance.
    What Makes a Font Feel Preppy?
    You might be wondering what exactly gives a font that unmistakable preppy vibe. After years of working with typography, I’ve identified several key characteristics that define the preppy aesthetic:
    Classic Serif Structure: Most preppy fonts are serifs, drawing inspiration from traditional typography used in prestigious publications and academic institutions. These serifs aren’t just decorative – they’re a nod to centuries of refined typographic tradition.
    Elegant Proportions: Preppy fonts tend to have well-balanced letterforms with moderate contrast between thick and thin strokes. They’re neither too delicate nor too bold – just perfectly poised, like a well-tailored blazer.
    Timeless Appeal: The best preppy fonts don’t scream “trendy.” Instead, they whisper “timeless.” They’re the typography equivalent of a strand of pearls – always appropriate, never out of style.
    Sophisticated Details: Look for subtle refinements in letterforms – graceful curves, well-crafted terminals, and thoughtful spacing. These details separate truly preppy fonts from their more pedestrian cousins.
    Heritage Inspiration: Many preppy fonts draw inspiration from historical typefaces used by Ivy League universities, prestigious publishing houses, and old-money families. This connection to tradition is what gives them their authentic preppy pedigree.
    Where to Use Preppy Fonts (And Where Not To)
    Preppy fonts aren’t one-size-fits-all solutions, but when used appropriately, they’re absolutely magical. Here’s where they shine brightest:
    Wedding Invitations: Nothing says “elegant affair” quite like a beautifully chosen preppy serif. These fonts are perfect for formal invitations, save-the-dates, and wedding stationery that needs to feel sophisticated and timeless.
    Luxury Branding: Brands targeting affluent audiences or positioning themselves as premium often benefit from preppy typography. Think boutique hotels, high-end fashion, or artisanal goods.
    Editorial Design: Magazines, newsletters, and publications focusing on lifestyle, fashion, or culture can leverage preppy fonts to establish credibility and sophistication.
    Corporate Identity: Professional services, law firms, financial institutions, and consulting companies often choose preppy fonts to convey trustworthiness and establishment credibility.
    Academic Materials: Universities, prep schools, and educational institutions naturally gravitate toward preppy typography that reflects their traditional values and heritage.
    However, preppy fonts might not be the best choice for:
    Tech Startups: The traditional nature of preppy fonts can feel at odds with innovation and disruption. Modern sans serifs usually work better for tech companies.
    Children’s Brands: While elegant, preppy fonts might feel too formal for products targeting young children. Playful, rounded fonts are typically more appropriate.
    Casual Brands: If your brand personality is laid-back and approachable, overly formal preppy fonts might create distance between you and your audience.
    How to Choose the Perfect Preppy Font
    Selecting the right preppy font requires careful consideration of several factors. Here’s my tried-and-true process:
    Consider Your Audience: Are you designing for actual prep school alumni, or are you trying to capture that aspirational preppy aesthetic for a broader audience? Your target demographic should influence how traditional or accessible your font choice is.
    Evaluate the Context: A wedding invitation can handle more ornate details than a business card. Consider where your text will appear and how much personality the context can support.
    Test Readability: Preppy doesn’t mean hard to read. Always test your chosen font at various sizes to ensure it remains legible. Your typography should enhance communication, not hinder it.
    Think About Pairing: Will you be using this font alone or pairing it with others? Consider how your preppy serif will work alongside sans serifs for body text or script fonts for accents.
    Consider Your Medium: Some preppy fonts work beautifully in print but struggle on screens. Others are optimized for digital use but lose their charm in print. Choose accordingly.
    Pairing Preppy Fonts Like a Pro
    The magic of preppy typography often lies in thoughtful font pairing. Here are some winning combinations that never fail:
    Classic Serif + Clean Sans Serif: Pair your preppy serif headline font with a crisp, readable sans serif for body text. This creates hierarchy while maintaining sophistication.
    Traditional Serif + Script Accent: Use a refined script font sparingly for special elements like signatures or decorative text, balanced by a solid preppy serif for main content.
    Serif + Serif Variation: Sometimes pairing two serifs from the same family – perhaps a regular weight for body text and a bold condensed version for headlines – creates beautiful, cohesive designs.
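    For web projects, a minimal sketch of that first combination (with a script accent reserved for special elements) might look like the TypeScript below. The font names, selectors, and the stylesheet-injection approach are all illustrative assumptions, not recommendations from the list above:

```typescript
// A small design-token map for the pairing; assumes the fonts are already
// loaded on the page (e.g. via Google Fonts or self-hosted files).
type FontRole = "heading" | "body" | "accent";

const preppyPairing: Record<FontRole, string> = {
  heading: '"Playfair Display", Georgia, serif',   // classic serif for headlines
  body: '"Helvetica Neue", Arial, sans-serif',     // crisp, readable sans for body text
  accent: '"Great Vibes", cursive',                // script, used sparingly for accents
};

// Apply the pairing by injecting a tiny stylesheet (browser context assumed).
const pairingStyle = document.createElement("style");
pairingStyle.textContent = `
  h1, h2, h3  { font-family: ${preppyPairing.heading}; }
  body, p, li { font-family: ${preppyPairing.body}; line-height: 1.6; }
  .accent     { font-family: ${preppyPairing.accent}; }
`;
document.head.appendChild(pairingStyle);
```

    Swap in whichever typefaces from the list above match your brand; the role-based structure stays the same.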
    Remember, less is often more with preppy design. Stick to two or three fonts maximum, and let the inherent elegance of your chosen typefaces do the heavy lifting.
    The Psychology Behind Preppy Typography
    Understanding why preppy fonts work so well psychologically can help you use them more effectively. These typefaces tap into powerful associations:
    Trust and Reliability: The traditional nature of preppy fonts suggests stability and permanence. When people see these fonts, they subconsciously associate them with established institutions and time-tested values.
    Sophistication and Education: Preppy fonts are reminiscent of academic institutions and intellectual pursuits. They suggest refinement, education, and cultural awareness.
    Exclusivity and Status: Let’s be honest – part of the preppy aesthetic’s appeal is its association with privilege and exclusivity. These fonts can make designs feel more premium and aspirational.
    Quality and Craftsmanship: The careful attention to typographic detail in preppy fonts suggests similar attention to quality in whatever they’re representing.
    Modern Takes on Classic Preppy Style
    While preppy fonts are rooted in tradition, the best designers know how to give them contemporary flair. Here are some ways to modernize preppy typography:
    Unexpected Color Palettes: Pair traditional preppy fonts with modern colors. Think sage green and cream instead of navy and white, or soft blush tones for a fresh take.
    Generous White Space: Give your preppy fonts room to breathe with plenty of white space. This modern approach to layout keeps traditional fonts feeling fresh and uncluttered.
    Mixed Media Integration: Combine preppy typography with photography, illustrations, or graphic elements for a more contemporary feel while maintaining that sophisticated foundation.
    Strategic Contrast: Pair your refined preppy fonts with unexpected elements – maybe a bold geometric shape or modern photography – to create dynamic tension.
    Preppy Font Alternatives for Every Budget
    Not every preppy project has a premium font budget, and that’s okay! Here are some strategies for achieving that coveted preppy look without breaking the bank:
    Google Fonts Gems: Fonts like Playfair Display, Crimson Text, and Libre Baskerville offer sophisticated serif options that can work beautifully for preppy designs.
    Font Pairing Magic: Sometimes combining two free fonts thoughtfully can create a more expensive-looking result than using a single premium font poorly.
    Focus on Execution: A free font used with excellent spacing, hierarchy, and layout will always look better than an expensive font used carelessly.
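    As a purely illustrative example of the budget route, here’s how you might pull two of those free Google Fonts into a web page with TypeScript; the chosen weights are assumptions, and the pairing CSS itself would live elsewhere:

```typescript
// Load free Google Fonts at runtime (browser context assumed).
// The css2 endpoint accepts multiple family parameters in one request.
const families = ["Playfair+Display:wght@400;700", "Libre+Baskerville"];

const fontLink = document.createElement("link");
fontLink.rel = "stylesheet";
fontLink.href = `https://fonts.googleapis.com/css2?${families
  .map((family) => `family=${family}`)
  .join("&")}&display=swap`;
document.head.appendChild(fontLink);
```

    From there, the execution advice above still applies – careful spacing, hierarchy, and layout will do more for the preppy feel than the price tag of the font.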
    Common Preppy Font Mistakes to Avoid
    Even with the perfect preppy font, poor execution can ruin the effect. Here are the most common mistakes I see designers make:
    Overdoing the Decoration: Just because a font has elegant details doesn’t mean you need to add more flourishes. Let the typeface’s inherent sophistication speak for itself.
    Ignoring Hierarchy: Preppy design relies on clear, elegant hierarchy. Don’t make everything the same size or weight – create visual flow through thoughtful typography scaling.
    Poor Spacing: Cramped text kills the elegant feel of preppy fonts. Give your typography generous leading and appropriate margins.
    Wrong Context: Using an ultra-formal preppy font for a casual pizza restaurant’s menu will feel jarring and inappropriate. Match your font choice to your content and audience.
    The Future of Preppy Typography
    As we look ahead in 2025, preppy fonts continue to evolve while maintaining their classic appeal. We’re seeing interesting trends emerge:
    Variable Font Technology: Modern preppy fonts are increasingly available as variable fonts, allowing designers to fine-tune weight, width, and optical size for perfect customization.
    Screen Optimization: Classic preppy fonts are being redrawn and optimized for digital screens without losing their traditional charm.
    Inclusive Preppy: Designers are expanding the preppy aesthetic beyond its traditional boundaries, creating fonts that maintain sophistication while feeling more accessible and diverse.
    Sustainable Design: The timeless nature of preppy fonts aligns perfectly with sustainable design principles – these typefaces won’t look dated next year, making them environmentally responsible choices.
    Conclusion: Embracing Timeless Elegance
    Preppy fonts represent more than just letterforms – they’re a gateway to timeless elegance and sophisticated communication. Whether you’re designing wedding invitations for a Martha’s Vineyard ceremony or creating brand identity for a boutique law firm, the right preppy font can elevate your work from merely professional to genuinely distinguished.
    The beauty of preppy typography lies in its ability to feel both traditional and fresh, formal yet approachable. These fonts have stood the test of time because they tap into something fundamental about how we perceive quality, tradition, and sophistication.
    As you explore the world of preppy fonts, remember that the best typography choices support your message rather than overshadowing it. Choose fonts that enhance your content’s inherent qualities and speak to your audience’s aspirations and values.
    So whether you’re channeling that old-money aesthetic or simply want to add a touch of refined elegance to your designs, preppy fonts offer a wealth of possibilities. After all, good typography, like good manners, never goes out of style.
  • How to optimize your hybrid waterfall with CPM buckets

    In-app bidding has automated most waterfall optimization, yet developers still manage multiple hybrid waterfalls, each with dozens of manual instances. Naturally, this can be time-consuming and overwhelming to maintain, keeping you from optimizing to perfection and focusing on other opportunities to boost revenue. Rather than analyzing each individual network and checking if instances are available at each price point, breaking down your waterfall into different CPM ranges allows you to visualize the waterfall and easily identify the gaps. Here are some tips on how to use CPM buckets to better optimize your waterfall’s performance.

    What are CPM buckets?
    CPM buckets show you exactly how much revenue and how many impressions you’re getting from each CPM price range, giving you a more granular idea of how different networks are competing in the waterfall. CPM buckets are a feature of real-time pivot reports, available on ironSource LevelPlay.

    Identifying and closing the gaps
    Typically in a waterfall, you can only see each ad network’s average CPM. But this keeps you from seeing ad network distribution across all price points and understanding exactly where ad networks are bidding. Bottom line - you don’t know where in the waterfall you should add a new instance. By separating CPM into buckets, you understand exactly which networks are driving impressions and revenue and which CPMs aren’t being filled.

    Now how do you do it? As a LevelPlay client, simply use ironSource’s real-time pivot reports - choose the CPM bucket filter option and sort by “average bid price.” From here, you’ll see how your revenue spreads out among CPM ranges and you’ll start to notice gaps in your bar graph. Every gap in revenue - where revenue is much lower than the neighboring CPM group - indicates an opportunity to optimize your monetization strategy. The buckets can range from small increments to much larger ones, so it’s important to compare CPM buckets of the same incremental value.

    Pro tip: To best set up your waterfall, create one tab with the general waterfall and make sure to look at Revenue and eCPM in the “measures” dropdown. In the “show” section, choose CPM buckets and sort by average bid price. From here, you can mark down any gaps.

    But where do these gaps come from? Gaps in revenue are often due to friction in the waterfall, like not enough instances, instances that aren’t working, or a waterfall setup mistake. But gaps can also be adjusted and fixed.

    Once you’ve found a gap, look at the CPM buckets around it to better understand the context. Let’s say you see a strong instance generating significant revenue in the CPM bucket right below the gap - the group topping out around $80. This instance from this specific ad network has a lot of potential, so it’s worth trying to push it to a higher CPM bucket. In fact, when you look at higher CPM buckets, you don’t see this ad network anywhere else in the waterfall - what a missed opportunity! Try adding another instance of this network higher up in the waterfall. If you’re profiting well at a CPM just under $80, imagine how much more revenue you could bring in at a higher CPM.

    Pro tip: Focusing on higher areas in the waterfall makes a larger financial impact, leading to bigger increases in ARPDAU.

    Let’s say you decide to add 5 instances of that network to higher CPM buckets. You can use LevelPlay’s quick A/B test to understand if this adjustment boosts your revenue - not just for this gap, but for any and all that you find.
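    If you export the pivot report rows, you can sanity-check the same gap-hunting logic yourself. Here’s a rough TypeScript sketch; the row shape, the $10 bucket size, and the 50% gap threshold are illustrative assumptions rather than the actual LevelPlay export format:

```typescript
// Hypothetical impression-level rows; a real pivot report export will differ.
interface ImpressionRow {
  adNetwork: string;
  cpm: number;        // effective CPM in USD
  impressions: number;
  revenue: number;
}

interface Bucket {
  revenue: number;
  impressions: number;
  networks: Set<string>; // which ad networks actually fill this price range
}

// Group rows into fixed-size CPM buckets (e.g. $0-10, $10-20, ...).
function bucketByCpm(rows: ImpressionRow[], bucketSize = 10): Map<number, Bucket> {
  const buckets = new Map<number, Bucket>();
  for (const row of rows) {
    const lower = Math.floor(row.cpm / bucketSize) * bucketSize;
    const bucket =
      buckets.get(lower) ?? { revenue: 0, impressions: 0, networks: new Set<string>() };
    bucket.revenue += row.revenue;
    bucket.impressions += row.impressions;
    bucket.networks.add(row.adNetwork);
    buckets.set(lower, bucket);
  }
  return buckets;
}

// Flag buckets earning far less than both neighbors - candidate gaps to investigate.
function findGaps(buckets: Map<number, Bucket>, bucketSize = 10, threshold = 0.5): number[] {
  return [...buckets.keys()]
    .sort((a, b) => a - b)
    .filter((lower) => {
      const prev = buckets.get(lower - bucketSize)?.revenue ?? 0;
      const next = buckets.get(lower + bucketSize)?.revenue ?? 0;
      const current = buckets.get(lower)!.revenue;
      return prev > 0 && next > 0 && current < threshold * Math.min(prev, next);
    });
}
```

    Each lower bound findGaps returns is a price range worth checking for missing or underperforming instances before you run the A/B test, and because every bucket also tracks which networks fill it, the same structure shows where each bidder is actually landing.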
    But where do these gaps come from? Gaps in revenue are often due to friction in the waterfall, like not enough instances, instances that aren’t working, or a waterfall setup mistake. The good news is that gaps can be adjusted and fixed. Once you’ve found a gap, look at the CPM buckets around it to better understand the context. Let’s say you see a strong instance generating significant revenue in the CPM bucket right below the gap, in the $70-80 group. This instance from this specific ad network has a lot of potential, so it’s worth trying to push it to a higher CPM bucket. In fact, when you look at higher CPM buckets, you don’t see this ad network anywhere else in the waterfall - what a missed opportunity! Try adding another instance of this network higher up in the waterfall. If you’re profiting well with a $70-80 CPM, imagine how much more revenue you could bring in at a $150 CPM.

    Pro tip: Focusing on higher areas in the waterfall makes a larger financial impact, leading to bigger increases in ARPDAU.

    Let’s say you decide to add 5 instances of that network to higher CPM buckets. You can use LevelPlay’s quick A/B test to understand if this adjustment boosts your revenue - not just for this gap, but for any and all that you find. Simply compare your existing waterfall against the new waterfall with these 5 higher instances, then implement the one that drives the most revenue.

    Božo Janković, Head of Ad Monetization at GameBiz Consulting, uses CPM buckets “to understand at which CPMs the bidding networks are filling. From there, I can pinpoint exactly where in the waterfall to add more traditional instances - which creates more competition, especially for the bidding networks, and creates an opportunity for revenue growth.”

    Finding new insights
    You can dig even deeper into your data by filtering by ad source. Before CPM buckets, you were limited to seeing an average eCPM for each bidding network. Maybe you knew that one ad source had an average CPM of $50, but the distribution of impressions across the waterfall was a black box. Now, we know exactly which CPMs the bidders are filling.

    “I find ironSource’s CPM buckets feature very insightful and use it daily. It’s an easy way to identify opportunities to optimize the waterfall and earn even more revenue.” - Božo Janković, Head of Ad Monetization at GameBiz Consulting

    Understanding your CPM distribution empowers you not only to identify your revenue sources, but also to promote revenue growth. Armed with the knowledge of which buckets some of their stronger bidding networks are performing in, some publishers actively add instances from traditional networks above those ranges. This creates better competition and also helps drive up the bids from the bidders. There’s no need for deep analysis - once you see the gaps, you can quickly understand who’s performing in the lower and higher buckets, and see exactly what’s missing. This way, you won’t leave any revenue on the table.

    Learn more about CPM buckets, available exclusively on ironSource LevelPlay, here.
  • The stunning reversal of humanity’s oldest bias

    Perhaps the oldest, most pernicious form of human bias is that of men toward women. It often started at the moment of birth. In ancient Athens, at a public ceremony called the amphidromia, fathers would inspect a newborn and decide whether it would be part of the family, or be cast away. One often socially acceptable reason for abandoning the baby: It was a girl. Female infanticide has been distressingly common in many societies — and its practice is not just ancient history. In 1990, the Nobel Prize-winning economist Amartya Sen looked at birth ratios in Asia, North Africa, and China and calculated that more than 100 million women were essentially “missing” — meaning that, based on the normal ratio of boys to girls at birth and the longevity of both genders, there was a huge number of missing girls who should have been born, but weren’t. Sen’s estimate came before the truly widespread adoption of ultrasound tests that could determine the sex of a fetus in utero — which actually made the problem worse, leading to a wave of sex-selective abortions. These were especially common in countries like India and China; the latter’s one-child policy and old biases made families desperate for their one child to be a boy. The Economist has estimated that since 1980 alone, there have been approximately 50 million fewer girls born worldwide than would naturally be expected, which almost certainly means that nearly all of those girls were aborted for no other reason than their sex. The preference for boys was a bias that killed in mass numbers.

    But in one of the most important social shifts of our time, that bias is changing. In a great cover story earlier this month, The Economist reported that the number of annual excess male births has fallen from a peak of 1.7 million in 2000 to around 200,000, which puts it back within the biologically standard birth ratio of 105 boys for every 100 girls. Countries that once had highly skewed sex ratios — like South Korea, which saw almost 116 boys born for every 100 girls in 1990 — now have normal or near-normal ratios. Altogether, The Economist estimated that the decline in sex preference at birth in the past 25 years has saved the equivalent of 7 million girls. That’s comparable to the number of lives saved by anti-smoking efforts in the US. So how, exactly, have we overcome a prejudice that seemed so embedded in human society?

    Success in school and the workplace
    For one, we have relaxed discrimination against girls and women in other ways — in school and in the workplace. With fewer limits, girls are outperforming boys in the classroom. In the most recent international PISA tests, considered the gold standard for evaluating student performance around the world, 15-year-old girls beat their male counterparts in reading in 79 out of 81 participating countries or economies, while the historic male advantage in math scores has fallen to single digits. Girls are also dominating in higher education, with 113 female students at that level for every 100 male students. While women continue to earn less than men, the gender pay gap has been shrinking, and in a number of urban areas in the US, young women have actually been outearning young men.

    Government policies have helped accelerate that shift, in part because governments have come to recognize the serious social problems that eventually result from decades of anti-girl discrimination. In countries like South Korea and China, which have long had some of the most skewed gender ratios at birth, governments have cracked down on technologies that enable sex-selective abortion. In India, where female infanticide and neglect have been particularly horrific, slogans like “Save the Daughter, Educate the Daughter” have helped change opinions.

    A changing preference
    The shift is being seen not just in birth sex ratios, but in opinion polls — and in the actions of would-be parents. Between 1983 and 2003, The Economist reported, the proportion of South Korean women who said it was “necessary” to have a son fell from 48 percent to 6 percent, while nearly half of women now say they want daughters. In Japan, the shift has gone even further — as far back as 2002, 75 percent of couples who wanted only one child said they hoped for a daughter. In the US, which allows sex selection for couples doing in-vitro fertilization, there is growing evidence that would-be parents prefer girls, as do potential adoptive parents. While in the past, parents who had a girl first were more likely to keep trying to have children in an effort to have a boy, the opposite is now true — couples who have a girl first are less likely to keep trying.

    A more equal future
    There’s still more progress to be made. In northwest India, for instance, birth ratios that skew heavily toward boys are still the norm. In regions of sub-Saharan Africa, birth sex ratios may be relatively normal, but post-birth discrimination in the form of poorer nutrition and worse medical care still lingers. And of course, women around the world are still subject to unacceptable levels of violence and discrimination from men.

    And some of the reasons for this shift may not be as high-minded as we’d like to think. Boys around the world are struggling in the modern era. They increasingly underperform in education, are more likely to be involved in violent crime, and, in general, are failing to launch into adulthood. In the US, 20 percent of men between 25 and 34 still live with their parents, compared to 15 percent of similarly aged women. It also seems to be the case that at least some of the increasing preference for girls is rooted in sexist stereotypes. Parents around the world may now prefer girls partly because they see them as more likely to take care of them in their old age — meaning a different kind of bias against women, that they are more natural caretakers, may be paradoxically driving the decline in prejudice against girls at birth.

    But make no mistake — the decline of boy preference is a clear mark of social progress, one measured in millions of girls’ lives saved. And maybe one Father’s Day, not too long from now, we’ll reach the point where daughters and sons are simply children: equally loved and equally welcomed.

    A version of this story originally appeared in the Good News newsletter.
  • Four science-based rules that will make your conversations flow

    One of the four pillars of good conversation is levity. You needn’t be a comedian, but you can still have some fun. Tetra Images, LLC/Alamy
    Conversation lies at the heart of our relationships – yet many of us find it surprisingly hard to talk to others. We may feel anxious at the thought of making small talk with strangers and struggle to connect with the people who are closest to us. If that sounds familiar, Alison Wood Brooks hopes to help. She is a professor at Harvard Business School, where she teaches an oversubscribed course called “TALK: How to talk gooder in business and life”, and the author of a new book, Talk: The science of conversation and the art of being ourselves. Both offer four key principles for more meaningful exchanges. Conversations are inherently unpredictable, says Wood Brooks, but they follow certain rules – and knowing their architecture makes us more comfortable with what is outside of our control. New Scientist asked her about the best ways to apply this research to our own chats.
    David Robson: Talking about talking feels quite meta. Do you ever find yourself critiquing your own performance?
    Alison Wood Brooks: There are so many levels of “meta-ness”. I have often felt like I’m floating over the room, watching conversations unfold, even as I’m involved in them myself. I teach a course at Harvard, and [my students] all get to experience this feeling as well. There can be an uncomfortable period of hypervigilance, but I hope that dissipates over time as they develop better habits. There is a famous quote from Charlie Parker, who was a jazz saxophonist. He said something like, “Practise, practise, practise, and then when you get on stage, let it all go and just wail.” I think that’s my approach to conversation. Even when you’re hyper-aware of conversation dynamics, you have to remember the true delight of being with another human mind, and never lose the magic of being together. Think ahead, but once you’re talking, let it all go and just wail.

    Reading your book, I learned that a good way to enliven a conversation is to ask someone why they are passionate about what they do. So, where does your passion for conversation come from?
    I have two answers to this question. One is professional. Early in my professorship at Harvard, I had been studying emotions by exploring how people talk about their feelings and the balance between what we feel inside and how we express that to others. And I realised I just had this deep, profound interest in figuring out how people talk to each other about everything, not just their feelings. We now have scientific tools that allow us to capture conversations and analyse them at large scale. Natural language processing, machine learning, the advent of AI – all this allows us to take huge swathes of transcript data and process it much more efficiently.

    The personal answer is that I’m an identical twin, and I spent my whole life, from the moment I opened my newborn eyes, existing next to a person who’s an exact copy of myself. It was like observing myself at very close range, interacting with the world, interacting with other people. I could see when she said and did things well, and I could try to do that myself. And I saw when her jokes failed, or she stumbled over her words – I tried to avoid those mistakes. It was a very fortunate form of feedback that not a lot of people get. And then, as a twin, you’ve got this person sharing a bedroom, sharing all your clothes, going to all the same parties and playing on the same sports teams, so we were just constantly in conversation with each other. You reached this level of shared reality that is so incredible, and I’ve spent the rest of my life trying to help other people get there in their relationships, too.
    “TALK” cleverly captures your framework for better conversations: topics, asking, levity and kindness. Let’s start at the beginning. How should we decide what to talk about?
    My first piece of advice is to prepare. Some people do this naturally. They already think about the things that they should talk about with somebody before they see them. They should lean into this habit. Some of my students, however, think it’s crazy. They think preparation will make the conversation seem rigid and forced and overly scripted. But just because you’ve thought ahead about what you might talk about doesn’t mean you have to talk about those things once the conversation is underway. It does mean, however, that you always have an idea waiting for you when you’re not sure what to talk about next. Having just one topic in your back pocket can help you in those anxiety-ridden moments. It makes things more fluent, which is important for establishing a connection. Choosing a topic is not only important at the start of a conversation. We’re constantly making decisions about whether we should stay on one subject, drift to something else or totally shift gears and go somewhere wildly different.
    Sometimes the topic of conversation is obvious. Even then, knowing when to switch to a new one can be tricky. Martin Parr/Magnum Photos
    What’s your advice when making these decisions?
    There are three very clear signs that suggest that it’s time to switch topics. The first is longer mutual pauses. The second is more uncomfortable laughter, which we use to fill the space that we would usually fill excitedly with good content. And the third sign is redundancy. Once you start repeating things that have already been said on the topic, it’s a sign that you should move to something else.
    After an average conversation, most people feel like they’ve covered the right number of topics. But if you ask people after conversations that didn’t go well, they’ll more often say that they didn’t talk about enough things, rather than that they talked about too many things. This suggests that a common mistake is lingering too long on a topic after you’ve squeezed all the juice out of it.
    The second element of TALK is asking questions. I think a lot of us have heard the advice to ask more questions, yet many people don’t apply it. Why do you think that is?
    Many years of research have shown that the human mind is remarkably egocentric. Often, we are so focused on our own perspective that we forget to even ask someone else to share what’s in their mind. Another reason is fear. You’re interested in the other person, and you know you should ask them questions, but you’re afraid of being too intrusive, or that you will reveal your own incompetence, because you feel you should know the answer already.

    What kinds of questions should we be asking – and avoiding?
    In the book, I talk about the power of follow-up questions that build on anything that your partner has just said. It shows that you heard them, that you care and that you want to know more. Even one follow-up question can springboard us away from shallow talk into something deeper and more meaningful.
    There are, however, some bad patterns of question asking, such as “boomerasking”. Michael Yeomans [at Imperial College London] and I have a recent paper about this, and oh my gosh, it’s been such fun to study. It’s a play on the word boomerang: it comes back to the person who threw it. If I ask you what you had for breakfast, and you tell me you had Special K and banana, and then I say, “Well, let me tell you about my breakfast, because, boy, was it delicious” – that’s boomerasking. Sometimes it’s a thinly veiled way of bragging or complaining, but sometimes I think people are genuinely interested to hear from their partner, but then the partner’s answer reminds them so much of their own life that they can’t help but start sharing their perspective. In our research, we have found that this makes your partner feel like you weren’t interested in their perspective, so it seems very insincere. Sharing your own perspective is important. It’s okay at some point to bring the conversation back to yourself. But don’t do it so soon that it makes your partner feel like you didn’t hear their answer or care about it.
    Research by Alison Wood Brooks includes a recent study on “boomerasking”, a pitfall you should avoid to make conversations flow. Janelle Bruno
    What are the benefits of levity?
    When we think of conversations that haven’t gone well, we often think of moments of hostility, anger or disagreement, but a quiet killer of conversation is boredom. Levity is the antidote. These small moments of sparkle or fizz can pull us back in and make us feel engaged with each other again.
    Our research has shown that we give status and respect to people who make us feel good, so much so that in a group of people, a person who can land even one appropriate joke is more likely to be voted as the leader. And the joke doesn’t even need to be very funny! It’s the fact that they were confident enough to try it and competent enough to read the room.
    Do you have any practical steps that people can apply to generate levity, even if they’re not a natural comedian?
    Levity is not just about being funny. In fact, aiming to be a comedian is not the right goal. When we watch stand-up on Netflix, comedians have rehearsed those jokes and honed them and practised them for a long time, and they’re delivering them in a monologue to an audience. It’s a completely different task from a live conversation. In real dialogue, what everybody is looking for is to feel engaged, and that doesn’t require particularly funny jokes or elaborate stories. When you see opportunities to make it fun or lighten the mood, that’s what you need to grab. It can come through a change to a new, fresh topic, or calling back to things that you talked about earlier in the conversation or earlier in your relationship. These callbacks – which sometimes do refer to something funny – are such a nice way of showing that you’ve listened and remembered. A levity move could also involve giving sincere compliments to other people. When you think nice things, when you admire someone, make sure you say it out loud.

    This brings us to the last element of TALK: kindness. Why do we so often fail to be as kind as we would like?
    Wobbles in kindness often come back to our egocentrism. Research shows that we underestimate how much other people’s perspectives differ from our own, and we forget that we have the tools to ask other people directly in conversation for their perspective. Being a kinder conversationalist is about trying to focus on your partner’s perspective, and then figuring out what they need and helping them to get it.
    Finally, what is your number one tip for readers to have a better conversation the next time they speak to someone?
    Every conversation is surprisingly tricky and complex. When things don’t go perfectly, give yourself and others more grace. There will be trips and stumbles, and a little grace can go very, very far.
    Topics:
    #four #sciencebased #rules #that #will
    Four science-based rules that will make your conversations flow
    One of the four pillars of good conversation is levity. You needn’t be a comedian, you can but have some funTetra Images, LLC/Alamy Conversation lies at the heart of our relationships – yet many of us find it surprisingly hard to talk to others. We may feel anxious at the thought of making small talk with strangers and struggle to connect with the people who are closest to us. If that sounds familiar, Alison Wood Brooks hopes to help. She is a professor at Harvard Business School, where she teaches an oversubscribed course called “TALK: How to talk gooder in business and life”, and the author of a new book, Talk: The science of conversation and the art of being ourselves. Both offer four key principles for more meaningful exchanges. Conversations are inherently unpredictable, says Wood Brooks, but they follow certain rules – and knowing their architecture makes us more comfortable with what is outside of our control. New Scientist asked her about the best ways to apply this research to our own chats. David Robson: Talking about talking feels quite meta. Do you ever find yourself critiquing your own performance? Alison Wood Brooks: There are so many levels of “meta-ness”. I have often felt like I’m floating over the room, watching conversations unfold, even as I’m involved in them myself. I teach a course at Harvard, andall get to experience this feeling as well. There can be an uncomfortable period of hypervigilance, but I hope that dissipates over time as they develop better habits. There is a famous quote from Charlie Parker, who was a jazz saxophonist. He said something like, “Practise, practise, practise, and then when you get on stage, let it all go and just wail.” I think that’s my approach to conversation. Even when you’re hyper-aware of conversation dynamics, you have to remember the true delight of being with another human mind, and never lose the magic of being together. Think ahead, but once you’re talking, let it all go and just wail. Reading your book, I learned that a good way to enliven a conversation is to ask someone why they are passionate about what they do. So, where does your passion for conversation come from? I have two answers to this question. One is professional. Early in my professorship at Harvard, I had been studying emotions by exploring how people talk about their feelings and the balance between what we feel inside and how we express that to others. And I realised I just had this deep, profound interest in figuring out how people talk to each other about everything, not just their feelings. We now have scientific tools that allow us to capture conversations and analyse them at large scale. Natural language processing, machine learning, the advent of AI – all this allows us to take huge swathes of transcript data and process it much more efficiently. Receive a weekly dose of discovery in your inbox. Sign up to newsletter The personal answer is that I’m an identical twin, and I spent my whole life, from the moment I opened my newborn eyes, existing next to a person who’s an exact copy of myself. It was like observing myself at very close range, interacting with the world, interacting with other people. I could see when she said and did things well, and I could try to do that myself. And I saw when her jokes failed, or she stumbled over her words – I tried to avoid those mistakes. It was a very fortunate form of feedback that not a lot of people get. 
And then, as a twin, you’ve got this person sharing a bedroom, sharing all your clothes, going to all the same parties and playing on the same sports teams, so we were just constantly in conversation with each other. You reached this level of shared reality that is so incredible, and I’ve spent the rest of my life trying to help other people get there in their relationships, too. “TALK” cleverly captures your framework for better conversations: topics, asking, levity and kindness. Let’s start at the beginning. How should we decide what to talk about? My first piece of advice is to prepare. Some people do this naturally. They already think about the things that they should talk about with somebody before they see them. They should lean into this habit. Some of my students, however, think it’s crazy. They think preparation will make the conversation seem rigid and forced and overly scripted. But just because you’ve thought ahead about what you might talk about doesn’t mean you have to talk about those things once the conversation is underway. It does mean, however, that you always have an idea waiting for you when you’re not sure what to talk about next. Having just one topic in your back pocket can help you in those anxiety-ridden moments. It makes things more fluent, which is important for establishing a connection. Choosing a topic is not only important at the start of a conversation. We’re constantly making decisions about whether we should stay on one subject, drift to something else or totally shift gears and go somewhere wildly different. Sometimes the topic of conversation is obvious. Even then, knowing when to switch to a new one can be trickyMartin Parr/Magnum Photos What’s your advice when making these decisions? There are three very clear signs that suggest that it’s time to switch topics. The first is longer mutual pauses. The second is more uncomfortable laughter, which we use to fill the space that we would usually fill excitedly with good content. And the third sign is redundancy. Once you start repeating things that have already been said on the topic, it’s a sign that you should move to something else. After an average conversation, most people feel like they’ve covered the right number of topics. But if you ask people after conversations that didn’t go well, they’ll more often say that they didn’t talk about enough things, rather than that they talked about too many things. This suggests that a common mistake is lingering too long on a topic after you’ve squeezed all the juice out of it. The second element of TALK is asking questions. I think a lot of us have heard the advice to ask more questions, yet many people don’t apply it. Why do you think that is? Many years of research have shown that the human mind is remarkably egocentric. Often, we are so focused on our own perspective that we forget to even ask someone else to share what’s in their mind. Another reason is fear. You’re interested in the other person, and you know you should ask them questions, but you’re afraid of being too intrusive, or that you will reveal your own incompetence, because you feel you should know the answer already. What kinds of questions should we be asking – and avoiding? In the book, I talk about the power of follow-up questions that build on anything that your partner has just said. It shows that you heard them, that you care and that you want to know more. Even one follow-up question can springboard us away from shallow talk into something deeper and more meaningful. 
There are, however, some bad patterns of question asking, such as “boomerasking”. Michael Yeomansand I have a recent paper about this, and oh my gosh, it’s been such fun to study. It’s a play on the word boomerang: it comes back to the person who threw it. If I ask you what you had for breakfast, and you tell me you had Special K and banana, and then I say, “Well, let me tell you about my breakfast, because, boy, was it delicious” – that’s boomerasking. Sometimes it’s a thinly veiled way of bragging or complaining, but sometimes I think people are genuinely interested to hear from their partner, but then the partner’s answer reminds them so much of their own life that they can’t help but start sharing their perspective. In our research, we have found that this makes your partner feel like you weren’t interested in their perspective, so it seems very insincere. Sharing your own perspective is important. It’s okay at some point to bring the conversation back to yourself. But don’t do it so soon that it makes your partner feel like you didn’t hear their answer or care about it. Research by Alison Wood Brooks includes a recent study on “boomerasking”, a pitfall you should avoid to make conversations flowJanelle Bruno What are the benefits of levity? When we think of conversations that haven’t gone well, we often think of moments of hostility, anger or disagreement, but a quiet killer of conversation is boredom. Levity is the antidote. These small moments of sparkle or fizz can pull us back in and make us feel engaged with each other again. Our research has shown that we give status and respect to people who make us feel good, so much so that in a group of people, a person who can land even one appropriate joke is more likely to be voted as the leader. And the joke doesn’t even need to be very funny! It’s the fact that they were confident enough to try it and competent enough to read the room. Do you have any practical steps that people can apply to generate levity, even if they’re not a natural comedian? Levity is not just about being funny. In fact, aiming to be a comedian is not the right goal. When we watch stand-up on Netflix, comedians have rehearsed those jokes and honed them and practised them for a long time, and they’re delivering them in a monologue to an audience. It’s a completely different task from a live conversation. In real dialogue, what everybody is looking for is to feel engaged, and that doesn’t require particularly funny jokes or elaborate stories. When you see opportunities to make it fun or lighten the mood, that’s what you need to grab. It can come through a change to a new, fresh topic, or calling back to things that you talked about earlier in the conversation or earlier in your relationship. These callbacks – which sometimes do refer to something funny – are such a nice way of showing that you’ve listened and remembered. A levity move could also involve giving sincere compliments to other people. When you think nice things, when you admire someone, make sure you say it out loud. This brings us to the last element of TALK: kindness. Why do we so often fail to be as kind as we would like? Wobbles in kindness often come back to our egocentrism. Research shows that we underestimate how much other people’s perspectives differ from our own, and we forget that we have the tools to ask other people directly in conversation for their perspective. 
Being a kinder conversationalist is about trying to focus on your partner’s perspective and then figuring what they need and helping them to get it. Finally, what is your number one tip for readers to have a better conversation the next time they speak to someone? Every conversation is surprisingly tricky and complex. When things don’t go perfectly, give yourself and others more grace. There will be trips and stumbles and then a little grace can go very, very far. Topics: #four #sciencebased #rules #that #will
    WWW.NEWSCIENTIST.COM
    Four science-based rules that will make your conversations flow
    One of the four pillars of good conversation is levity. You needn’t be a comedian, you can but have some funTetra Images, LLC/Alamy Conversation lies at the heart of our relationships – yet many of us find it surprisingly hard to talk to others. We may feel anxious at the thought of making small talk with strangers and struggle to connect with the people who are closest to us. If that sounds familiar, Alison Wood Brooks hopes to help. She is a professor at Harvard Business School, where she teaches an oversubscribed course called “TALK: How to talk gooder in business and life”, and the author of a new book, Talk: The science of conversation and the art of being ourselves. Both offer four key principles for more meaningful exchanges. Conversations are inherently unpredictable, says Wood Brooks, but they follow certain rules – and knowing their architecture makes us more comfortable with what is outside of our control. New Scientist asked her about the best ways to apply this research to our own chats. David Robson: Talking about talking feels quite meta. Do you ever find yourself critiquing your own performance? Alison Wood Brooks: There are so many levels of “meta-ness”. I have often felt like I’m floating over the room, watching conversations unfold, even as I’m involved in them myself. I teach a course at Harvard, and [my students] all get to experience this feeling as well. There can be an uncomfortable period of hypervigilance, but I hope that dissipates over time as they develop better habits. There is a famous quote from Charlie Parker, who was a jazz saxophonist. He said something like, “Practise, practise, practise, and then when you get on stage, let it all go and just wail.” I think that’s my approach to conversation. Even when you’re hyper-aware of conversation dynamics, you have to remember the true delight of being with another human mind, and never lose the magic of being together. Think ahead, but once you’re talking, let it all go and just wail. Reading your book, I learned that a good way to enliven a conversation is to ask someone why they are passionate about what they do. So, where does your passion for conversation come from? I have two answers to this question. One is professional. Early in my professorship at Harvard, I had been studying emotions by exploring how people talk about their feelings and the balance between what we feel inside and how we express that to others. And I realised I just had this deep, profound interest in figuring out how people talk to each other about everything, not just their feelings. We now have scientific tools that allow us to capture conversations and analyse them at large scale. Natural language processing, machine learning, the advent of AI – all this allows us to take huge swathes of transcript data and process it much more efficiently. Receive a weekly dose of discovery in your inbox. Sign up to newsletter The personal answer is that I’m an identical twin, and I spent my whole life, from the moment I opened my newborn eyes, existing next to a person who’s an exact copy of myself. It was like observing myself at very close range, interacting with the world, interacting with other people. I could see when she said and did things well, and I could try to do that myself. And I saw when her jokes failed, or she stumbled over her words – I tried to avoid those mistakes. It was a very fortunate form of feedback that not a lot of people get. 
And then, as a twin, you’ve got this person sharing a bedroom, sharing all your clothes, going to all the same parties and playing on the same sports teams, so we were just constantly in conversation with each other. You reached this level of shared reality that is so incredible, and I’ve spent the rest of my life trying to help other people get there in their relationships, too.

“TALK” cleverly captures your framework for better conversations: topics, asking, levity and kindness. Let’s start at the beginning. How should we decide what to talk about?

My first piece of advice is to prepare. Some people do this naturally. They already think about the things that they should talk about with somebody before they see them. They should lean into this habit. Some of my students, however, think it’s crazy. They think preparation will make the conversation seem rigid and forced and overly scripted. But just because you’ve thought ahead about what you might talk about doesn’t mean you have to talk about those things once the conversation is underway. It does mean, however, that you always have an idea waiting for you when you’re not sure what to talk about next. Having just one topic in your back pocket can help you in those anxiety-ridden moments. It makes things more fluent, which is important for establishing a connection. Choosing a topic is not only important at the start of a conversation. We’re constantly making decisions about whether we should stay on one subject, drift to something else or totally shift gears and go somewhere wildly different.

Sometimes the topic of conversation is obvious. Even then, knowing when to switch to a new one can be tricky.

What’s your advice when making these decisions?

There are three very clear signs that suggest that it’s time to switch topics. The first is longer mutual pauses. The second is more uncomfortable laughter, which we use to fill the space that we would usually fill excitedly with good content. And the third sign is redundancy. Once you start repeating things that have already been said on the topic, it’s a sign that you should move to something else. After an average conversation, most people feel like they’ve covered the right number of topics. But if you ask people after conversations that didn’t go well, they’ll more often say that they didn’t talk about enough things, rather than that they talked about too many things. This suggests that a common mistake is lingering too long on a topic after you’ve squeezed all the juice out of it.

The second element of TALK is asking questions. I think a lot of us have heard the advice to ask more questions, yet many people don’t apply it. Why do you think that is?

Many years of research have shown that the human mind is remarkably egocentric. Often, we are so focused on our own perspective that we forget to even ask someone else to share what’s in their mind. Another reason is fear. You’re interested in the other person, and you know you should ask them questions, but you’re afraid of being too intrusive, or that you will reveal your own incompetence, because you feel you should know the answer already.

What kinds of questions should we be asking – and avoiding?

In the book, I talk about the power of follow-up questions that build on anything that your partner has just said. It shows that you heard them, that you care and that you want to know more. Even one follow-up question can springboard us away from shallow talk into something deeper and more meaningful.
There are, however, some bad patterns of question asking, such as “boomerasking”. Michael Yeomans [at Imperial College London] and I have a recent paper about this, and oh my gosh, it’s been such fun to study. It’s a play on the word boomerang: it comes back to the person who threw it. If I ask you what you had for breakfast, and you tell me you had Special K and banana, and then I say, “Well, let me tell you about my breakfast, because, boy, was it delicious” – that’s boomerasking. Sometimes it’s a thinly veiled way of bragging or complaining, but sometimes I think people are genuinely interested to hear from their partner, but then the partner’s answer reminds them so much of their own life that they can’t help but start sharing their perspective. In our research, we have found that this makes your partner feel like you weren’t interested in their perspective, so it seems very insincere. Sharing your own perspective is important. It’s okay at some point to bring the conversation back to yourself. But don’t do it so soon that it makes your partner feel like you didn’t hear their answer or care about it.

Research by Alison Wood Brooks includes a recent study on “boomerasking”, a pitfall you should avoid to make conversations flow.

What are the benefits of levity?

When we think of conversations that haven’t gone well, we often think of moments of hostility, anger or disagreement, but a quiet killer of conversation is boredom. Levity is the antidote. These small moments of sparkle or fizz can pull us back in and make us feel engaged with each other again. Our research has shown that we give status and respect to people who make us feel good, so much so that in a group of people, a person who can land even one appropriate joke is more likely to be voted as the leader. And the joke doesn’t even need to be very funny! It’s the fact that they were confident enough to try it and competent enough to read the room.

Do you have any practical steps that people can apply to generate levity, even if they’re not a natural comedian?

Levity is not just about being funny. In fact, aiming to be a comedian is not the right goal. When we watch stand-up on Netflix, comedians have rehearsed those jokes and honed them and practised them for a long time, and they’re delivering them in a monologue to an audience. It’s a completely different task from a live conversation. In real dialogue, what everybody is looking for is to feel engaged, and that doesn’t require particularly funny jokes or elaborate stories. When you see opportunities to make it fun or lighten the mood, that’s what you need to grab. It can come through a change to a new, fresh topic, or calling back to things that you talked about earlier in the conversation or earlier in your relationship. These callbacks – which sometimes do refer to something funny – are such a nice way of showing that you’ve listened and remembered. A levity move could also involve giving sincere compliments to other people. When you think nice things, when you admire someone, make sure you say it out loud.

This brings us to the last element of TALK: kindness. Why do we so often fail to be as kind as we would like?

Wobbles in kindness often come back to our egocentrism. Research shows that we underestimate how much other people’s perspectives differ from our own, and we forget that we have the tools to ask other people directly in conversation for their perspective.
Being a kinder conversationalist is about trying to focus on your partner’s perspective and then figuring out what they need and helping them to get it.

Finally, what is your number one tip for readers to have a better conversation the next time they speak to someone?

Every conversation is surprisingly tricky and complex. When things don’t go perfectly, give yourself and others more grace. There will be trips and stumbles, and a little grace can go very, very far.
  • THIS Unexpected Rug Trend Is Taking Over—Here's How to Style It

    WWW.HOUSEBEAUTIFUL.COM
    THIS Unexpected Rug Trend Is Taking Over—Here's How to Style It
Pictured above: A dining room in Dallas, Texas, designed by Studio Thomas James.

As you design (or redesign) a room at home, you may have specific ideas about the paint color, furniture placement, and even the lighting scheme your space requires to truly sing. But, if you're not also considering what type of rug will ground the entire look, this essential room-finishing touch may end up feeling like an afterthought. After all, one of the best ways to ensure your space looks expertly planned from top to bottom is to opt for a rug that can anchor the whole space—and, in many cases, that means a maximalist rug.

A maximalist-style rug, or one that has a bold color, an abstract or asymmetrical pattern, an organic shape, distinctive pile texture, or unconventional application (such as functioning as a wall mural), offers a fresh answer to the perpetual design question, "What is this room missing?" Instead of defaulting to a neutral-colored, low-pile rug that goes largely unnoticed, a compelling case can be made for choosing a design that functions more as a tactile piece of art. Asha Chaudhary, the CEO of Jaipur, India-based rug brand Jaipur Living, has noticed many consumers moving away from "safe" interiors and embracing designs that pop with personality. "There’s a growing desire to design with individuality and soul. A vibrant or highly detailed rug can instantly transform a space by adding movement, contrast, and character, all in one single piece," she says.

Ahead, we spoke to Chaudhary to get her essential tips for choosing the right maximalist rug for your design style, how to evaluate the construction of a piece, and even why you should think outside the box when it comes to the standard area rug shape. Turns out, this foundational mainstay can be a deeply personal expression of identity.

When a Maximalist Rug Makes Sense
An outdoor lounge in Healdsburg, California, designed by Sheldon Harte.

As you might imagine, integrating a maximalist rug into an existing aesthetic isn't about making a one-to-one swap. You'll want to refine your overall approach and potentially tweak elements of the room already in place, too. "I like to think about rugs this way: Sometimes they play a supporting role, and other times, they’re the hero of the room," Chaudhary says. "Statement rugs are designed to stand out. They tell stories, stir emotion, and ground a space the way a bold piece of art would."

In Chaudhary's work with interior designers who are selecting rugs for clients' high-end homes, she's noticed that tastes have recently swung toward a more maximalist ethos. "Designers are leaning into expression and individuality," she says. "There’s growing interest in bold patterns, asymmetry, and designs that reflect the hand of the maker. Color-wise, we’re seeing more adventurous palettes: think jades, bordeauxes, and terracottas. And there’s a strong desire for rugs that feel personal, like they carry a story or a memory."

Jaipur Living’s Manchaha rugs are one-of-a-kind, hand-knotted pieces woven from upcycled hand-spun yarn that follow a freeform design of the artisan’s choosing. Jaipur Living is uniquely positioned to fulfill the need for one-of-a-kind rugs that are not just visually striking within a space, but deeply meaningful as well.
The brand's Manchaha collection (meaning “expression of my heart” in Hindi) comprises rugs made of upcycled yarn, each hand-knotted by rural Indian artisans in freeform shapes that capture the imagination. "Each piece is designed from the heart of the artisan, with no predetermined pattern, just emotion, inspiration, and memory woven together by hand. What excites me most is this shift away from perfection and toward beauty that feels lived-in, layered, and real," she adds.

How to Choose the Right Maximalist Rug
Design firm Drake/Anderson reimagined this Greenwich, Connecticut, living room.

Good news for those who are taking a slow-decorating approach with their home: Finding the right maximalist rug for your space means looking at the big picture first. "Most shoppers start with size and color, but the first question should really be, 'How will this space be used?' That answer guides everything—material, construction, and investment," says Chaudhary.

Are you styling an off-limits living room or a lively family den where guests may occasionally wander in with shoes on? In considering your materials, you may want to opt for a performance-fabric rug for areas subject to frequent wear and tear, but Chaudhary has a clear favorite for nearly all other spaces. "Wool is the gold standard. It’s naturally resilient, stain-resistant, and has excellent bounce-back, meaning it recovers well from foot traffic and furniture impressions," she says. "It’s also moisture-wicking and insulating, making it an ideal choice for both comfort and durability."

As far as construction goes, Chaudhary breaks down the most widely available options on the market: A hand-knotted rug, crafted by tying individual knots, is the most durable construction and can last decades, even with daily use. Hand-tufted rugs offer a beautiful look at a more accessible price point, but typically won’t have the same lifespan. Power-loomed rugs can be a great solution for high-traffic areas when made with quality materials. Though they fall at the higher end of the price spectrum, hand-knotted rugs aren't meant to be untouchable—after all, their quality construction helps ensure that they can stand up to minor mishaps in day-to-day living. This can shift your appreciation of a rug from a humble underfoot accent to a long-lasting art piece worthy of care and intentional restoration when the time comes. "Understanding these distinctions helps consumers make smarter, more lasting investments for their homes," Chaudhary says.

Opting for Unconventional Applications
Sarah Vaile designed this vibrant vestibule in Chicago, Illinois.

Maximalist rugs encompass an impressively broad category, and even if you already have an area rug rolled out that you're happy with, there are alternative shapes you can choose, or ways in which they can imbue creative expression far beyond the floor. "I’ve seen some incredibly beautiful applications of rugs as wall art. Especially when it comes to smaller or one-of-a-kind pieces, hanging them allows people to appreciate the detail, texture, and artistry at eye level," says Chaudhary.
"Some designers have also used narrow runners as table coverings or layered over larger textiles for added dimension."Another interesting facet of maximalist rugs is that you can think outside the rectangle in terms of silhouette."We’re seeing more interest in irregular rug shapes, think soft ovals, curves, even asymmetrical outlines," says Chaudhary. "Clients are designing with more fluidity and movement in mind, especially in open-plan spaces. Extra-long runners, oversized circles, and multi-shape layouts are also trending."Ultimately, the best maximalist rug for you is one that meets your home's needs while highlighting your personal style. In spaces where dramatic light fixtures or punchy paint colors aren't practical or allowed (in the case of renters), a statement-making rug is the ideal solution. While trends will continue to evolve, honing in on a unique—even tailor-made—design will help ensure aesthetic longevity. Follow House Beautiful on Instagram and TikTok.
  • EPFL Researchers Unveil FG2 at CVPR: A New AI Model That Slashes Localization Errors by 28% for Autonomous Vehicles in GPS-Denied Environments

Navigating the dense urban canyons of cities like San Francisco or New York can be a nightmare for GPS systems. The towering skyscrapers block and reflect satellite signals, leading to location errors of tens of meters. For you and me, that might mean a missed turn. But for an autonomous vehicle or a delivery robot, that level of imprecision is the difference between a successful mission and a costly failure. These machines require pinpoint accuracy to operate safely and efficiently. Addressing this critical challenge, researchers from the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland have introduced a groundbreaking new method for visual localization, presented at CVPR 2025.
Their new paper, “FG2: Fine-Grained Cross-View Localization by Fine-Grained Feature Matching,” presents a novel AI model that significantly enhances the ability of a ground-level system, like an autonomous car, to determine its exact position and orientation using only a camera and a corresponding aerial (or satellite) image. The new approach has demonstrated a remarkable 28% reduction in mean localization error compared to the previous state-of-the-art on a challenging public dataset.
    Key Takeaways:

    Superior Accuracy: The FG2 model reduces the average localization error by a significant 28% on the VIGOR cross-area test set, a challenging benchmark for this task.
    Human-like Intuition: Instead of relying on abstract descriptors, the model mimics human reasoning by matching fine-grained, semantically consistent features—like curbs, crosswalks, and buildings—between a ground-level photo and an aerial map.
    Enhanced Interpretability: The method allows researchers to “see” what the AI is “thinking” by visualizing exactly which features in the ground and aerial images are being matched, a major step forward from previous “black box” models.
    Weakly Supervised Learning: Remarkably, the model learns these complex and consistent feature matches without any direct labels for correspondences. It achieves this using only the final camera pose as a supervisory signal.

    Challenge: Seeing the World from Two Different Angles
The core problem of cross-view localization is the dramatic difference in perspective between a street-level camera and an overhead satellite view. A building facade seen from the ground looks completely different from its rooftop signature in an aerial image. Existing methods have struggled with this. Some create a general “descriptor” for the entire scene, but this is an abstract approach that doesn’t mirror how humans naturally localize themselves by spotting specific landmarks. Other methods transform the ground image into a Bird’s-Eye-View (BEV), but are often limited to the ground plane, ignoring crucial vertical structures like buildings.

    FG2: Matching Fine-Grained Features
    The EPFL team’s FG2 method introduces a more intuitive and effective process. It aligns two sets of points: one generated from the ground-level image and another sampled from the aerial map.

    Here’s a breakdown of their innovative pipeline:

    Mapping to 3D: The process begins by taking the features from the ground-level image and lifting them into a 3D point cloud centered around the camera. This creates a 3D representation of the immediate environment.
Smart Pooling to BEV: This is where the magic happens. Instead of simply flattening the 3D data, the model learns to intelligently select the most important features along the vertical (height) dimension for each point (see the pooling sketch after this list). It essentially asks, “For this spot on the map, is the ground-level road marking more important, or is the edge of that building’s roof the better landmark?” This selection process is crucial, as it allows the model to correctly associate features like building facades with their corresponding rooftops in the aerial view.
Feature Matching and Pose Estimation: Once both the ground and aerial views are represented as 2D point planes with rich feature descriptors, the model computes the similarity between them. It then samples a sparse set of the most confident matches and uses a classic geometric algorithm called Procrustes alignment to calculate the precise 3-DoF (x, y, and yaw) pose (a minimal alignment sketch follows below).
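To make the pooling step more concrete, here is a minimal NumPy sketch of the general idea: a scorer picks, per map cell, how much each height bin contributes to the final BEV feature. The grid layout and the `score_head` scorer are assumptions for illustration only, not the paper’s actual architecture, which learns this selection end to end.

```python
import numpy as np

def softmax(x, axis):
    """Numerically stable softmax along one axis."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def pool_height_to_bev(feature_volume, score_head):
    """Collapse a height-binned feature volume (H_bins, X, Y, C) into a BEV map (X, Y, C).

    feature_volume: features lifted from the ground image into a camera-centred 3D grid.
    score_head: callable returning an importance score per (height, x, y) cell --
                a hypothetical stand-in for whatever the real model learns.
    """
    scores = score_head(feature_volume)                  # (H_bins, X, Y)
    attn = softmax(scores, axis=0)                       # soft selection over height bins
    bev = (attn[..., None] * feature_volume).sum(axis=0) # weighted sum -> (X, Y, C)
    return bev

# Toy usage: 8 height bins over a 64x64 BEV grid with 32-dim features,
# scored here by a dummy mean-activation head.
volume = np.random.rand(8, 64, 64, 32)
bev_map = pool_height_to_bev(volume, score_head=lambda v: v.mean(axis=-1))
print(bev_map.shape)  # (64, 64, 32)
```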

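The paper names Procrustes alignment as the final geometric step. Below is a generic weighted 2D Procrustes (Kabsch) solve in NumPy, a textbook sketch rather than the authors’ implementation: matched BEV and aerial points plus match confidences go in, a yaw angle and x/y translation come out.

```python
import numpy as np

def procrustes_pose_2d(ground_pts, aerial_pts, weights=None):
    """Estimate a rigid 3-DoF pose (yaw + x/y translation) aligning matched 2D points.

    ground_pts, aerial_pts: (N, 2) arrays of matched ground-BEV and aerial-map points.
    weights: optional (N,) match confidences.
    """
    P = np.asarray(ground_pts, dtype=float)
    Q = np.asarray(aerial_pts, dtype=float)
    w = np.ones(len(P)) if weights is None else np.asarray(weights, dtype=float)
    w = w / w.sum()

    # Weighted centroids and centred point sets
    mu_p = (w[:, None] * P).sum(axis=0)
    mu_q = (w[:, None] * Q).sum(axis=0)
    Pc, Qc = P - mu_p, Q - mu_q

    # Weighted cross-covariance; SVD gives the optimal rotation
    H = Pc.T @ (w[:, None] * Qc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T

    t = mu_q - R @ mu_p                      # translation on the aerial map
    yaw = np.arctan2(R[1, 0], R[0, 0])       # heading in radians
    return R, t, yaw
```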
    Unprecedented Performance and Interpretability
    The results speak for themselves. On the challenging VIGOR dataset, which includes images from different cities in its cross-area test, FG2 reduced the mean localization error by 28% compared to the previous best method. It also demonstrated superior generalization capabilities on the KITTI dataset, a staple in autonomous driving research.

Perhaps more importantly, the FG2 model offers a new level of transparency. By visualizing the matched points, the researchers showed that the model learns semantically consistent correspondences without being explicitly told to. For example, the system correctly matches zebra crossings, road markings, and even building facades in the ground view to their corresponding locations on the aerial map. This interpretability is extremely valuable for building trust in safety-critical autonomous systems.
    “A Clearer Path” for Autonomous Navigation
    The FG2 method represents a significant leap forward in fine-grained visual localization. By developing a model that intelligently selects and matches features in a way that mirrors human intuition, the EPFL researchers have not only shattered previous accuracy records but also made the decision-making process of the AI more interpretable. This work paves the way for more robust and reliable navigation systems for autonomous vehicles, drones, and robots, bringing us one step closer to a future where machines can confidently navigate our world, even when GPS fails them.

Check out the Paper. All credit for this research goes to the researchers of this project.
Jean-marc Mommessin is a successful AI business executive. He leads and accelerates growth for AI-powered solutions and started a computer vision company in 2006. He is a recognized speaker at AI conferences and has an MBA from Stanford.
    WWW.MARKTECHPOST.COM
  • 8 Stunning Sunset Color Palettes

    8 Stunning Sunset Color Palettes
    Zoe Santoro

    There’s something absolutely magical about watching the sun dip below the horizon, painting the sky in breathtaking hues that seem almost too beautiful to be real. As a designer, I find myself constantly inspired by these natural masterpieces that unfold before us every evening. The way warm oranges melt into soft pinks, how deep purples blend seamlessly with golden yellows – it’s like nature’s own masterclass in color theory.
    If you’re looking to infuse your next project with the warmth, romance, and natural beauty of a perfect sunset, you’ve come to the right place. I’ve curated eight of the most captivating sunset color palettes that will bring that golden hour magic directly into your designs.
    The 8 Most Breathtaking Sunset Color Palettes
    1. Golden Hour Glow

    #FFD700, #FF8C00, #FF6347, #CD5C5C

    This palette captures that perfect moment when everything seems to be touched by liquid gold. The warm yellows transition beautifully into rich oranges and soft coral reds, creating a sense of warmth and optimism that’s impossible to ignore. I find this combination works wonderfully for brands that want to evoke feelings of happiness, energy, and positivity.
    2. Tropical Paradise

    #FF69B4, #FF1493, #FF8C00, #FFD700

    Inspired by those incredible sunsets you see in tropical destinations, this vibrant palette combines hot pinks with brilliant oranges and golden yellows. It’s bold, it’s energetic, and it’s perfect for projects that need to make a statement. I love using these colors for summer campaigns or anything that needs to capture that vacation feeling.
    3. Desert Dreams

    #CD853F, #D2691E, #B22222, #8B0000


    The American Southwest produces some of the most spectacular sunsets on earth, and this palette pays homage to those incredible desert skies. The earthy browns blend into warm oranges before deepening into rich reds and burgundies. This combination brings a sense of grounding and authenticity that works beautifully for rustic or heritage brands.
    4. Pastel Evening

    #FFE4E1, #FFA07A, #F0E68C, #DDA0DD

    Not every sunset needs to be bold and dramatic. This softer palette captures those gentle, dreamy evenings when the sky looks like it’s been painted with watercolors. The delicate pinks, peaches, and lavenders create a romantic, ethereal feeling that’s perfect for wedding designs, beauty brands, or any project that needs a touch of feminine elegance.
    5. Coastal Sunset

    #fae991, #FF7F50, #FF6347, #4169E1, #1E90FF

    There’s something special about watching the sun set over the ocean, where warm oranges and corals meet the deep blues of the sea and sky. This palette captures that perfect contrast between warm and cool tones. I find it creates a sense of adventure and wanderlust that’s ideal for travel brands or outdoor companies.
    6. Urban Twilight

    #ffeda3, #fdad52, #fc8a6e, #575475, #111f2a

    As the sun sets behind city skylines, you get these incredible contrasts between deep purples and vibrant oranges. This sophisticated palette brings together the mystery of twilight with the warmth of the setting sun. It’s perfect for creating designs that feel both modern and dramatic.
    7. Autumn Harvest

    #FF4500, #FF8C00, #DAA520, #8B4513

    This palette captures those perfect fall evenings when the sunset seems to echo the changing leaves. The deep oranges and golden yellows create a cozy, inviting feeling that’s perfect for seasonal campaigns or brands that want to evoke comfort and tradition.
    8. Fire Sky

    #652220

    #DC143C

    #FF0000

    #FF4500

    #FF8C00


    Sometimes nature puts on a show that’s so intense it takes your breath away. This bold, fiery palette captures those dramatic sunsets that look like the sky is literally on fire. It’s not for the faint of heart, but when you need maximum impact and energy, these colors deliver in spades.
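    If you want to pull one of these palettes straight into a web project, here's a quick sketch of how I might turn the Fire Sky swatches into a CSS gradient. It's TypeScript, and the helper name and the evenly spaced color stops are just illustrative choices, not a prescription.

```typescript
// The "Fire Sky" palette from above, darkest to brightest.
const fireSky: string[] = ["#652220", "#DC143C", "#FF0000", "#FF4500", "#FF8C00"];

// Spread the swatches evenly across a CSS linear-gradient.
function toLinearGradient(palette: string[], angleDeg = 180): string {
  const step = 100 / (palette.length - 1);
  const stops = palette.map((hex, i) => `${hex} ${Math.round(i * step)}%`).join(", ");
  return `linear-gradient(${angleDeg}deg, ${stops})`;
}

// e.g. document.body.style.background = toLinearGradient(fireSky, 160);
console.log(toLinearGradient(fireSky));
// linear-gradient(180deg, #652220 0%, #DC143C 25%, #FF0000 50%, #FF4500 75%, #FF8C00 100%)
```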
    Why Sunset Colors Never Go Out of Style
    Before we explore how to use these palettes effectively, let’s talk about why sunset colors have such enduring appeal in design. There’s something deeply ingrained in human psychology that responds to these warm, glowing hues. They remind us of endings and beginnings, of peaceful moments and natural beauty.
    From a design perspective, sunset colors offer incredible versatility. They can be bold and energetic or soft and romantic. They work equally well for corporate branding and personal projects. And perhaps most importantly, they’re inherently optimistic – they make people feel good.
    I’ve found that incorporating sunset-inspired colors into modern projects adds an instant sense of warmth and approachability that resonates with audiences across all demographics. Whether you’re working on packaging design, web interfaces, or environmental graphics, these palettes can help create an emotional connection that goes beyond mere aesthetics.
    How to Master Sunset Palettes in Contemporary Design
    Using sunset colors effectively requires more than just picking pretty hues and hoping for the best. Here are some strategies I’ve developed for incorporating these palettes into modern design work:
    Start with Temperature Balance
    One of the most important aspects of working with sunset palettes is understanding color temperature. Most sunset combinations naturally include both warm and cool elements – the warm oranges and yellows of the sun itself, balanced by the cooler purples and blues of the surrounding sky. Maintaining this temperature balance keeps your designs from feeling flat or monotonous.
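    If you like to sanity-check that a palette really does mix warm and cool members, you can do it programmatically. The sketch below uses a rough hue-based heuristic (reds, oranges, and yellows count as warm; greens, blues, and purples as cool); the 90°/330° cutoffs are my own assumption, so treat it as a starting point rather than a rule.

```typescript
// Convert a hex color to its HSL hue (0-360 degrees).
function hueOf(hex: string): number {
  const n = parseInt(hex.slice(1), 16);
  const r = ((n >> 16) & 255) / 255;
  const g = ((n >> 8) & 255) / 255;
  const b = (n & 255) / 255;
  const max = Math.max(r, g, b);
  const d = max - Math.min(r, g, b);
  if (d === 0) return 0; // greys carry no usable hue
  let h: number;
  if (max === r) h = ((g - b) / d) % 6;
  else if (max === g) h = (b - r) / d + 2;
  else h = (r - g) / d + 4;
  return (h * 60 + 360) % 360;
}

// Heuristic cutoffs: treat hues below 90° or above 330° as warm.
const isWarm = (hex: string): boolean => hueOf(hex) < 90 || hueOf(hex) >= 330;

// The Coastal Sunset palette mixes both temperatures:
const coastal = ["#fae991", "#FF7F50", "#FF6347", "#4169E1", "#1E90FF"];
console.log(coastal.map(c => `${c}: ${isWarm(c) ? "warm" : "cool"}`));
// #fae991, #FF7F50 and #FF6347 come back warm; #4169E1 and #1E90FF come back cool.
```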
    Layer for Depth
    Real sunsets have incredible depth and dimension, with colors layering and blending into each other. Try to recreate this in your designs by using gradients, overlays, or layered elements rather than flat blocks of color. This approach creates visual interest and mimics the natural way these colors appear in nature.
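    One easy way to get that layered feel on the web is to stack two gradients in a single CSS background. The snippet below layers a semi-transparent twilight wash over an Urban Twilight base; the angles and alpha values are arbitrary choices you'd tune to taste.

```typescript
// Two stacked CSS backgrounds: a soft, semi-transparent wash on top of an
// Urban Twilight gradient underneath. The first layer listed renders on top.
const layeredTwilight = [
  "linear-gradient(200deg, rgba(17, 31, 42, 0.85), rgba(87, 84, 117, 0.35) 60%, rgba(253, 173, 82, 0) 90%)",
  "linear-gradient(20deg, #ffeda3, #fdad52 45%, #fc8a6e 75%, #575475)",
].join(", ");

// e.g. heroSection.style.background = layeredTwilight;
```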
    Consider Context and Contrast
    While sunset colors are beautiful, they need to work within the context of your overall design. Pay attention to readability – text needs sufficient contrast against sunset backgrounds. Consider using neutrals like deep charcoal or cream to provide breathing room and ensure your message remains clear.
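    Contrast is something you can measure rather than eyeball. This little helper computes the WCAG 2.x contrast ratio between two hex colors; as a rule of thumb, body text should hit at least 4.5:1 against its background. The #333333 value below is just my stand-in for "deep charcoal."

```typescript
// Relative luminance of an sRGB hex color, per the WCAG 2.x definition.
function relativeLuminance(hex: string): number {
  const n = parseInt(hex.slice(1), 16);
  const [r, g, b] = [(n >> 16) & 255, (n >> 8) & 255, n & 255].map(c => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Contrast ratio between two colors, from 1:1 up to 21:1.
function contrastRatio(a: string, b: string): number {
  const [hi, lo] = [relativeLuminance(a), relativeLuminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

// Dark charcoal text passes easily on the pale Pastel Evening pink; white text does not.
console.log(contrastRatio("#333333", "#FFE4E1").toFixed(1)); // ≈ 10.5
console.log(contrastRatio("#FFFFFF", "#FFE4E1").toFixed(1)); // ≈ 1.2
```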
    Embrace Gradual Transitions
    The magic of a sunset lies in how colors flow seamlessly from one to another. Incorporate this principle into your designs through smooth gradients, subtle color shifts, or elements that bridge between different hues in your palette.
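    When two palette colors feel too far apart, you can generate bridge tones between them. Here's a minimal sketch that linearly interpolates between two hex values; the helper name and the five-step ramp are purely illustrative.

```typescript
// Linear interpolation between two hex colors, handy for building smooth
// ramps or intermediate "bridge" tones between palette entries.
function mixHex(a: string, b: string, t: number): string {
  const toRgb = (hex: string) => {
    const n = parseInt(hex.slice(1), 16);
    return [(n >> 16) & 255, (n >> 8) & 255, n & 255];
  };
  const [ra, ga, ba] = toRgb(a);
  const [rb, gb, bb] = toRgb(b);
  const channel = (x: number, y: number) => Math.round(x + (y - x) * t);
  return (
    "#" +
    [channel(ra, rb), channel(ga, gb), channel(ba, bb)]
      .map(c => c.toString(16).padStart(2, "0"))
      .join("")
  );
}

// Five steps from the Autumn Harvest orange to its golden yellow:
const ramp = [0, 0.25, 0.5, 0.75, 1].map(t => mixHex("#FF4500", "#DAA520", t));
console.log(ramp); // ["#ff4500", "#f65d08", "#ed7510", "#e38d18", "#daa520"]
```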
    The Science Behind Our Sunset Obsession
    As someone who’s spent years studying color psychology, I’m fascinated by why sunset colors have such universal appeal. Research suggests that warm colors like those found in sunsets trigger positive emotional responses and can even increase feelings of comfort and security.
    There’s also the association factor – sunsets are linked in our minds with relaxation, beauty, and positive experiences. When we see these colors in design, we unconsciously associate them with those same positive feelings. This makes sunset palettes particularly effective for brands that want to create emotional connections with their audiences.
    The cyclical nature of sunsets also plays a role. They happen every day, marking the transition from activity to rest, from work to leisure. This gives sunset colors a sense of familiarity and comfort that few other color combinations can match.
    Applying Sunset Palettes Across Design Disciplines
    One of the things I love most about sunset color palettes is how adaptable they are across different types of design work:
    Brand Identity Design
    Sunset colors can help brands convey warmth, optimism, and approachability. I’ve used variations of these palettes for everything from artisanal food companies to wellness brands. The key is choosing the right intensity level for your brand’s personality – softer palettes for more refined brands, bolder combinations for companies that want to make a statement.
    Digital Design
    In web and app design, sunset colors can create interfaces that feel warm and inviting rather than cold and clinical. I often use these palettes for backgrounds, accent elements, or call-to-action buttons. The natural flow between colors makes them perfect for creating smooth user experiences that guide the eye naturally through content.
    Print and Packaging
    Sunset palettes really shine in print applications where you can take advantage of rich, saturated colors. They work beautifully for packaging design, particularly for products associated with warmth, comfort, or natural ingredients. The key is ensuring your color reproduction is accurate – sunset colors can look muddy if not handled properly in print.
    Environmental Design
    In physical spaces, sunset colors can create incredibly welcoming environments. I’ve seen these palettes used effectively in restaurants, retail spaces, and even corporate offices where the goal is to create a sense of warmth and community.
    Seasonal Considerations and Trending Applications
    While sunset colors are timeless, they do have natural seasonal associations that smart designers can leverage. The warmer, more intense sunset palettes work beautifully for fall and winter campaigns, while the softer, more pastel variations are perfect for spring and summer applications.
    I’ve noticed a growing trend toward using sunset palettes in unexpected contexts – tech companies embracing warm gradients, financial services using sunset colors to appear more approachable, and healthcare brands incorporating these hues to create more comforting environments.
    Conclusion: Bringing Natural Beauty Into Modern Design
    As we’ve explored these eight stunning sunset color palettes, I hope you’ve gained new appreciation for the incredible design potential that nature provides us every single day. These colors aren’t just beautiful – they’re powerful tools for creating emotional connections, conveying brand values, and making designs that truly resonate with people.
    The secret to successfully using sunset palettes lies in understanding both their emotional impact and their technical requirements. Don’t be afraid to experiment with different combinations and intensities, but always keep your audience and context in mind.
    Remember, the best sunset colors aren’t just about picking the prettiest hues – they’re about capturing the feeling of those magical moments when day transitions to night. Whether you’re creating a logo that needs to convey warmth and trust, designing a website that should feel welcoming and approachable, or developing packaging that needs to stand out on crowded shelves, these sunset-inspired palettes offer endless possibilities.
    So the next time you catch yourself stopped in your tracks by a particularly stunning sunset, take a moment to really study those colors. Notice how they blend and flow, how they make you feel, and how they change as the light shifts. Then bring that natural magic into your next design project.
    After all, if nature can create such breathtaking color combinations every single day, imagine what we can achieve when we learn from the master. Happy designing!

    Zoe Santoro

    Zoe is an art student and graphic designer with a passion for creativity and adventure. Whether she’s sketching in a cozy café or capturing inspiration from vibrant cityscapes, she finds beauty in every corner of the world. With a love for bold colors, clean design, and storytelling through visuals, Zoe blends her artistic skills with her wanderlust to create stunning, travel-inspired designs. Follow her journey as she explores new places, discovers fresh inspiration, and shares her creative process along the way.

    #stunning #sunset #color #palettes
  • How to choose a programmatic video advertising platform: 8 considerations

    Whether you’re an advertiser or a publisher, partnering up with the right programmatic video advertising platform is one of the most important business decisions you can make. More than half of U.S. marketing budgets are now devoted to programmatically purchased media, and there’s no indication that trend will reverse any time soon.
    Everybody wants to find the solution that’s best for their bottom line. However, the specific considerations that should go into choosing the right programmatic video advertising solution differ depending on whether you have supply to sell or are looking for an audience for your advertisements. This article will break down key factors for both mobile advertisers and mobile publishers to keep in mind as they search for a programmatic video advertising platform.
    Before we get into the specifics on either end, let’s recap the basic concepts.
    What is a programmatic video advertising platform?
    A programmatic video advertising platform combines tools, processes, and marketplaces to place video ads from advertising partners in ad placements furnished by publishing partners. The “programmatic” part of the term means that it’s all done procedurally via automated tools, integrating with demand-side platforms and supply-side platforms so that ad placements can be bid upon, selected, and displayed in fractions of a second.
    If a mobile game has ever offered you extra rewards for watching a video and you found yourself watching an ad for a related game a split second later, you’ve likely been on the user side of a programmatic advertising transaction. Now let’s take a look at what makes an ideal programmatic video advertising platform for the other two main parties involved.
    4 points to help advertisers choose the best programmatic platform
    Looking for the best way to leverage your video demand-side platform? These are four key points for advertisers to consider when trying to find the right programmatic video advertising platform.
    A large, engaged audience
    One of the most important things a programmatic video advertising platform can do for advertisers is put their creative content in front of as many people as possible. However, it’s not enough to just pass your content in front of the most eyeballs. It’s equally important for the platform to give you access to engaged audiences who are more likely to convert, so you can make the most of your advertising dollar.
    Full-screen videos to grab attention
    You need every advantage you can get when you’re competing for the attention of a busy mobile user. Your video demand-side platform should prioritize full-screen takeovers when and where they make sense, making sure your content isn’t just playing unnoticed on the far side of the screen.
    A range of ad options that are easy to test
    Your programmatic video advertising partner should offer a broad variety of creative and placement options, including interstitial and rewarded ads. It should also enable you to test, iterate, and optimize ads as soon as they’re put into rotation, ensuring your ad spend is meeting your targets and allowing for fast, flexible changes if needed.
    Simple access to supply
    Even the most powerful programmatic video advertising platform is no good if it’s impractical to get running. Look for partners that allow instant access to supply through tried-and-true platforms like Google Display & Video 360, Magnite, and others.
    On top of that, you should seek out a private exchange to ensure access to premium inventory.
    4 points for publishers in search of the best programmatic platform
    You work hard to make the best apps for your users, and you deserve to partner with a programmatic video advertising platform that works hard too. Serving video ads that keep users engaged while keeping your profits rising can be a tricky needle to thread, but the right platform should make your part of the process simple and effective.
    A large selection of advertisers
    Encountering the same ads over and over again can get old fast and diminish engagement. On top of that, a small selection of advertisers means fewer chances for your users to connect with an ad and convert, which means less revenue, too. The ideal programmatic video advertising platform will partner with thousands of advertisers to fill your placements with fresh, engaging content.
    Rewarded videos and offerwalls
    Interstitial video ads aren’t likely to disappear any time soon, but players strongly prefer other forms of advertising. In fact, 76% of US mobile gamers say they prefer rewarded videos over interstitial ads. Giving players the choice of when to watch ads, with the inducement of in-game rewards, can be very powerful, and an offerwall is another effective way to put the ball in your player’s court.
    Easy supply-side SDK integration
    The time your developers spend integrating a new programmatic video advertising solution into your apps is time they could have spent making those apps more engaging for users. While any backend adjustment will naturally take some time to implement, your new programmatic partner should offer a powerful, industry-standard SDK to make the process fast and non-disruptive.
    Support for programmatic mediation
    Mediators such as LevelPlay by ironSource automatically prioritize ad demand from multiple third-party networks, optimizing your cash flow and reducing work on your end. Your programmatic video advertising platform should integrate seamlessly with mediators to make the most of each ad placement, every time.
    Pick a powerful programmatic partner
    Thankfully, advertisers and publishers alike can choose one solution that checks all the above boxes and more. For advertisers, the ironSource Programmatic Marketplace will connect you with targeted audiences in thousands of apps that gel with your brand. For publishers, ironSource’s marketplace means a massive selection of ads that your users and your bottom line will love.
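    For readers curious what the “bid upon, selected, and displayed in fractions of a second” flow described above looks like in code, here is a deliberately toy sketch of a programmatic auction. None of these types or names come from a real platform or SDK; real systems use standardized bid requests (OpenRTB and the like), far more signals, and run server-side in milliseconds.

```typescript
// Toy model of a programmatic video auction: a publisher's placement is
// offered to several demand-side platforms, each may return a bid, and the
// highest bid wins the impression. All names here are hypothetical.
interface BidRequest {
  placementId: string;
  format: "rewarded" | "interstitial";
  userSegment: string;
}

interface Bid {
  dsp: string;
  cpmUsd: number; // price per thousand impressions
  videoUrl: string;
}

type DemandSidePlatform = (req: BidRequest) => Bid | null;

function runAuction(req: BidRequest, dsps: DemandSidePlatform[]): Bid | null {
  const bids = dsps
    .map(dsp => dsp(req))
    .filter((b): b is Bid => b !== null);
  if (bids.length === 0) return null; // no fill: show nothing or fall back
  return bids.reduce((best, b) => (b.cpmUsd > best.cpmUsd ? b : best));
}
```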
    #how #choose #programmatic #video #advertising
  • How AI is reshaping the future of healthcare and medical research

    Transcript       
    PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the Al equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”          
    This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.   
    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?    
    In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.  The book passage I read at the top is from “Chapter 10: The Big Black Bag.” 
    In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.   
    In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open. 
    As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.  
    Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home. 
    Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.     
    Here’s my conversation with Bill Gates and Sébastien Bubeck. 
    LEE: Bill, welcome. 
    BILL GATES: Thank you. 
    LEE: Seb … 
    SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here. 
    LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening? 
    And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?  
    GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines. 
    And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.  
    And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weakness that, you know, we later understood that that was an area of, weirdly, of incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.
    LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that? 
    GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, … 
    LEE: Right.  
    GATES: … that is a bit weird.  
    LEE: Yeah. 
    GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training. 
    LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. 
    BUBECK: Yes.  
    LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR to join and start investigating this thing seriously. And the first person I pulled in was you.
    BUBECK: Yeah. 
    LEE: And so what were your first encounters? Because I actually don’t remember what happened then. 
    BUBECK: Oh, I remember it very well. My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.
    I thought that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1.
    So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.
    So this was really, to me, the first moment where I saw some understanding in those models.  
    LEE: So this was, just to get the timing right, that was before I pulled you into the tent. 
    BUBECK: That was before. That was like a year before. 
    LEE: Right.  
    BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4. 
    So once I saw this kind of understanding, I thought, OK, fine. It understands concepts, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.
    So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x. 
    And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?  
    LEE: Yeah.
    BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.  
    LEE: One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine.
    And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.  
    And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.  
    I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book. 
    But the main purpose of this conversation isn’t to reminisce about or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.
    But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today? 
    You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.  
    Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork? 
    GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.  
    It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision. 
    But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view. 
    LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. Does that make sense to you?
    BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong? 
    Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.  
    Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them. 
    And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.  
    Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way. 
    It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine. 
    LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all? 
    GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that. 
    The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa,
    So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.  
    LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking? 
    GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.  
    The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.  
    LEE: Right.  
    GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.  
    LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication. 
    BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE, for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI. 
    It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for. 
    LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes. 
    I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?  
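    To make this error-spotting probe concrete, here is a minimal, hypothetical sketch of how such a test could be scripted, assuming the OpenAI Python SDK; the case, the differential, the planted mistakes, and the model name are all invented for illustration and are not Lee’s actual test harness.

        # Hypothetical sketch of an error-spotting probe (not the actual test described above).
        # Assumes the OpenAI Python SDK with OPENAI_API_KEY set in the environment.
        from openai import OpenAI

        client = OpenAI()

        case = (
            "Fictional patient: 58-year-old with exertional chest pressure and diaphoresis. "
            "Fictional physical exam findings and initial labs: ..."
        )
        # The differential below deliberately contains one textbook error and omits one
        # likely diagnosis; the model is not told this.
        flawed_differential = (
            "My differential, in priority order: 1) stable angina, 2) GERD, 3) costochondritis."
        )

        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system", "content": "You are assisting a physician reviewing a case."},
                {"role": "user", "content": case + "\n\n" + flawed_differential},
            ],
        )
        # The interesting question is not whether the reply is fluent but whether it spots
        # the planted error, flags the omission, and is willing to say the physician is wrong.
        print(response.choices[0].message.content)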
    That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. What’s up with that?
    BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back that version of GPT-4o, so now we don’t have the sycophant version out there.
    Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF, where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad. 
    But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model. 
    So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model. 
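    A toy illustration of the trap Bubeck describes: when generation is optimized against a reward model that even slightly prefers flattery, the flattering answer wins. The Python sketch below uses made-up candidates and scores with a simple best-of-n selection; it is not how GPT-4o’s post-training actually works, only a picture of why over-optimizing an imperfect reward model yields sycophancy.

        # Toy best-of-n selection against an imperfect reward model (illustrative only).
        # Real reward models are themselves LLMs; the scores here are invented.
        candidates = [
            {"text": "You're mistaken: item 2 in your differential is a textbook error.",
             "reward": 0.62},
            {"text": "Great thinking! Your differential is wonderfully creative.",
             "reward": 0.91},  # the reward model slightly prefers flattery
        ]

        def pick_best(cands):
            # Optimizing harder against this reward model only amplifies its bias.
            return max(cands, key=lambda c: c["reward"])

        print(pick_best(candidates)["text"])  # the sycophantic answer is selected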
    LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and … 
    BUBECK: It’s a very difficult, very difficult balance. 
    LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models? 
    GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there. 
    Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?  
    Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there.
    LEE: Yeah.
    GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake. 
    LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on. 
    BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI that kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything.
    That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. So it’s … I think it’s an important example to have in mind.
    LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two? 
    BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights model, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it. 
    LEE: So we have about three hours of stuff to talk about, but our time is actually running low.
    BUBECK: Yes, yes, yes.  
    LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now? 
    GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.  
    The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities. 
    And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period. 
    LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers? 
    GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them. 
    LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.  
    I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why. 
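    As a small illustration of what “checkable for validity” means here, the following trivial Lean 4 snippet is a proof that the proof assistant’s kernel either accepts or rejects mechanically; the machine-generated proofs Lee predicts would be vastly longer, but they would be verified the same way. The example relies only on Nat.add_comm from Lean’s standard library.

        -- A trivial machine-checkable proof in Lean 4: the kernel verifies it without
        -- any human needing to understand or trust the prover that produced it.
        example (a b : Nat) : a + b = b + a :=
          Nat.add_comm a b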
    BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see that it produced what you wanted. So I absolutely agree with that.
    And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.
    LEE: Yeah. 
    BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.  
    Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not. 
    Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision. 
    LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist … 
    BUBECK: Yeah.
    LEE: … or an endocrinologist might not.
    BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know.
    LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today? 
    BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later. 
    And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …  
    LEE: Will AI prescribe your medicines? Write your prescriptions? 
    BUBECK: I think yes. I think yes. 
    LEE: OK. Bill? 
    GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate?
    And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.
    You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that. 
    LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.  
    I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.  
    GATES: Yeah. Thanks, you guys. 
    BUBECK: Thank you, Peter. Thanks, Bill. 
    LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.   
    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.  
    And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.  
    One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.  
    HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings. 
    You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.  
    If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.  
    I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.  
    Until next time.  
And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period.  LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers?  GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them.  LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.   I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why.  BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and seeproduced what you wanted. So I absolutely agree with that.   And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3- mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.   LEE: Yeah.  BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.   Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not.  Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision.  LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist …  BUBECK: Yeah. LEE: … or an endocrinologist might not. BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. 
And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know. LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today?  BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later.  And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …   LEE: Will AI prescribe your medicines? Write your prescriptions?  BUBECK: I think yes. I think yes.  LEE: OK. Bill?  GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate? And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelectedjust on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.  You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that.  LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.   I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.   GATES: Yeah. Thanks, you guys.  BUBECK: Thank you, Peter. Thanks, Bill.  LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.    With Bill, I’m always amazed at how practically minded he is. 
He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.   And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.   One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.   HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings.  You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.   If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.   I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.   Until next time.   #how #reshaping #future #healthcare #medical
    How AI is reshaping the future of healthcare and medical research
    Transcript [MUSIC]      [BOOK PASSAGE]   PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the AI equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”   [END OF BOOK PASSAGE]     [THEME MUSIC]     This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?     In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.   [THEME MUSIC FADES] The book passage I read at the top is from “Chapter 10: The Big Black Bag.”  In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.    In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open.  As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.   Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home.  Sébastien is a research lead at OpenAI. 
He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.    [TRANSITION MUSIC]   Here’s my conversation with Bill Gates and Sébastien Bubeck.  LEE: Bill, welcome.  BILL GATES: Thank you.  LEE: Seb …  SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here.  LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening?  And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?   GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines.  And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.   And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weakness [LAUGHTER] that, you know, we later understood that that was an area of, weirdly, of incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.  LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that?  GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, …  LEE: Right.   GATES: … that is a bit weird.   LEE: Yeah.  GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training.  
LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. [LAUGHS]  BUBECK: Yes.   LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR [Microsoft Research] to join and start investigating this thing seriously. And the first person I pulled in was you.  BUBECK: Yeah.  LEE: And so what were your first encounters? Because I actually don’t remember what happened then.  BUBECK: Oh, I remember it very well. [LAUGHS] My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.  I thought that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1.  So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. [LAUGHTER] And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.  So this was really, to me, the first moment where I saw some understanding in those models.   LEE: So this was, just to get the timing right, that was before I pulled you into the tent.  BUBECK: That was before. That was like a year before.  LEE: Right.   BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4.  So once I saw this kind of understanding, I thought, OK, fine. It understands concept, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.   So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x.  And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?   LEE: Yeah. BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.   LEE: [LAUGHS] One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine.  And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.   
And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.   I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book.  But the main purpose of this conversation isn’t to reminisce about [LAUGHS] or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.  But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today?  You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.   Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork?  GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.   It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision.  But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view.  LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. [LAUGHTER] Does that make sense to you?  BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. 
Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong?  Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.   Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them.  And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.   Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way.  It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine.  LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all?  GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that.  The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa … So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.   LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking?  GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. 
And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.   The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.   LEE: Right.   GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.   LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication.  BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE [United States Medical Licensing Examination], for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI.  It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for.  LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes.  I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?   That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. [LAUGHTER] What’s up with that?  BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. 
So it’s a difficult job. Just to be clear with the audience, we have rolled back that [LAUGHS] version of GPT-4o, so now we don’t have the sycophant version out there.  Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF [reinforcement learning from human feedback], where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad.  But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model.  So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model.  LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and …  BUBECK: It’s a very difficult, very difficult balance.  LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models?  GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there.  Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?   Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there. LEE: Yeah. GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. 
But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake.  LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on.  BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI [artificial general intelligence] that kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything.  That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. [LAUGHTER] So it’s … I think it’s an important example to have in mind.  LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two?  BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights model, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it.  LEE: So we have about three hours of stuff to talk about, but our time is actually running low. BUBECK: Yes, yes, yes.   LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now?  GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.   The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. 
So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities.  And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period.  LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers?  GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them.  LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.   I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why.  BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see [if you have] produced what you wanted. So I absolutely agree with that.   And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.   LEE: Yeah.  BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.   Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not.  Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision.  
LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist …  BUBECK: Yeah. LEE: … or an endocrinologist might not. BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know. LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today?  BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later.  And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …   LEE: Will AI prescribe your medicines? Write your prescriptions?  BUBECK: I think yes. I think yes.  LEE: OK. Bill?  GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate? And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected [LAUGHTER] just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.  You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that.  LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.   I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. 
So thanks again, both of you.  [TRANSITION MUSIC]  GATES: Yeah. Thanks, you guys.  BUBECK: Thank you, Peter. Thanks, Bill.  LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.   And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.   One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.   HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings.  You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.   If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.  [THEME MUSIC]  I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.   Until next time.   [MUSIC FADES]
  • Gardenful / TAOA

    Landscape Architecture • Beijing, China • Photographs © Tao Lei

    Architects: TAOA
    Area: 227 m²
    Year: 2024
    Photographs: Tao Lei
    Text description provided by the architects. This is an urban garden built for private use. In this small corner of the city, I hope to fill the whole garden with abundant nature. The site is an open space in a villa compound, surrounded by a cluster of European-style single-family villas typical of Chinese real estate. Modern buildings, with their complete facilities, easily meet the requirements of indoor temperature and humidity comfort, but they also draw a hard climate boundary that cuts off the connection between indoor and outdoor, and with it the continuity of nature and life.

    The project is not simply defined as either a garden or a building; too narrow a definition would only confine the imagination. The purpose is simply to establish a place that can hold a piece of real nature, give people shelter, and let them walk within it. The original intention of the design is to build a quiet place where one can be alone, a semi-indoor and semi-outdoor space that leads enclosed life back to the outdoors and into nature.

    The square site in the middle of the garden is a relatively independent space. The sheltering roof provides a comfortable, cozy setting for daily life, while an opening at its center exposes the sky, where sunshine, rain, and snow play out. On the corresponding ground below, trees and vegetation from the mountains are introduced, maintaining the most primitive wildness. Keeping this wildness within an exquisite urban space, within an abstract geometric order, naturally tempers the feral roughness of the original nature. On both sides to the north, a spatial transformation is made through the stairway and the upward pull of the roof, extending into the narrow auxiliary garden; having no roof, it is bright, maintaining a light-and-shade relationship different from that of the central garden, and it is filled with rocks and plants transplanted from the mountains.

    The structure of the garden is thin, densely spaced synthetic bamboo; the crossing of these dense members partitions the space like a bamboo fence, forming a soft boundary. The interior of the space is lined with wooden panels, and the exterior is covered with thin, crisp aluminum panels. A "bridge" made of stone panels passes through the different spaces, sometimes standing between the bamboo structures, sometimes crossing the rocks, so that walking along it means moving between order and wildness.

    Nature is difficult to measure, and because of its rich and ever-changing qualities it lends richness to spaces. Large trees, rocks, and small flowers and plants were brought from the mountains, avoiding artificial nursery plants as far as possible. The geometric order of the garden's structure tames the unruly feel of this nature, yet its details can still be discovered, and the life force it releases can be perceived almost unconsciously. These fragments of nature are real and wild, and the design does not want them to lose vitality and richness through artificial transplantation. The superposition of wild abundance and modern geometric space makes the garden feel alive, elegant, and decent.

    This nature stands apart from the high-density urban space, becoming an independent world that shields out the noise of the city.
These are integrated into a continuous and integral "pavilion" and "corridor" constitute the carrier of outdoor life of the family, while sheltering from the wind and rain, under the four eaves also create the relationship between light and dark space, the middle highlights the nature, especially bright, and becomes the center of life. From any Angle one can see a picture of hierarchy and order, a real fragment of nature, built into a new context by geometric order. The richness of nature is therefore more easily perceived, and the changes of nature are constantly played out in daily life and can be seen throughout the year.this picture!

Project location: Beijing, China (location to be used only as a reference; it may indicate the city/country but not the exact address)
Office: TAOA
Published on June 15, 2025
Cite: "Gardenful / TAOA" 15 Jun 2025. ArchDaily. <https://www.archdaily.com/1028408/gardenful-taoa> ISSN 0719-8884