• How This Small Los Angeles Space Uses Color To "Keep It Tight"

    Nichols Canyon cuts through the south side of the Hollywood Hills, stretching from Hollywood Boulevard in the south up to Mulholland Drive in the north. Made famous by David Hockney, whose 1980 painting of the canyon sold for just over $41M in 2020, the area remains a thriving artists' community. What better place for Elle Decor A-List designer Oliver Furth to build a "creative cottage" for his partner, The Culture Creative founder Sean Yashar?
    Furth and Yashar, who've been together 14 years, met in the industry and purchased their home seven years ago. When the lot next door—a pine-filled acre with a tiny house on it—came up for sale, the couple jumped at the chance to buy. "Anyone else would've torn it down and built something bigger," says Furth. "We replaced the windows and redid the kitchen and bathrooms, but we leaned into its size." Now drenched in Yashar's signature "eau de nil" pastel tones, the cottage embraces the character of its original 1940s structure while serving as a cutting-edge space for creativity.
    In the sitting room, a Philippe Starck chrome side table from the original Royalton Hotel, from 1988, holds a place of pride with a group of Peter Shire and Ron Arad vintage chairs and a Rachel Shillander pyramidal lamp. Art includes greats of LA's past and present: a Laddie John Dill mixed media, a Sam Falls tapestry, a Tom Holland metal relief, and a Strauss Bourque-LaFrance painting. (Photo: Kort Havens)
    "All of my work is really portraiture." —Oliver Furth
    "My clients are all muses to me," says Yashar, who provides consulting services for designers. "I have to be a good listener and understand who the client is and how they connect to decorative arts history, so I spend a lot of time researching. How else can I be an authority?" To that end, the space is designed to provide a moment to reflect and the fodder to rev into high gear in equal measure; to facilitate rest as much as the chance to recharge.
    Following in the footsteps of Albert Hadley and Tony Duquette (who once declared malachite a neutral), Furth color-drenched the space in a mint green. "That color is the envelope—that's what I call it," says Furth. "We kept it very tight by lacquering the floors, the walls, and the ceiling in that color. Even the cabinetry and the appliances are in that mint. It allows us to have this object-driven interior space by unifying everything with color."
    In the office, a vintage Joe D'Urso for Knoll desk, a Sam Maloof desk chair, a Christopher Prinz stool, and a felt-clad speaker by Studio AHEAD create a sleek composition under an Ingo Maurer chandelier. Art includes a triptych of photographs by David Benjamin Sherry and framed magazine ads, from 1990, for Yashar's parents' furniture store, Moda Italia. (Photo: Kort Havens)
    The seafoam hue unites not only the interior but also decades of decorative history: Yashar found that the architect Paul Williams, who worked in LA in the 1940s and '50s, used a similar shade in many projects. "There's a lot of history and narrative within this color that maybe not everyone will be able to know, but hopefully everyone can feel," says Yashar. Clocking in at roughly 1,000 square feet, the interior is now a mixture of millennial aesthetics, showcasing Yashar's love for design culture icons like Mario Buatta and Saul Bass. The entry sets the tone with its metal-and-glass Dutch door. A mixed-use meeting room offers a blend of contrasts, from Buatta-inspired shades in a Dickies-esque khaki twill to antique Chippendale chairs juxtaposed with 1990s Marc Newson tables. "All of my work is really portraiture," says Furth, "so this was an opportunity to help create this sort of portrait of Sean and his business."
    "Sometimes things just resonate...you just know when it's right." —Sean Yashar
    The sitting room features iconic design pieces, including a worn black leather sofa from the 1980s and a Philippe Starck table from the Royalton Hotel.
    Peter Shire and Ron Arad chairs are paired with conceptual furniture inspired by Dan Friedman. The kitchen celebrates postwar and '80s influences with Smeg appliances and works by Soft Baroque and Patrick Nagel, grounded by a custom table from Studio MUKA. "A lot of people know me for my interest in eighties and nineties design culture," says Yashar. "But when I think eighties or nineties, I don't think of one thing. I don't want to choose. So I want to have Joe D'Urso high-tech track lighting, and I want it against these Mario Buatta-style balloon shades. I like that duality."
    Outside, a Persian-inspired courtyard nods to Yashar's heritage while offering dining and lounging areas that showcase rare 1980s furniture, including a Peter Lane ceramic table and one-off mint-colored Richard Schultz seating. The courtyard's natural and faux vine murals create a satirical trompe-l'oeil effect, celebrating real-versus-virtual artistry. "I think we're both big believers in feeling," says Yashar. "Sometimes things just resonate. You can't really put your finger on it, but you just know that it's right."
    Sean Santiago, Deputy Editor: Sean Santiago is ELLE Decor's Deputy Editor, covering news, trends, and talents in interior design, hospitality and travel, culture, and luxury shopping. Since starting his career at an interior design firm in 2011, he has gone on to cover the industry for Vogue, Architectural Digest, Sight Unseen, PIN-UP, and Domino. He is the author of The Lonny Home (Weldon Owen, 2018), has produced scripted social content for brands including West Elm and StreetEasy, and is sometimes recognized on the street for his Instagram Reels series, #DanceToDecor.
    WWW.ELLEDECOR.COM
  • Koto creates unifying identity for Riot Games' League of Legends Championship Pacific

    From Tokyo to Taipei, Ho Chi Minh City to Seoul, Asia Pacific has long been a powerhouse in global esports. However, until now, the region's role in the League of Legends ecosystem lacked a central stage. That's all about to change with the inception of League of Legends Championship Pacific (LCP), Riot Games' ambitious new league for APAC.
    This is the game developer's most significant step yet in unifying its fragmented competitive landscape. To bring that vision to life, Riot partnered with brand and digital studio Koto to craft an identity that would speak to fans across cultures, languages, and gaming styles. The result is a full-spectrum design system shaped by the region, for the region, and built to evolve alongside the fast-moving world of esports.
    "At its heart, this project was about building pride and momentum," says Koto creative director Melissa Baillache. "Riot made it clear from the beginning: they wanted to give fans a brand that belongs to them."
    The LCP brand makes that intent clear from the get-go. Under the platform What We're Made Of, Koto constructed an identity rooted in regional passion, from Japan's Oshi-style fandoms to Southeast Asia's hyper-social esports culture. The line isn't just a slogan; it's a rallying cry, making it clear that APAC isn't just participating in global esports – it's here to lead.

    Designing for competition and community
    Visually, the brand needed to deliver across an incredible range of touchpoints, from social teasers and broadcast graphics to merch, memes, and fan-created content. The design system is centred on The Pinnacle: a five-player emblem representing unity and competitive intensity. Rendered in molten, 3D finishes, it's a symbol of regional ambition and the raw energy of top-tier play.
    That energy carries through a modular graphic system inspired by League's own gameplay, specifically the three in-game lanes that structure how matches unfold. This system offers a flexible yet distinctive frame for everything from stat-heavy broadcast overlays to highlight reels and in-arena hype moments.
    The motion language is just as purposeful. It's fast, focused, and reactive, designed to mirror the breakneck pace of in-game action and the way fans consume esports content in real-time. Whether counting down to kick-off or amplifying a clutch play, the system pulses with immediacy.

    Voice with impact
    Koto also worked to develop a voice that cuts through the noise of a crowded digital space. LCP's tone is raw, energised, and emotionally charged, with tight headlines and punchy copy that speaks directly to fans. It's not just branding; it's storytelling engineered for social moments, match trailers, post-game celebrations, and everything in between.
    The studio's verbal system extends to campaign slogans, hashtags, and commentary-style callouts, all of which work together to build momentum, fuel rivalries, and stoke regional pride.

    A custom typeface that sets the tempo
    At the core of the visual identity is LCP Ignite, a custom variable typeface designed to capture the rhythm and sharpness of League gameplay. Inspired by the 'fired up' ethos of competitive play, it flexes across every format, from match stats and player quotes to dynamic on-screen graphics.
    Given the region's linguistic diversity, the system also includes a suite of secondary typefaces (including Archivo, Kinkakuji and Thonglor Soi 4 Nr) to ensure legibility and consistency in languages across APAC. The goal here was to create a type system that speaks to everyone, from die-hard fans to casual mobile viewers, wherever they are.
    Fuel for fandom
    Koto's system goes beyond expectations by creating tools that grow with the community. A suite of icons and illustrations—all drawn from the strokes and geometry of LCP Ignite—provides creative fuel for Riot and fans. These assets flex across platforms, helping commentators, players, and creators build content that feels cohesive but never prescriptive.
    Gerald Torto, senior strategy director at Koto, says: "The goal with LCP was to frame the league not just as a competition, but as a cultural force. The energy and sentiment captured in the idea 'What We're Made Of' is a fitting platform. It gives APAC an unapologetic and proud voice that looks ahead to an exciting future."

    The scale of the system matches that ambition. Alongside Riot's APAC PubSports team, Koto delivered a complete brand toolkit with hundreds of assets – spanning physical, digital, and broadcast formats – built to scale across seasons, teams, and evolving tournament formats.
    Setting the stage for APAC's next era
    What sets this project apart is its commitment to longevity. LCP is a major infrastructure investment in the future of competitive gaming in APAC, and the brand has already begun rolling out teaser campaigns, test broadcasts, and live events, setting the stage for a new chapter in League of Legends.
    As Koto continues to expand its presence in the region, the LCP identity is a strong signal of what's possible when global ambition meets regional nuance. It's also a showcase of what the Sydney studio brings to the table: cross-cultural fluency, strategic storytelling, and a flair for building scalable, high-impact identities with soul.
    With the official launch of LCP now live, Riot and Koto are inviting the world to witness what the region is made of.
    WWW.CREATIVEBOOM.COM
    Koto creates unifying identity for Riot Games' League of Legends Championship Pacific
    From Tokyo to Taipei, Ho Chi Minh to Seoul, Asia Pacific has long been a powerhouse in global esports. However, until now, the region's role in the League of Legends ecosystem lacked a central stage. That's all about to change with the inception of League of Legends Championship Pacific (LCP), Riot Games' ambitious new league for APAC. This is the game developer's most significant step yet in unifying its fragmented competitive landscape. To bring that vision to life, Riot partnered with brand and digital studio Koto to craft an identity that would speak to fans across cultures, languages, and gaming styles. The result is a full-spectrum design system shaped by the region, for the region, and built to evolve alongside the fast-moving world of esports. "At its heart, this project was about building pride and momentum," says Koto creative director Melissa Baillache. "Riot made it clear from the beginning: they wanted to give fans a brand that belongs to them." The LCP brand makes that intent clear from the get-go. Under the platform What We're Made Of, Koto constructed an identity rooted in regional passion, from Japan's Oshi-style fandoms to Southeast Asia's hyper-social esports culture. The line isn't just a slogan; it's a rallying cry, making it clear that APAC isn't just participating in global esports – it's here to lead. Designing for competition and community Visually, the brand needed to deliver across an incredible range of touchpoints, from social teasers and broadcast graphics to merch, memes, and fan-created content. The design system is centred on The Pinnacle: a five-player emblem representing unity and competitive intensity. Rendered in molten, 3D finishes, it's a symbol of regional ambition and the raw energy of top-tier play. That energy carries through a modular graphic system inspired by League's own gameplay, specifically the three in-game lanes that structure how matches unfold. 
This system offers a flexible yet distinctive frame for everything from stat-heavy broadcast overlays to highlight reels and in-arena hype moments. The motion language is just as purposeful. It's fast, focused, and reactive, designed to mirror the breakneck pace of in-game action and the way fans consume esports content in real-time. Whether counting down to kick-off or amplifying a clutch play, the system pulses with immediacy. Voice with impact Koto also worked to develop a voice that cuts through the noise of a crowded digital space. LCP's tone is raw, energised, and emotionally charged, with tight headlines and punchy copy that speaks directly to fans. It's not just branding; it's storytelling engineered for social moments, match trailers, post-game celebrations, and everything in between. The studio's verbal system extends to campaign slogans, hashtags, and commentary-style callouts, all of which work together to build momentum, fuel rivalries, and stoke regional pride. A custom typeface that sets the tempo At the core of the visual identity is LCP Ignite, a custom variable typeface designed to capture the rhythm and sharpness of League gameplay. Inspired by the 'fired up' ethos of competitive play, it flexes across every format, from match stats and player quotes to dynamic on-screen graphics. Given the region's linguistic diversity, the system also includes a suite of secondary typefaces (including Archivo, Kinkakuji and Thonglor Soi 4 Nr) to ensure legibility and consistency in languages across APAC. The goal here was to create a type system that speaks to everyone, from die-hard fans to casual mobile viewers wherever they are. Fuel for fandom Koto's system goes beyond expectations by creating tools that grow with the community. A suite of icons and illustrations—all drawn from the strokes and geometry of LCP Ignite—provides creative fuel for Riot and fans. 
These assets flex across platforms, helping commentators, players, and creators build content that feels cohesive but never prescriptive. Gerald Torto, senior strategy director at Koto, says: "The goal with LCP was to frame the league not just as a competition, but as a cultural force. The energy and sentiment captured in the idea 'What We're Made Of' is a fitting platform. It gives APAC an unapologetic and proud voice that looks ahead to an exciting future." The scale of the system matches that ambition. Alongside Riot's APAC PubSports team, Koto delivered a complete brand toolkit with hundreds of assets – spanning physical, digital, and broadcast formats – built to scale across seasons, teams, and evolving tournament formats. Setting the stage for APAC's next era What sets this project apart is its commitment to longevity. LCP is a vast infrastructure investment into the future of competitive gaming in APAC, and the brand has already begun rolling out teaser campaigns, test broadcasts, and live events, setting the stage for a new chapter in League of Legends. As Koto continues to expand its presence in the region, the LCP identity is a strong signal of what's possible when global ambition meets regional nuance. It's also a showcase of what the Sydney studio brings to the table: cross-cultural fluency, strategic storytelling, and a flair for building scalable, high-impact identities with soul. With the official launch of LCP now live, Riot and Koto are inviting the world to witness what the region is made of.
  • AI storage: NAS vs SAN vs object for training and inference

    Artificial intelligence (AI) relies on vast amounts of data.
    Enterprises that take on AI projects, especially for large language models (LLMs) and generative AI, need to capture large volumes of data for model training as well as to store outputs from AI-enabled systems.
    That data, however, is unlikely to be in a single system or location. Customers will draw on multiple data sources, including structured data in databases and often unstructured data. Some of these information sources will be on-premises and others in the cloud.
    To deal with AI’s hunger for data, system architects need to look at storage across storage area networks (SAN), network attached storage (NAS), and, potentially, object storage.
    In this article, we look at the pros and cons of block, file and object storage for AI projects and the challenges of finding the right blend for organisations.

    The current generation of AI projects is rarely, if ever, characterised by a single source of data. Instead, generative AI models draw on a wide range of data, much of it unstructured. This includes documents, images, audio, video and computer code, to name a few.

    Everything about generative AI is about understanding relationships. You have the source data still in your unstructured data, either file or object, and your vectorised data sitting on block

    Patrick Smith, Pure Storage

    When it comes to training LLMs, the more data sources the better. But, at the same time, enterprises link LLMs to their own data sources, either directly or through retrieval augmented generation (RAG) that improves the accuracy and relevance of results. That data might be documents but can include enterprise applications that hold data in a relational database.
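The retrieval step in RAG can be illustrated with a toy sketch: chunks of enterprise documents are stored as embedding vectors, and the chunks closest to a query embedding are pulled into the LLM prompt as context. The hand-made three-dimensional vectors and the `retrieve` helper below are illustrative assumptions, not any particular vendor's API.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, store, k=2):
    """Return the k chunk texts whose embeddings are closest to the query."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item["vec"]), reverse=True)
    return [item["text"] for item in ranked[:k]]

# Tiny in-memory "vector store" with toy embeddings.
store = [
    {"text": "Invoice policy: net 30 days.", "vec": [0.9, 0.1, 0.0]},
    {"text": "Holiday calendar for 2024.",   "vec": [0.0, 0.2, 0.9]},
    {"text": "Payment terms for suppliers.", "vec": [0.8, 0.3, 0.1]},
]

# A query about payments ranks the two finance chunks above the calendar.
context = retrieve([1.0, 0.2, 0.0], store, k=2)
print(context)
```

In production the vectors come from an embedding model and live in a vector database; the point here is only that RAG reads from a separate, vectorised copy of the source data.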
    “A lot of AI is driven by unstructured data, so applications point at files, images, video, audio – all unstructured data,” says Patrick Smith, field chief technology officer EMEA at storage supplier Pure Storage. “But people also look at their production datasets and want to tie them to their generative AI projects.”
    This, he adds, includes adding vectorisation to databases, which is commonly supported by the main relational database suppliers, such as Oracle.

    For system architects who support AI projects, this raises the question of where best to store data. The simplest option would be to leave data sources as they are, but this is not always possible.
    This could be because data needs further processing, the AI application needs to be isolated from production systems, or current storage systems lack the throughput the AI application requires.
    In addition, vectorisation usually leads to large increases in data volumes – a tenfold increase is not unusual – and this puts more demands on production storage.
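A back-of-envelope calculation shows why vectorisation inflates volumes: every chunk of every document becomes a fixed-length float vector, plus index overhead. All figures below (document count, chunks per document, 1,536 dimensions, a 1.5x index multiplier) are assumed for illustration, not taken from the article.

```python
def vector_store_bytes(n_docs, chunks_per_doc, dims, bytes_per_float=4, overhead=1.5):
    """Raw embedding footprint for a corpus, with an index-overhead multiplier."""
    vectors = n_docs * chunks_per_doc
    return int(vectors * dims * bytes_per_float * overhead)

# Assumed corpus: one million documents, ~20 chunks each, 1,536-dim float32 vectors.
embeddings = vector_store_bytes(n_docs=1_000_000, chunks_per_doc=20, dims=1536)
print(f"{embeddings / 2**30:.1f} GiB of vectors")
```

Under these assumptions the vectors alone run to roughly 170 GiB, on top of the source documents, which is why production storage feels the squeeze.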
    This means that storage needs to be flexible and able to scale, and AI project data-handling requirements differ at each stage. Training demands large volumes of raw data; inference – running the model in production – might not require as much data but needs higher throughput and minimal latency.
    Enterprises tend to keep the bulk of their unstructured data on file-access NAS storage. NAS has the advantages of being relatively low cost and easier to manage and scale than alternatives such as direct-attached storage (DAS) or block-access SAN storage.
    Structured data is more likely to be block storage. Usually this will be on a SAN, although direct attached storage might be sufficient for smaller AI projects.
    Here, achieving the best performance – in terms of IOPS and throughput from the storage array – offsets the greater complexity of SAN. Enterprise production systems, such as enterprise resource planning (ERP) and customer relationship management (CRM), will use SAN or DAS to store their data in database files. So, in practice, data for AI is likely to be drawn from SAN and NAS environments.
    “AI data can be stored either in NAS or SAN. It’s all about the way the AI tools want or need to access the data,” says Bruce Kornfeld, chief product officer at StorMagic. “You can store AI data on a SAN, but AI tools won’t typically read the blocks. They’ll use a type of file access protocol to get to the block data.”
    It is not necessarily the case that one protocol will be better than the other. It depends very much on the nature of the data sources and on the output of the AI system.
    For a primarily document or image-based AI system, NAS might be fast enough. For an application such as autonomous driving or surveillance, systems might use a SAN or even high-speed local storage.
    Again, data architects will also need to distinguish between training and inference phases of their projects and consider whether the overhead of moving data between storage systems outweighs performance benefits, especially in training.

    This has led some organisations to look at object storage as a way of unifying data sources for AI. Object storage is increasingly in use among enterprises, and not just in the cloud – on-premises object stores are gaining market share too.
    Object has some advantages for AI, not least its flat structure and global namespace, relatively low management overheads, ease of expansion and low cost.
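One reason for the low management overhead is the flat namespace: an object store has no directory tree to manage, so a file hierarchy collapses into plain keys under a prefix. The `to_object_key` helper below is a hypothetical sketch of that mapping, not a real object-store API.

```python
from pathlib import PurePosixPath

def to_object_key(bucket_prefix, file_path):
    """Flatten a POSIX file path into a single object key under a prefix.

    The "directories" survive only as slash-separated text inside the key;
    the store itself holds one flat key-value namespace.
    """
    p = PurePosixPath(file_path)
    parts = p.parts[1:] if p.is_absolute() else p.parts  # drop the leading "/"
    return f"{bucket_prefix}/{'/'.join(parts)}"

key = to_object_key("training-data", "/mnt/nas/images/cat_001.jpg")
print(key)  # training-data/mnt/nas/images/cat_001.jpg
```

Because keys are opaque strings, scaling out is a matter of hashing keys across nodes, with no filesystem metadata hierarchy to keep consistent.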
    Performance, however, has not been a strength for object storage. This has tended to make it more suited to tasks such as archiving than applications that demand low latency and high levels of data throughput.
    Suppliers are working to close the performance gap, however. Pure Storage and NetApp sell storage systems that can handle file and object and, in some cases, block too. These include Pure’s FlashBlade and hardware that runs NetApp’s ONTAP storage operating system. These technologies give storage managers the flexibility to use the best data formats without creating silos tied to specific hardware.
    Others, such as Hammerspace, with its Hyperscale NAS, aim to squeeze additional performance out of equipment that runs the network file system (NFS). This, they argue, prevents bottlenecks where storage fails to keep up with data-hungry graphics processing units (GPUs).

    But until better-performing object storage systems become more widely available, or more enterprises move to universal storage platforms, AI is likely to use NAS, SAN, object and even DAS in combination.
    That said, the balance between the elements is likely to change during the lifetime of an AI project, and as AI tools and their applications evolve.
    At Pure, Smith has seen requests for new hardware for unstructured data, while block and vector database requirements are being met for most customers on existing hardware.
    “Everything about generative AI is about understanding relationships,” he says. “You have the source data still in your unstructured data, either file or object, and your vectorised data sitting on block.”

    Read more about AI and storage

    Storage technology explained: AI and data storage: In this guide, we examine the data storage needs of artificial intelligence, the demands it places on data storage, the suitability of cloud and object storage for AI, and key AI storage products.
    Storage technology explained: Vector databases at the core of AI: We look at the use of vector data in AI and how vector databases work, plus vector embedding, the challenges for storage of vector data and the key suppliers of vector database products.
    WWW.COMPUTERWEEKLY.COM
  • Asia’s Tech Renaissance: Our Interview with Dr. Lu Gang on BEYOND Expo’s Global Ambition

    Five years ago, launching a tech conference during a global lockdown might’ve seemed delusional. Dr. Lu Gang calls it “stupid enough,” but the result, BEYOND Expo, is anything but. Against a backdrop of shuttered borders and empty exhibition halls, Lu didn’t just push ahead; he built a stage for Asia’s technological identity to finally perform, not as a supporting act, but as the main event.
    BEYOND wasn’t born out of convenience. It was a counter-punch to a persistent imbalance. Asia, rich with innovation in robotics, AI, mobility, and biotech, lacked a unifying platform. The US had CES and SXSW. Europe had Slush and IFA. Asia? Fragmented, regionally siloed, and globally underrepresented. Lu saw this gap firsthand – founders with world-class ideas were treated as footnotes at global expos, buried among exhibitors, their stories lost in translation, both literally and figuratively.

    He chose Macau. Not because it was a tech hub, but because it wasn’t. Culturally and linguistically neutral ground. Grand hotels. Efficient infrastructure. A place you could sell as Asia’s Vegas. And for Lu, it was more than geographic convenience. It was symbolism. A clean slate for a clean break from foreign legacy formats that never really fit Asia’s voice anyway.
    When we sat down with him, Lu was unfiltered, casually peeling back the layers of his vision like someone who’s spent years explaining why it matters and still hasn’t run out of reasons. “There’s no platform that reorganizes Asian innovation for a global audience,” he said. “You go to CES or Web Summit and the most exciting founders from Asia are just… missing. They’re in the crowd, not on stage.”

    The ambition isn’t subtle. BEYOND wants to be where the next big thing isn’t just shown off – it’s unveiled, discussed, and celebrated. A place where Plaud, a rising hardware startup with AI chops, gets more than a 3×3 booth on a back wall. “He’s a superstar,” Lu said of the founder. “But if he went to CES, he’d just be another exhibitor.”
    There’s a wild idealism to it all, but it’s grounded in grit. Building a cross-border tech expo in Asia means navigating linguistic hurdles, cultural nuance, and vastly different industrial priorities. Korea, Japan, China, and Southeast Asia don’t naturally sync. Getting them to share a stage, let alone a conversation, takes more than ambition. It takes trust. And Lu has been earning it by showing up consistently for five years – arguably the toughest five years in recent history.

    BEYOND isn’t just a parade of booths. It’s become known for its parties, its loosened-tie vibe, the mingling of founders and media poolside after panels. Lu laughs about it, but there’s strategy here. “People come for the day show,” he said, “but they stay for the experience. The real conversations happen after hours.”
    In its fifth year, BEYOND is growing up fast. International media are taking notice. More exhibitors are treating it as their product launch platform. There’s momentum, and Lu knows what to do with it. He wants BEYOND to become the destination in Asia where new tech gets unveiled first. Think CES, but in a region where hardware is king and software isn’t the only storyline.

    The cultural shift is overdue. Silicon Valley has long dictated the pulse of tech, but the future? It’s being prototyped in Shenzhen, Seoul, Tokyo. Asia’s startup scenes aren’t just growing, they’re diverging, forming identities shaped by local needs and global reach. BEYOND is trying to harness that chaos, give it choreography, and let the rest of the world watch.
    He’s already fielding interest from Brazil, Japan, the UAE. Each wants their own BEYOND. Not to copy, but to collaborate. It’s flattering. Overwhelming too. “We’re still a small team,” Lu said. “But we’re thinking about it.” There’s no rush. Scale too fast and you lose the soul. But the demand is telling: the world doesn’t want another CES. It wants a fresh script.
    WWW.YANKODESIGN.COM
    Asia’s Tech Renaissance: Our Interview with Dr. Lu Gang on BEYOND Expo’s Global Ambition
    Five years ago, launching a tech conference during a global lockdown might’ve seemed delusional. Dr. Lu Gang calls it “stupid enough,” but the result, BEYOND Expo, is anything but. Against a backdrop of shuttered borders and empty exhibition halls, Lu didn’t just push ahead; he built a stage for Asia’s technological identity to finally perform, not as a supporting act, but as the main event. BEYOND wasn’t born out of convenience. It was a counter-punch to a persistent imbalance. Asia, rich with innovation in robotics, AI, mobility, and biotech, lacked a unifying platform. USA had CES and SXSW. Europe had Slush and IFA. Asia? Fragmented, regionally siloed, and globally underrepresented. Lu saw this gap firsthand – founders with world-class ideas were treated as footnotes at global expos, buried among exhibitors, their stories lost in translation, both literally and figuratively. He chose Macau. Not because it was a tech hub, but because it wasn’t. Culturally and linguistically neutral ground. Grand hotels. Efficient infrastructure. A place you could sell as Asia’s Vegas. And for Lu, it was more than geographic convenience. It was symbolism. A clean slate for a clean break from foreign legacy formats that never really fit Asia’s voice anyway. When we sat down with him, Lu was unfiltered, casually peeling back the layers of his vision like someone who’s spent years explaining why it matters and still hasn’t run out of reasons. “There’s no platform that reorganizes Asian innovation for a global audience,” he said. “You go to CES or Web Summit and the most exciting founders from Asia are just… missing. They’re in the crowd, not on stage.” The ambition isn’t subtle. BEYOND wants to be where the next big thing isn’t just shown off – it’s unveiled, discussed, and celebrated. A place where Plaud, a rising hardware startup with AI chops, gets more than a 3×3 booth on a back wall. “He’s a superstar,” Lu said of the founder. 
    “But if he went to CES, he’d just be another exhibitor.” There’s a wild idealism to it all, but it’s grounded in grit. Building a cross-border tech expo in Asia means navigating linguistic hurdles, cultural nuance, and vastly different industrial priorities. Korea, Japan, China, and Southeast Asia don’t naturally sync. Getting them to share a stage, let alone a conversation, takes more than ambition. It takes trust. And Lu has been earning it by showing up consistently for five years – arguably the toughest five in recent history.

    BEYOND isn’t just a parade of booths. It’s become known for its parties, its loosened-tie vibe, the mingling of founders and media poolside after panels. Lu laughs about it, but there’s strategy here. “People come for the day show,” he said, “but they stay for the experience. The real conversations happen after hours.”

    In its fifth year, BEYOND is growing up fast. International media are taking notice. More exhibitors are treating it as their product launch platform. There’s momentum, and Lu knows what to do with it. He wants BEYOND to become the destination in Asia where new tech gets unveiled first. Think CES, but in a region where hardware is king and software isn’t the only storyline.

    The cultural shift is overdue. Silicon Valley has long dictated the pulse of tech, but the future? It’s being prototyped in Shenzhen, Seoul, and Tokyo. Asia’s startup scenes aren’t just growing, they’re diverging, forming identities shaped by local needs and global reach. BEYOND is trying to harness that chaos, give it choreography, and let the rest of the world watch.

    He’s already fielding interest from Brazil, Japan, and the UAE. Each wants its own BEYOND. Not to copy, but to collaborate. It’s flattering. Overwhelming too. “We’re still a small team,” Lu said. “But we’re thinking about it.” There’s no rush. Scale too fast and you lose the soul. But the demand is telling: the world doesn’t want another CES.
    It wants a fresh script.
    The post Asia’s Tech Renaissance: Our Interview with Dr. Lu Gang on BEYOND Expo’s Global Ambition first appeared on Yanko Design.
  • Digital Domain Goes Retro-Futuristic with Robots on ‘The Electric State’ VFX

    In The Electric State, based on a graphic novel by Swedish artist Simon Stålenhag, an orphaned teenager in an alternate version of the 1990s, after a robot uprising, goes on a quest across the American West, with a cartoon-inspired robot, a smuggler, and his sidekick, to find her long-lost brother. Adapting this sci-fi adventure for Netflix were Joe and Anthony Russo; their film stars Millie Bobby Brown, Chris Pratt, Stanley Tucci, Giancarlo Esposito and a cast of CG automatons voiced by the likes of Woody Harrelson, Alan Tudyk, Hank Azaria, and Anthony Mackie.  Overseeing the visual effects, which surpassed what the Russos had to deal with during their halcyon MCU days, was Matthew Butler, who turned to the venerable Digital Domain.
    As the main vendor, the studio was responsible for producing 61 character builds, 480 assets, and over 850 shots. “It was one of the biggest projects that I’ve done in terms of sheer volumes of assets, shots and characters,” states Joel Behrens, VFX Supervisor, Digital Domain.  “Our wonderful asset team did the 61 characters we were responsible for and had to ingest another 46 characters from other facilities.  We didn’t do any major changes. It was pushing our pipeline to the limits it could handle, especially with other shows going on. We took up a lot of disk space and had the ability to expand and contract the Renderfarm with cloud machines as well.”
    In researching for the show, Digital Domain visited Boston Dynamics to better understand the technological advancements in robotics, and what structures, motions, and interactions were logical and physically plausible.  “There is a certain amount of fake engineering that goes into some of these things,” notes Behrens.  “We’re not actually building these robots to legitimately function in the real world but have to be visibly believable that they can actually pull some of this stuff off.”  The starting point is always the reference material provided by the client.  “Is there a voice that I need to match to?” notes Liz Bernard, Animation Supervisor, Digital Domain.  “Is there any physical body reference either from motion reference actors in the plate or motion capture? We had a big mix of that on the show.  Some of our characters couldn’t be mocapped at all while others could but we had to modify the performance considerably.  We were also looking at the anatomy of each one of these robots to see what their physical capabilities are.  Can they run or jump?  Because that’s always going to tie tightly with the personality.  Your body in some ways is your personality.  We’re trying to figure out how do we put the actor’s voice on top of all these physical limitations in a way that feels cohesive.  It doesn’t happen overnight.” 

    The character design of Cosmo was retained from the graphic novel despite not being feasible to engineer in reality.  “His feet are huge,” laughs Bernard.  “We had to figure out how to get him to walk in a way that felt normal and put the joints in the right spots.” Emoting was mainly achieved through physicality.  “He does have these audio clips from the Kid Cosmo cartoon that he can use to help express himself verbally, but most of it is pantomime,” observes Bernard.  “There is this great scene between Cosmo and Michelle that occurs right after she crashes the car, and Cosmo is still trying to convince her who he is and why she should go off on this great search for her brother across the country.   We were trying to get some tough nuanced acting into these shots with a subtle head tilt or a little bit of a slump in the shoulders.”  A green light was inserted into the eyes.  “Matthew Butler likes robotic stuff and anything that we could do to make Cosmo feel more grounded in reality was helpful,” observes Behrens.  “We also wanted to prevent anyone from panicking and giving Cosmo a more animated face or allowing him to speak dialogue. We started off with a constant light at the beginning and then added this twinkle and glimmer in his eye during certain moments. We liked that and ended up putting it in more places throughout the film. Everybody says that the eyes are the windows to the soul so giving Cosmo something rather than a dark black painted spot on his face assisted in connecting with that character.” 

    Coming in four different sizes that fit inside one another - like a Russian doll - is Herman. Digital Domain looked after the eight-inch, four-foot and 20-foot versions while ILM was responsible for the 60-foot Herman that appears in the final battle.   “They were scaled up to a certain extent but consider that the joints on the 20-foot version of Herman versus the four-foot version need to be more robust and beefier because they’re carrying so much more weight,” remarks Bernard.  “We were focusing on making sure that the impact of each step rippled through the body in a way that made it clear how heavy a 20-foot robot carrying a van across a desert would be.  The smaller one can be nimbler and lighter on its feet.  There were similar physical limitations, but that weight was the big deal.”  Incorporated into the face of Herman is a retro-futuristic screen in the style of the 1980s and early 1990s CRT panels. “It has these RGB pixels that live under a thick plate of glass like your old television set,” explains Behrens.  “You have this beautiful reflective dome that goes over top of these cathode-ray-looking pixels that allowed us to treat it as a modern-day LED with the ability to animate his expressions, or if we wanted to, put symbols up. You could pixelize any graphical element and put it on Herman’s face.  We wanted to add a nonlinear decay into the pixels so when he changed expressions or a shape altered drastically you would have a slow quadratic decay of the pixels fading off as he switched expressions. That contributed a nice touch.”
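    The “slow quadratic decay” Behrens describes for Herman’s CRT-style pixels can be sketched in a few lines. This is a purely illustrative model of the idea, not Digital Domain’s actual tooling; the function name and the 12-frame fade window are hypothetical:

```python
# Illustrative sketch: a pixel eases from its current intensity toward a new
# target along a quadratic ease-out curve, so a lit pixel told to switch off
# lingers and fades over several frames instead of snapping to black.

def decayed_intensity(start, target, frames_since_change, decay_frames=12):
    """Ease from `start` toward `target` with a quadratic falloff."""
    t = min(frames_since_change / decay_frames, 1.0)  # normalized time 0..1
    ease = 1.0 - (1.0 - t) ** 2                       # quadratic ease-out
    return start + (target - start) * ease

# A fully lit pixel (1.0) told to turn off (0.0), sampled every 4 frames:
trail = [round(decayed_intensity(1.0, 0.0, f), 3) for f in range(0, 13, 4)]
# → [1.0, 0.444, 0.111, 0.0]
```

    The quadratic curve front-loads the change, which is why the fade reads as a ghost of the previous expression rather than a uniform crossfade.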

    One member of the robot cast is an iconic Planters mascot.  “Everybody knows who Mr. Peanut is and what he looks like, at least in North America,” observes Behrens.  “We had to go through a lot of design iterations of how his face should animate. It was determined that as a slightly older model of robot he didn’t have a lot of dexterity in his face. We were modelling him after Chuck E. Cheese and ShowBiz Pizza animatronics, so it was like a latex shell over the top of a mechanical understructure that drove his limited expressions. It allowed him to open and close his mouth and do some slight contractions at the corners, leaving most of the acting to his eyes, which did not have as many restrictions. The eyes had the ability to move quickly, and dart and blink like a human.”  The eyebrows were mounted on tracks that ran up and down a vertical slot on the front of the face.  “We could move the eyebrows up and down, and tilt them, but couldn’t do anything else,” states Bernard.  “It was trying to find a visual language that would get the acting across with Woody Harrelson’s amazing performance backing it up.  Then a lot of pantomime to go with that.”  Mr. Peanut moves in a jerky rather than smooth manner.  “Here is a funny little detail,” reveals Bernard.  “If you think about a peanut shell, he doesn’t have a chest or hips that can move independently.  We realized early on that in order to get him to walk without teeter-tottering everywhere, we were going to have to cut his butt off, reattach it and add a swivel control on the bottom.  We always kept that peanut silhouette intact; however, he could swivel his hips enough to walk forward without looking silly!”

    Other notable robots are Pop Fly and Perplexo; the former is modelled on a baseball player, the latter on a magician.  “We decided that Pop Fly would be the clunkiest of all robots because he was meant to be the elder statesman,” states Behrens.  “Pop Fly was partially falling apart, like his eye would drift, the mouth would hang open and sometimes he’d pass out for a second and wake back up.  Pop Fly was the scavenger hunter of the group who has seen stuff in the battles of the wasteland. We came up with a fun pitching mechanism so he could actually shoot the balls out of his mouth and of course, there was his trusty baseball bat that he could bat things with.” An interesting task was figuring out how to rig his model.  “We realized that there needed to be a lot of restrictions in his joints to make him look realistic based on how he was modelled in the first place,” notes Bernard.  “Pop Fly couldn’t rotate his head in every direction; he could turn it from side to side for the most part.  Pop Fly was on this weird structure with the four wheels on a scissor lift situation which meant that he always had to lean forward to get going and when stopping, would rock backwards.  It was fun to add all that detail in for him.”  Serving as Perplexo’s upper body is a theatrical box that he pops in and out of.  “Perplexo did not have a whole lot going on with his face,” remarks Bernard.  “It was a simple mechanical structure to his jaw, eyes, and eyelids; that meant we could push the performance with pantomime and crazy big gestures with the arms.”
    A major adversary in the film is The Marshall, portrayed by Giancarlo Esposito, who remotely controls a drone that projects the face of the operator onto a video screen.  “We started with a much smaller screen and had a cowboy motif for a while, but then they decided to have a unifying design for the drones that are operated by humans versus the robots,” remarks Behrens.  “Since the artist Simon Stålenhag had done an interesting, cool design with the virtual reality helmets with that long duckbill that the humans wear in the real world, the decision was made to mimic that head style of the drones to match the drone operators. Then you could put a screen on the front; that’s how you see Ted or The Marshall or the commando operators. It worked out quite nicely.”

    There was not much differentiation in the movement of the drones.  “The drones were meant to be in the vein of Stormtroopers, a horde of them being operated by people sitting in a comfortable room in Seattle,” observes Bernard. “So, they didn’t get as much effort and love as we put into the rest of the robots which had their own personalities. But for The Marshall, we have great mocap to start from Adam Croasdell. He played it a little bit cowboy, which was how Giancarlo Esposito was portraying the character as well, like a Western sheriff style vibe. You could hear that in the voice.  Listening to Giancarlo’s vocal performance gives you a lot of clues of what you should do when you’re moving that character around.  We put all of that together in the performance of The Marshall.”  
    Many environments had to either be created or augmented, such as the haunted amusement park known as Happyland. “The majority of the exterior of Happyland was a beautiful set that Dennis Gassner and his crew built in a parking lot of a waterslide park in Atlanta,” states Behrens.  “We would go there at night and freeze our butts off shooting for a good two and a half weeks in the cold Atlanta winter.  Most of our environmental work was doing distance extensions for that and adding atmospherics and fog.  We made all the scavenger robots that inhabit Happyland, which are cannibalistic robots that upgrade and hot rod themselves from random parts taken from the robots that they kill.  Once we get into the haunted house and fall into the basement, that’s where Dr. Amherst has his lab, which was modelled off a 1930s Frankenstein set, with Tesla coils, beakers, and lab equipment.  That was initially a set build we did onstage in Atlanta. But when we got into additional photography, they wanted to do this whole choreographed fight with The Marshall and Mr. Peanut. Because they didn’t know what actions we would need, we ended up building that entire lower level in CG.”

    At one point, all the exiled robots gather at the Mall within the Exclusion Zone.  “We were responsible for building a number of the background characters along with Storm Studios and ILM,” remarks Behrens.  “As for the mall, we didn’t have to do much to the environment.  There were some small things here and there that had to be modified.  We took over an abandoned mall in Atlanta and the art department dressed over half of it.” The background characters were not treated haphazardly. “We assigned two or three characters to each animator,” explains Bernard.  “I asked them to make a backstory and figure out who this guy is, what does he care about, and who is his mama?!  Put that into the performance so that each one feels unique and different because they have their own personalities.  There is a big central theme in the movie where the robots are almost more human than most of the humans you meet.  It was important to us that we put that humanity into their performances. As far as the Mall and choreography, Matthew, Joel and I knew that was going to be a huge challenge because this is not traditional crowd work where you can animate cycles and give it to a crowds department and say, ‘Have a bunch of people walking around.’  All these characters are different; they have to move differently and do their own thing.  We did a first pass on the big reveal in the Mall where you swing around and see the atrium where everybody is doing their thing.  We essentially took each character and moved them around like a chess piece to figure out if we had enough characters, if the color balanced nicely across all of them, and if it was okay for us to duplicate a couple of them.  We started to show that early to Matthew and Jeffrey Ford, and the directors to get buyoff on the density of the crowd.”   
    Considered one of the film’s signature sequences is the walk across the Exclusion Zone, where 20-foot Herman is carrying a Volkswagen van containing Michelle, Cosmo and Keats on his shoulder.  “We did a little bit of everything,” notes Behrens.  “We had plate-based shots because a splinter unit went out to Moab, Utah and shot a bunch of beautiful vistas for us.  For environments, there were shots where we had to do projections of plate material onto 3D geometry that we built. We had some DMPs that went into deep background. We also had to build out some actual legitimate 3D terrain for foreground and midground because a lot of the shots that had interaction with our hero characters rocking back and forth were shot on a bluescreen stage with a VW van on a large gimbal rig.  Then Liz had the fun job of trying to tie that into a giant robot walking with them.  We had to do some obvious tweaking to some of those motions. The establishing shots, where they are walking through this giant dead robot skeleton from who knows where, several of those were 100 percent CG. Once they get to the Mall, we had a big digital mall and a canyon area that had to look like they were once populated.”  Modifications were kept subtle.  “There were a couple of shots where we needed to move the plate VW van around a little bit,” states Bernard.  “You can’t do a lot without it starting to fall apart and lose perspective.”

    “The biggest challenge was the scale and sheer number of characters needed that played a large role interacting with our human actors and creating a believable world for them to live in,” reflects Behrens.  “The sequence that I had the most fun with was the mine sequence with Herman and Keats, as far as their banter back and forth. Some of our most expansive work was the Mall and the walk across the Exclusion Zone.  Those had the most stunning visuals.”  Bernard agrees with her colleague.  “I’m going to sound like a broken record.  For me, it was the scale and the sheer number of characters that we had to deal with and keeping them feeling that they were all different, but from the same universe.  Having the animators working towards that same goal was a big challenge.  We had quite a large team on this one.  And I do love that mine sequence.  There is such good banter between Keats and Herman, especially early on in that sequence.  It has so much great action to it.  We got to drop a giant claw on top of The Marshall that he had to fight his way out of.  That was a hard shot.  And of course, the Mall is stunning.  You can see all the care that went into creating that environment and all those characters.  It’s beautiful.”     

    Trevor Hogg is a freelance video editor and writer best known for composing in-depth filmmaker and movie profiles for VFX Voice, Animation Magazine, and British Cinematographer.
    WWW.AWN.COM
    Digital Domain Goes Retro-Futuristic with Robots on ‘The Electric State’ VFX
    In The Electric State, based on a graphic novel by Swedish artist Simon Stålenhag, after a robot uprising in an alternative version of the 1990s, an orphaned teenager goes on a quest across the American West, with a cartoon-inspired robot, a smuggler, and his sidekick, to find her long-lost brother. Adapting this sci-fi adventure for Netflix were Joe and Anthony Russo; their film stars Millie Bobby Brown, Chris Pratt, Stanley Tucci, Giancarlo Esposito and a cast of CG automatons voiced by the likes of Woody Harrelson, Alan Tudyk, Hank Azaria, and Anthony Mackie.

Overseeing the visual effects, which surpassed what the Russos had to deal with during their halcyon MCU days, was Matthew Butler, who turned to the venerable Digital Domain. As the main vendor, the studio was responsible for producing 61 character builds, 480 assets, and over 850 shots. “It was one of the biggest projects that I’ve done in terms of the sheer volume of assets, shots and characters,” states Joel Behrens, VFX Supervisor, Digital Domain. “Our wonderful asset team did the 61 characters we were responsible for and had to ingest another 46 characters from other facilities. We didn’t do any major changes. It was pushing our pipeline to the limits it could handle, especially with other shows going on. We took up a lot of disk space and had the ability to expand and contract the render farm with cloud machines as well.”

In researching the show, Digital Domain visited Boston Dynamics to better understand the technological advancements in robotics, and what structures, motions, and interactions were logical and physically plausible. “There is a certain amount of fake engineering that goes into some of these things,” notes Behrens. “We’re not actually building these robots to legitimately function in the real world, but they have to be visibly believable that they can actually pull some of this stuff off.” The starting point is always the reference material provided by the client.
“Is there a voice that I need to match to?” notes Liz Bernard, Animation Supervisor, Digital Domain. “Is there any physical body reference, either from motion reference actors in the plate or motion capture? We had a big mix of that on the show. Some of our characters couldn’t be mocapped at all, while others could but we had to modify the performance considerably. We were also looking at the anatomy of each one of these robots to see what their physical capabilities are. Can they run or jump? Because that’s always going to tie tightly with the personality. Your body in some ways is your personality. We’re trying to figure out how to put the actor’s voice on top of all these physical limitations in a way that feels cohesive. It doesn’t happen overnight.”

The character design of Cosmo was retained from the graphic novel despite not being feasible to engineer in reality. “His feet are huge,” laughs Bernard. “We had to figure out how to get him to walk in a way that felt normal and put the joints in the right spots.”

Emoting was mainly achieved through physicality. “He does have these audio clips from the Kid Cosmo cartoon that he can use to help express himself verbally, but most of it is pantomime,” observes Bernard. “There is this great scene between Cosmo and Michelle that occurs right after she crashes the car, and Cosmo is still trying to convince her who he is and why she should go off on this great search for her brother across the country. We were trying to get some tough, nuanced acting into these shots with a subtle head tilt or a little bit of a slump in the shoulders.”

A green light was inserted into the eyes. “Matthew Butler likes robotic stuff, and anything that we could do to make Cosmo feel more grounded in reality was helpful,” observes Behrens. “We also wanted to prevent anyone from panicking and giving Cosmo a more animated face or allowing him to speak dialogue.
We started off with a constant light at the beginning and then added this twinkle and glimmer in his eye during certain moments. We liked that and ended up putting it in more places throughout the film. Everybody says that the eyes are the windows to the soul, so giving Cosmo something rather than a dark black painted spot on his face assisted in connecting with that character.”

Coming in four different sizes that fit inside one another, like a Russian doll, is Herman. Digital Domain looked after the eight-inch, four-foot and 20-foot versions, while ILM was responsible for the 60-foot Herman that appears in the final battle. “They were scaled up to a certain extent, but consider that the joints on the 20-foot version of Herman versus the four-foot version need to be more robust and beefier because they’re carrying so much more weight,” remarks Bernard. “We were focusing on making sure that the impact of each step rippled through the body in a way that made it clear how heavy a 20-foot robot carrying a van across a desert would be. The smaller one can be nimbler and lighter on its feet. There were similar physical limitations, but that weight was the big deal.”

Incorporated into the face of Herman is a retro-futuristic screen in the style of 1980s and early 1990s CRT panels. “It has these RGB pixels that live under a thick plate of glass like your old television set,” explains Behrens. “You have this beautiful reflective dome that goes over top of these cathode-ray-looking pixels that allowed us to treat it as a modern-day LED with the ability to animate his expressions, or if we wanted to, put symbols up. You could pixelize any graphical element and put it on Herman’s face. We wanted to add a nonlinear decay into the pixels, so when he changed expressions or a shape altered drastically you would have a slow quadratic decay of the pixels fading off as he switched expressions.
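The quadratic pixel decay Behrens describes can be sketched in a few lines. This is purely an illustration of the idea, not Digital Domain's actual tooling; the function name and the 12-frame decay window are assumptions. Each pixel that goes dark holds its value at the moment the expression changes, then eases to zero along a quadratic curve instead of switching off instantly:

```python
def decayed_intensity(start: float, frames_since_change: int,
                      decay_frames: int = 12) -> float:
    """Quadratic fade of a CRT-style pixel after an expression change.

    The pixel keeps its full value at frame 0 and reaches zero after
    `decay_frames` frames; the (1 - t)**2 curve slows the fade as it
    nears zero, giving a phosphor-like afterglow rather than a hard
    cutoff. (Hypothetical sketch, not the production implementation.)
    """
    if frames_since_change >= decay_frames:
        return 0.0
    t = frames_since_change / decay_frames  # normalized time, 0..1
    return start * (1.0 - t) ** 2           # quadratic falloff
```

On a face built from such pixels, newly lit pixels would snap on while the old shape lingers briefly as it fades, which is the afterglow effect described in the quote.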
That contributed a nice touch.”

One member of the robot cast is an iconic Planters mascot. “Everybody knows who Mr. Peanut is and what he looks like, at least in North America,” observes Behrens. “We had to go through a lot of design iterations of how his face should animate. It was determined that, as a slightly older model of robot, he didn’t have a lot of dexterity in his face. We were modelling him after Chuck E. Cheese and ShowBiz Pizza animatronics, so it was like a latex shell over the top of a mechanical understructure that drove his limited expressions. It allowed him to open and close his mouth and do some slight contractions at the corners, leaving most of the acting to his eyes, which did not have as many restrictions. The eyes had the ability to move quickly, and dart and blink like a human.”

The eyebrows were mounted on tracks that ran up and down a vertical slot on the front of the face. “We could move the eyebrows up and down, and tilt them, but couldn’t do anything else,” states Bernard. “It was trying to find a visual language that would get the acting across with Woody Harrelson’s amazing performance backing it up. Then a lot of pantomime to go with that.”

Mr. Peanut moves in a jerky rather than smooth manner. “Here is a funny little detail,” reveals Bernard. “If you think about a peanut shell, he doesn’t have a chest or hips that can move independently. We realized early on that in order to get him to walk without teeter-tottering everywhere, we were going to have to cut his butt off, reattach it and add a swivel control on the bottom. We always kept that peanut silhouette intact; however, he could swivel his hips enough to walk forward without looking silly!”

Other notable robots are Pop Fly and Perplexo; the former is modelled on a baseball player, the latter on a magician. “We decided that Pop Fly would be the clunkiest of all the robots because he was meant to be the elder statesman,” states Behrens.
“Pop Fly was partially falling apart, like his eye would drift, the mouth would hang open, and sometimes he’d pass out for a second and wake back up. Pop Fly was the scavenger hunter of the group who has seen stuff in the battles of the wasteland. We came up with a fun pitching mechanism so he could actually shoot the balls out of his mouth, and of course, there was his trusty baseball bat that he could bat things with.”

An interesting task was figuring out how to rig his model. “We realized that there needed to be a lot of restrictions in his joints to make him look realistic based on how he was modelled in the first place,” notes Bernard. “Pop Fly couldn’t rotate his head in every direction; he could turn it from side to side for the most part. Pop Fly was on this weird structure with four wheels on a scissor-lift situation, which meant that he always had to lean forward to get going and, when stopping, would rock backwards. It was fun to add all that detail in for him.”

Serving as Perplexo’s upper body is a theatrical box that he pops in and out of. “Perplexo did not have a whole lot going on with his face,” remarks Bernard. “It was a simple mechanical structure to his jaw, eyes, and eyelids; that meant we could push the performance with pantomime and crazy big gestures with the arms.”

A major adversary in the film is The Marshall, portrayed by Giancarlo Esposito, who remotely controls a drone that projects the face of its operator onto a video screen. “We started with a much smaller screen and had a cowboy motif for a while, but then they decided to have a unifying design for the drones that are operated by humans versus the robots,” remarks Behrens. “Since the artist Simon Stålenhag had done an interesting, cool design with the virtual reality helmets, with that long duckbill, that the humans wear in the real world, the decision was made to mimic that head style for the drones to match the drone operators.
Then you could put a screen on the front; that’s how you see Ted [Jason Alexander] or The Marshall or the commando operators. It worked out quite nicely.”

There was not much differentiation in the movement of the drones. “The drones were meant to be in the vein of Stormtroopers, a horde of them being operated by people sitting in a comfortable room in Seattle,” observes Bernard. “So, they didn’t get as much effort and love as we put into the rest of the robots, which had their own personalities. But for The Marshall, we had great mocap to start from Adam Croasdell. He played it a little bit cowboy, which was how Giancarlo Esposito was portraying the character as well, like a Western sheriff-style vibe. You could hear that in the voice. Listening to Giancarlo’s vocal performance gives you a lot of clues as to what you should do when you’re moving that character around. We put all of that together in the performance of The Marshall.”

Many environments had to either be created or augmented, such as the haunted amusement park known as Happyland. “The majority of the exterior of Happyland was a beautiful set that Dennis Gassner and his crew built in a parking lot of a waterslide park in Atlanta,” states Behrens. “We would go there at night and freeze our butts off shooting for a good two and a half weeks in the cold Atlanta winter. Most of our environmental work was doing distance extensions for that and adding atmospherics and fog. We made all the scavenger robots that inhabit Happyland, which are cannibalistic robots that upgrade and hot-rod themselves with random parts taken from the robots that they kill. Once we get into the haunted house and fall into the basement, that’s where Dr. Amherst has his lab, which was modelled off a 1930s Frankenstein set, with Tesla coils, beakers, and lab equipment. That was initially a set build we did onstage in Atlanta.
But when we got into additional photography, they wanted to do this whole choreographed fight with The Marshall and Mr. Peanut. Because they didn’t know what actions we would need, we ended up building that entire lower level in CG.”

At one point, all the exiled robots gather at the Mall within the Exclusion Zone. “We were responsible for building a number of the background characters along with Storm Studios and ILM,” remarks Behrens. “As for the mall, we didn’t have to do much to the environment. There were some small things here and there that had to be modified. We took over an abandoned mall in Atlanta and the art department dressed over half of it.”

The background characters were not treated haphazardly. “We assigned two or three characters to each animator,” explains Bernard. “I asked them to make a backstory and figure out who this guy is, what does he care about, and who is his mama?! Put that into the performance so that each one feels unique and different, because they have their own personalities. There is a big central theme in the movie where the robots are almost more human than most of the humans you meet. It was important to us that we put that humanity into their performances.

As far as the Mall and choreography, Matthew, Joel and I knew that was going to be a huge challenge, because this is not traditional crowd work where you can animate cycles and give it to a crowds department and say, ‘Have a bunch of people walking around.’ All these characters are different; they have to move differently and do their own thing. We did a first pass on the big reveal in the Mall where you swing around and see the atrium where everybody is doing their thing. We essentially took each character and moved them around like a chess piece to figure out if we had enough characters, if the color balanced nicely across all of them, and if it was okay for us to duplicate a couple of them.
We started to show that early to Matthew and Jeffrey Ford [Editor, Executive Producer], and the directors to get buyoff on the density of the crowd.”

One of the film’s signature sequences is the walk across the Exclusion Zone, where the 20-foot Herman carries a Volkswagen van containing Michelle, Cosmo and Keats on his shoulder. “We did a little bit of everything,” notes Behrens. “We had plate-based shots because a splinter unit went out to Moab, Utah and shot a bunch of beautiful vistas for us. For environments, there were shots where we had to do projections of plate material onto 3D geometry that we built. We had some DMPs [digital matte paintings] that went into deep background. We also had to build out some actual legitimate 3D terrain for foreground and midground, because a lot of the shots that had interaction with our hero characters rocking back and forth were shot on a bluescreen stage with a VW van on a large gimbal rig. Then Liz had the fun job of trying to tie that into a giant robot walking with them. We had to do some obvious tweaking to some of those motions. The establishing shots, where they are walking through this giant dead robot skeleton from who knows where, several of those were 100 percent CG. Once they get to the Mall, we had a big digital mall and a canyon area that had to look like they were once populated.”

Modifications were kept subtle. “There were a couple of shots where we needed to move the plate VW van around a little bit,” states Bernard. “You can’t do a lot without it starting to fall apart and lose perspective.”

“The biggest challenge was the scale and sheer number of characters needed that played a large role interacting with our human actors, and creating a believable world for them to live in,” reflects Behrens. “The sequence that I had the most fun with was the mine sequence with Herman and Keats, as far as their banter back and forth. Some of our most expansive work was the Mall and the walk across the Exclusion Zone.
Those had the most stunning visuals.”

Bernard agrees with her colleague. “I’m going to sound like a broken record. For me, it was the scale and the sheer number of characters that we had to deal with, and keeping them feeling that they were all different, but from the same universe. Having the animators working towards that same goal was a big challenge. We had quite a large team on this one. And I do love that mine sequence. There is such good banter between Keats and Herman, especially early on in that sequence. It has so much great action to it. We got to drop a giant claw on top of The Marshall that he had to fight his way out of. That was a hard shot. And of course, the Mall is stunning. You can see all the care that went into creating that environment and all those characters. It’s beautiful.”

Trevor Hogg is a freelance video editor and writer best known for composing in-depth filmmaker and movie profiles for VFX Voice, Animation Magazine, and British Cinematographer.
  • The big design winners at this year’s D&AD awards

    W Conran Design’s graphic design for last year’s Paris Olympics has won D&AD’s highest accolade.
    The Black Pencil is reserved for “truly groundbreaking work” and some years none are given out.
    But this year’s juries awarded three Black Pencils, including the Paris games’ visual identity. The judges called it a “breakthrough for traditional sports marketing aesthetics” and praised the design for being “playful and scalable, with a unifying but distinctive feel that blends heritage and sport.”
    W Conran Design co-founder and creative director Gilles Deléris called working on the Olympics and Paralympics “a once-in-a-lifetime opportunity for a designer.”
    “We are so proud and honoured by this recognition, which celebrates five years of collaboration with the Paris 2024 Organising Committee teams,” he said. “It was a shared commitment to excellence and a design system that is fresh, joyful, and popular.”
    W Conran Design’s work for the Paris Olympics
    It is only the fifth time a graphic design project has won a Black Pencil, joining Johnson Banks’ Fruit and Veg stamps for Royal Mail (2004), the new UK coin designs for the Royal Mint (2005), TBWA’s Trillion Dollar Flyers for the exiled Zimbabwean Newspaper (2010), and Hans Thiessen’s provocative annual report for the Calgary Society for Persons with Disabilities (2012).
    One rebrand has previously won the top honour – Made Thought’s 2015 work for GF Smith.
    This year, the Paris design was joined on the Black Pencil podium by A$AP Rocky’s music video Tailor Swif, by Iconoclast LA, and FCB New York’s Spreadbeats, which hacked spreadsheets as a way to promote Spotify’s ad offerings.
    Spreadbeats’ Black Pencil was for digital design, and this award caps a fine couple of weeks for that work, which also cleaned up at the ADC Annual Awards, where it was named Best in Show.
    This year, 11,689 entrants from 86 countries submitted 30,000 pieces of work to the D&AD awards.
    But JKR global executive creative director Lisa Smith, who is also a D&AD trustee, said that judging was tricky due to a level of creative sameness.
    “Too many entries follow the same established design codes and trends, making everything start to look and feel alike, regardless of category,” she said. “The work that stood out – and was ultimately awarded – was the kind that breaks away from the expected: inspiring, well-crafted, and truly fit for purpose.”
    The Yellow Pencil is the highest award in each category. There were 48 in total, 11 of which went to the main design categories. These were:
    Brand Identity Refresh: Porto Rocha’s Nike Run rebrand
    Porto Rocha’s work for Nike Run
    New Brand Identity: Meat Studio for Pangmei Desserts
    Meat Studio’s work for Pangmei Desserts
    New Brand Identity: Scholz & Friends Berlin for the Tiroler Festspiele
    Scholz & Friends Berlin’s work for the Tiroler Festspiele
    New Brand Identity: Angelina Pischikova and Karina Zhukovskaya for mud
    Angelina Pischikova and Karina Zhukovskaya’s identity for their mud pet care brand.
    Graphic Design and Packaging Design: Serviceplan Germany’s Price Packs for PENNY
    Serviceplan Germany’s Price Packs for PENNY
    Graphic Design and Spatial Design: Circus Grey Peru’s Sightwalks for UNACEM
    Magazine and Newspaper Design: Uncommon Creative Studio’s covers for Port Magazine
    Uncommon Creative Studio’s work for Port Magazine
    Typography: DutchScot’s work for Danish textile company Tameko
    DutchScot’s work for Danish textile company Tameko
    Type Design and Lettering: TypeTogether’s Playwrite for Google
    TypeTogether’s Playwrite for Google
    Overall there were 107 pencils awarded in the design categories, led by graphic design (21%), packaging design (17%) and type design and lettering (15%).
    There were 31 pencils awarded in the branding categories, with double the number of awards in the new brand category (21) compared to brand refresh (10).
    There were 26 pencils for illustration, 20 for experiential, 12 for typography, and seven for writing for design.
    Across the board, the winners demonstrate “the power of design not just as a form of art, but as a catalyst for commercial success and behavioural change,” says D&AD CEO Dara Lynch.
    “The resurgence of craftsmanship stands as a reminder that in an era of automation, true excellence lies in the thoughtful execution of ideas,” she added.
    You can see all the D&AD winners on the D&AD website.
    WWW.DESIGNWEEK.CO.UK