• Formentera20 is back, and this time it promises to be even more enlightening than the previous eleven editions combined. Can you feel the excitement in the air? From October 2 to 4, 2025, the idyllic shores of Formentera will serve as the perfect backdrop for our favorite gathering of digital wizards, creativity gurus, and communication mavens. Because nothing says "cutting-edge innovation" quite like a Mediterranean island where you can sip coconut water while discussing the latest trends in the digital universe.

    This year’s theme? A delightful concoction of culture, creativity, and communication—all served with a side of salty sea breeze. Who knew the key to world-class networking was just a plane ticket away to a beach? Forget conference rooms; there’s nothing like a sun-kissed beach to inspire groundbreaking ideas. Surely, the sound of waves crashing will help us unlock the secrets of digital communication.

    And let’s not overlook the stellar lineup of speakers they've assembled. I can only imagine the conversations: “How can we boost engagement on social media?” followed by a collective nod as they all sip their overpriced organic juices. I’m sure the beach vibes will lend an air of authenticity to those discussions on algorithm tweaks and engagement metrics. Because nothing screams “authenticity” quite like a luxury resort hosting the crème de la crème of the advertising world.

    Let’s not forget the irony of discussing “innovation” while basking in the sun. Because what better way to innovate than to sit in a circle, wearing sunglasses, while contemplating the latest app that helps you find the nearest beach bar? It’s the dream, isn’t it? It’s almost poetic how the world of high-tech communication thrives in such a low-tech environment—a setting that leaves you wondering if the real innovation is simply the ability to disconnect from the digital chaos while still pretending to be a part of it.

    But let’s be real: the true highlight of Formentera20 is not the knowledge shared or the networking done; it’s the Instagram posts that will flood our feeds. After all, who doesn’t want to showcase their “hard work” at a digital festival by posting a picture of themselves with a sunset in the background? It’s all about branding, darling.

    So, mark your calendars! Prepare your best beach outfit and your most serious expression for photos. Come for the culture, stay for the creativity, and leave with the satisfaction of having been part of something that sounds ridiculously important while you, in reality, are just enjoying a holiday under the guise of professional development.

    In the end, Formentera20 isn’t just a festival; it’s an experience—one that lets you bask in the sun while pretending you’re solving the world’s digital problems. Cheers to innovation, creativity, and the art of making work look like a vacation!

    #Formentera20 #digitalculture #creativity #communication #innovation
    Formentera20 announces the speakers for its 12th edition: digital culture, creativity and communication by the sea
    From October 2 to 4, 2025, the island of Formentera will once again become a meeting point for professionals from the digital, creative and strategic fields. The Formentera20 festival will hold its twelfth edition with a lineup that, one year…
  • Editorial Design: '100 Beste Plakate 24' Showcase

    06/12 — 2025

    by abduzeedo

    Explore "100 Beste Plakate 24," a stunning yearbook by Tristesse and Slanted Publishers. Dive into cutting-edge editorial design and visual identity.
    Design enthusiasts, get ready to dive into the latest from the German-speaking design scene. The "100 Beste Plakate 24" yearbook offers a compelling showcase of contemporary graphic design. It's more than just a collection; it's a deep exploration of visual identity and editorial design.
    This yearbook, published by Slanted Publishers and edited by 100 beste Plakate e. V. and Fons Hickmann, is a testament to the power of impactful poster design. The design studio Tristesse from Basel took the reins for the overall concept, delivering a fresh and cheeky aesthetic that makes the "100 best posters" feel like leading actors on a vibrant stage. Their in-house approach to layout, typography, and photography truly shines.
    Unpacking the Visuals
    The book's format (17×24 cm) and 256 pages allow for large-format images, providing ample space to appreciate each poster's intricate details. It includes detailed credits, content descriptions, and creation contexts. This commitment to detail in the editorial design elevates the reading experience.
    One notable example within the yearbook is the "To-Do: Diplome 24" poster campaign by Atelier HKB. Designed under Marco Matti's project management, this series features twelve motifs for the Bern University of the Arts graduation events. These posters highlight effective graphic design and visual communication. Another standout is the "Rettungsplakate" by klotz-studio für gestaltung. These "rescue posters," printed on actual rescue blankets, address homelessness in Germany. The raw, impactful visual approach paired with a tangible medium demonstrates powerful design with a purpose.
    Beyond the Imagery
    Beyond the stunning visuals, the yearbook offers insightful essays and interviews on current poster design trends. The introductory section features jury members, their works, and statements on the selection process, alongside forewords from the association president and jury chair. This editorial content offers valuable context and insights into the evolving landscape of graphic design.
    The book’s concept playfully questions the seriousness and benevolence of the honorary certificates awarded to the winning designers. This subtle irony adds a unique layer to the publication, transforming it from a mere compilation into a thoughtful commentary on the design world itself. It's an inspiring showcase of the cutting edge of contemporary graphic design.
    The Art of Editorial Design
    "100 Beste Plakate 24" is a prime example of exceptional editorial design. It's not just about compiling images; it's about curating a narrative. The precise layout, thoughtful typography choices, and the deliberate flow of content all contribute to a cohesive and engaging experience. This book highlights how editorial design can transform a collection of works into a compelling story, inviting readers to delve deeper into each piece.
    The attention to detail, from the softcover with flaps to the thread-stitching and hot-foil embossing, speaks volumes about the dedication to craftsmanship. This is where illustration, graphic design, and branding converge to create a truly immersive experience.
    Final Thoughts
    This yearbook is a must-have for anyone passionate about graphic design and visual identity. It offers a fresh perspective on contemporary poster design, highlighting both aesthetic excellence and social relevance. The detailed insights into the design process and the designers' intentions make it an invaluable resource. Pick up a copy and see how impactful design can be.
    You can learn more about this incredible work and acquire your copy at slanted.de/product/100-beste-plakate-24.
    Editorial design artifacts

    Tags

    editorial design
  • Rethinking AI: DeepSeek’s playbook shakes up the high-spend, high-compute paradigm


    When DeepSeek released its R1 model this January, it wasn’t just another AI announcement. It was a watershed moment that sent shockwaves through the tech industry, forcing industry leaders to reconsider their fundamental approaches to AI development.
    What makes DeepSeek’s accomplishment remarkable isn’t that the company developed novel capabilities; rather, it was how it achieved comparable results to those delivered by tech heavyweights at a fraction of the cost. In reality, DeepSeek didn’t do anything that hadn’t been done before; its innovation stemmed from pursuing different priorities. As a result, we are now experiencing rapid-fire development along two parallel tracks: efficiency and compute. 
    As DeepSeek prepares to release its R2 model, and as it concurrently faces the potential of even greater chip restrictions from the U.S., it’s important to look at how it captured so much attention.
    Engineering around constraints
    DeepSeek’s arrival, as sudden and dramatic as it was, captivated us all because it showcased the capacity for innovation to thrive even under significant constraints. Faced with U.S. export controls limiting access to cutting-edge AI chips, DeepSeek was forced to find alternative pathways to AI advancement.
    While U.S. companies pursued performance gains through more powerful hardware, bigger models and better data, DeepSeek focused on optimizing what was available. It implemented known ideas with remarkable execution — and there is novelty in executing what’s known and doing it well.
    This efficiency-first mindset yielded impressive results. DeepSeek’s R1 model reportedly matches OpenAI’s capabilities at just 5 to 10% of the operating cost. According to reports, the final training run for DeepSeek’s V3 predecessor cost roughly $5.6 million — which was described by former Tesla AI scientist Andrej Karpathy as “a joke of a budget” compared to the tens or hundreds of millions spent by U.S. competitors. More strikingly, while OpenAI reportedly spent around $500 million training its recent “Orion” model, DeepSeek achieved superior benchmark results for a fraction of that sum — less than 1.2% of OpenAI’s investment.
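Taken at face value, the reported gap is easy to sanity-check. The figures below are the press-reported estimates discussed above, not audited costs, so treat the result as illustrative only:

```python
# Rough sanity check of the reported training-cost gap.
# Both figures are press-reported estimates, not audited numbers.
deepseek_v3_run = 5.6e6    # reported cost of DeepSeek V3's final training run (USD)
openai_orion_run = 500e6   # reported cost of OpenAI's "Orion" training (USD)

ratio = deepseek_v3_run / openai_orion_run
print(f"DeepSeek's reported spend is {ratio:.1%} of OpenAI's")
# a little over 1% — consistent with the "less than 1.2%" claim
```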
    If you’ve gotten starry-eyed believing these results were achieved despite DeepSeek’s supposedly severe disadvantage in accessing advanced AI chips, I hate to tell you, but that narrative isn’t entirely accurate. Initial U.S. export controls focused primarily on compute capabilities, not on memory and networking — two crucial components for AI development.
    That means that the chips DeepSeek had access to were not poor quality chips; their networking and memory capabilities allowed DeepSeek to parallelize operations across many units, a key strategy for running their large model efficiently.
    This, combined with China’s national push toward controlling the entire vertical stack of AI infrastructure, resulted in accelerated innovation that many Western observers didn’t anticipate. DeepSeek’s advancements were an inevitable part of AI development, but they brought known advancements forward a few years earlier than would have been possible otherwise, and that’s pretty amazing.
    Pragmatism over process
    Beyond hardware optimization, DeepSeek’s approach to training data represents another departure from conventional Western practices. Rather than relying solely on web-scraped content, DeepSeek reportedly leveraged significant amounts of synthetic data and outputs from other proprietary models. This is a classic example of model distillation, or the ability to learn from really powerful models. Such an approach, however, raises questions about data privacy and governance that might concern Western enterprise customers. Still, it underscores DeepSeek’s overall pragmatic focus on results over process.
    The effective use of synthetic data is a key differentiator. Synthetic data can be very effective when it comes to training large models, but you have to be careful; some model architectures handle synthetic data better than others. For instance, transformer-based models with mixture-of-experts (MoE) architectures like DeepSeek’s tend to be more robust when incorporating synthetic data, while more traditional dense architectures like those used in early Llama models can experience performance degradation or even “model collapse” when trained on too much synthetic content.
    This architectural sensitivity matters because synthetic data introduces different patterns and distributions compared to real-world data. When a model architecture doesn’t handle synthetic data well, it may learn shortcuts or biases present in the synthetic data generation process rather than generalizable knowledge. This can lead to reduced performance on real-world tasks, increased hallucinations or brittleness when facing novel situations. 
    Still, DeepSeek’s engineering teams reportedly designed their model architecture specifically with synthetic data integration in mind from the earliest planning stages. This allowed the company to leverage the cost benefits of synthetic data without sacrificing performance.
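The article contrasts MoE and dense architectures without showing the mechanism, so here is a minimal, hypothetical sketch of top-k expert routing in plain Python. The toy experts and random gate weights are illustrative inventions, not anything from DeepSeek's actual implementation:

```python
import math
import random

random.seed(0)

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(token, experts, gate_weights, top_k=2):
    """Route a token through only the top-k experts by gate score.

    Because only the selected experts run, per-token compute stays
    roughly constant as the total expert count grows -- the property
    that makes MoE models comparatively cheap to scale.
    """
    scores = softmax([sum(w * x for w, x in zip(gw, token)) for gw in gate_weights])
    top = sorted(range(len(experts)), key=lambda i: scores[i], reverse=True)[:top_k]
    norm = sum(scores[i] for i in top)
    out = [0.0] * len(token)
    for i in top:
        expert_out = experts[i](token)          # only chosen experts execute
        for d in range(len(token)):
            out[d] += (scores[i] / norm) * expert_out[d]
    return out, top

# Four toy "experts": each just scales the input differently.
experts = [lambda t, k=k: [k * x for x in t] for k in (0.5, 1.0, 1.5, 2.0)]
gate_weights = [[random.uniform(-1, 1) for _ in range(3)] for _ in experts]

out, chosen = moe_forward([0.2, -0.1, 0.4], experts, gate_weights, top_k=2)
print(chosen)  # indices of the 2 experts that actually ran
```

Real MoE layers add load-balancing losses and learned gates, but the routing idea is the same: capacity grows with the expert count while per-token cost does not.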
    Market reverberations
    Why does all of this matter? Stock market aside, DeepSeek’s emergence has triggered substantive strategic shifts among industry leaders.
    Case in point: OpenAI. Sam Altman recently announced plans to release the company’s first “open-weight” language model since 2019. This is a pretty notable pivot for a company that built its business on proprietary systems. It seems DeepSeek’s rise, on top of Llama’s success, has hit OpenAI’s leader hard. Just a month after DeepSeek arrived on the scene, Altman admitted that OpenAI had been “on the wrong side of history” regarding open-source AI. 
    With OpenAI reportedly spending $7 billion to $8 billion annually on operations, the economic pressure from efficient alternatives like DeepSeek has become impossible to ignore. As AI scholar Kai-Fu Lee bluntly put it: “You’re spending $7 billion or $8 billion a year, making a massive loss, and here you have a competitor coming in with an open-source model that’s for free.” This necessitates change.
    This economic reality prompted OpenAI to pursue a massive $40 billion funding round that valued the company at an unprecedented $300 billion. But even with that war chest at its disposal, the fundamental challenge remains: OpenAI’s approach is dramatically more resource-intensive than DeepSeek’s.
    Beyond model training
    Another significant trend accelerated by DeepSeek is the shift toward “test-time compute”. As major AI labs have now trained their models on much of the available public data on the internet, data scarcity is slowing further improvements in pre-training.
    To get around this, DeepSeek announced a collaboration with Tsinghua University to enable “self-principled critique tuning” (SPCT). This approach trains AI to develop its own rules for judging content and then uses those rules to provide detailed critiques. The system includes a built-in “judge” that evaluates the AI’s answers in real time, comparing responses against core rules and quality standards.
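In outline, the approach pairs a generator with a built-in judge that scores candidate answers against derived principles at inference time. A heavily simplified, hypothetical sketch of that loop follows; the stub functions stand in for real models and are not DeepSeek's actual code:

```python
# Hypothetical sketch of inference-time self-critique in the spirit of
# self-principled critique tuning (SPCT). Stubs stand in for the
# generator and judge models.

def derive_principles(question):
    """A real system would generate judging rules per question; we hard-code two."""
    return ["be specific", "give a reason"]

def satisfies(answer, principle):
    """Toy check: does the answer show a surface marker of this principle?"""
    markers = {
        "be specific": lambda a: len(a.split()) >= 6,
        "give a reason": lambda a: "because" in a.lower(),
    }
    return markers[principle](answer)

def judge(answer, principles):
    """Built-in judge: score a candidate against the derived rules."""
    return sum(satisfies(answer, p) for p in principles)

def best_answer(question, candidates):
    """Derive principles, critique every candidate, keep the top scorer."""
    principles = derive_principles(question)
    return max(candidates, key=lambda c: judge(c, principles))

candidates = [
    "MoE is better.",
    "MoE scales cheaply because only a few experts run per token.",
]
print(best_answer("Why use MoE?", candidates))
# the second candidate satisfies both toy principles, so it wins
```

The point of the sketch is the control flow, not the scoring: extra inference-time compute is spent critiquing and re-ranking outputs instead of growing the model.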
    The development is part of a movement towards autonomous self-evaluation and improvement in AI systems in which models use inference time to improve results, rather than simply making models larger during training. DeepSeek calls its system “DeepSeek-GRM”. But, as with its model distillation approach, this could be considered a mix of promise and risk.
    For example, if the AI develops its own judging criteria, there’s a risk those principles diverge from human values, ethics or context. The rules could end up being overly rigid or biased, optimizing for style over substance, and/or reinforce incorrect assumptions or hallucinations. Additionally, without a human in the loop, issues could arise if the “judge” is flawed or misaligned. It’s a kind of AI talking to itself, without robust external grounding. On top of this, users and developers may not understand why the AI reached a certain conclusion — which feeds into a bigger concern: Should an AI be allowed to decide what is “good” or “correct” based solely on its own logic? These risks shouldn’t be discounted.
    At the same time, this approach is gaining traction, as DeepSeek again builds on the body of work of others to create what is likely the first full-stack application of SPCT in a commercial effort.
    This could mark a powerful shift in AI autonomy, but there is still a need for rigorous auditing, transparency and safeguards. It’s not just about models getting smarter; they must remain aligned, interpretable and trustworthy as they begin critiquing themselves without human guardrails.
    Moving into the future
    So, taking all of this into account, the rise of DeepSeek signals a broader shift in the AI industry toward parallel innovation tracks. While companies continue building more powerful compute clusters for next-generation capabilities, there will also be intense focus on finding efficiency gains through software engineering and model architecture improvements to offset the challenges of AI energy consumption, which far outpaces power generation capacity. 
    Companies are taking note. Microsoft, for example, has halted data center development in multiple regions globally, recalibrating toward a more distributed, efficient infrastructure approach. While still planning to invest approximately $80 billion in AI infrastructure this fiscal year, the company is reallocating resources in response to the efficiency gains DeepSeek introduced to the market.
    Meta has also responded.
    With so much movement in such a short time, it becomes somewhat ironic that the U.S. sanctions designed to maintain American AI dominance may have instead accelerated the very innovation they sought to contain. By constraining access to critical hardware, the sanctions forced DeepSeek to blaze a new trail.
    Moving forward, as the industry continues to evolve globally, adaptability for all players will be key. Policies, people and market reactions will continue to shift the ground rules — whether it’s eliminating the AI diffusion rule, a new ban on technology purchases or something else entirely. It’s what we learn from one another and how we respond that will be worth watching.
    Jae Lee is CEO and co-founder of TwelveLabs.

    #rethinking #deepseeks #playbook #shakes #highspend
    Rethinking AI: DeepSeek’s playbook shakes up the high-spend, high-compute paradigm
    Join the event trusted by enterprise leaders for nearly two decades. VB Transform brings together the people building real enterprise AI strategy. Learn more When DeepSeek released its R1 model this January, it wasn’t just another AI announcement. It was a watershed moment that sent shockwaves through the tech industry, forcing industry leaders to reconsider their fundamental approaches to AI development. What makes DeepSeek’s accomplishment remarkable isn’t that the company developed novel capabilities; rather, it was how it achieved comparable results to those delivered by tech heavyweights at a fraction of the cost. In reality, DeepSeek didn’t do anything that hadn’t been done before; its innovation stemmed from pursuing different priorities. As a result, we are now experiencing rapid-fire development along two parallel tracks: efficiency and compute.  As DeepSeek prepares to release its R2 model, and as it concurrently faces the potential of even greater chip restrictions from the U.S., it’s important to look at how it captured so much attention. Engineering around constraints DeepSeek’s arrival, as sudden and dramatic as it was, captivated us all because it showcased the capacity for innovation to thrive even under significant constraints. Faced with U.S. export controls limiting access to cutting-edge AI chips, DeepSeek was forced to find alternative pathways to AI advancement. While U.S. companies pursued performance gains through more powerful hardware, bigger models and better data, DeepSeek focused on optimizing what was available. It implemented known ideas with remarkable execution — and there is novelty in executing what’s known and doing it well. This efficiency-first mindset yielded incredibly impressive results. DeepSeek’s R1 model reportedly matches OpenAI’s capabilities at just 5 to 10% of the operating cost. 
According to reports, the final training run for DeepSeek’s V3 predecessor cost a mere million — which was described by former Tesla AI scientist Andrej Karpathy as “a joke of a budget” compared to the tens or hundreds of millions spent by U.S. competitors. More strikingly, while OpenAI reportedly spent million training its recent “Orion” model, DeepSeek achieved superior benchmark results for just million — less than 1.2% of OpenAI’s investment. If you get starry eyed believing these incredible results were achieved even as DeepSeek was at a severe disadvantage based on its inability to access advanced AI chips, I hate to tell you, but that narrative isn’t entirely accurate. Initial U.S. export controls focused primarily on compute capabilities, not on memory and networking — two crucial components for AI development. That means that the chips DeepSeek had access to were not poor quality chips; their networking and memory capabilities allowed DeepSeek to parallelize operations across many units, a key strategy for running their large model efficiently. This, combined with China’s national push toward controlling the entire vertical stack of AI infrastructure, resulted in accelerated innovation that many Western observers didn’t anticipate. DeepSeek’s advancements were an inevitable part of AI development, but they brought known advancements forward a few years earlier than would have been possible otherwise, and that’s pretty amazing. Pragmatism over process Beyond hardware optimization, DeepSeek’s approach to training data represents another departure from conventional Western practices. Rather than relying solely on web-scraped content, DeepSeek reportedly leveraged significant amounts of synthetic data and outputs from other proprietary models. This is a classic example of model distillation, or the ability to learn from really powerful models. 
Such an approach, however, raises questions about data privacy and governance that might concern Western enterprise customers. Still, it underscores DeepSeek’s overall pragmatic focus on results over process. The effective use of synthetic data is a key differentiator. Synthetic data can be very effective for training large models, but you have to be careful: some model architectures handle synthetic data better than others. For instance, transformer-based models with mixture of experts (MoE) architectures like DeepSeek’s tend to be more robust when incorporating synthetic data, while more traditional dense architectures like those used in early Llama models can experience performance degradation or even “model collapse” when trained on too much synthetic content. This architectural sensitivity matters because synthetic data introduces different patterns and distributions than real-world data. When a model architecture doesn’t handle synthetic data well, it may learn shortcuts or biases present in the synthetic data generation process rather than generalizable knowledge, which can lead to reduced performance on real-world tasks, increased hallucinations, or brittleness when facing novel situations. Still, DeepSeek’s engineering teams reportedly designed their model architecture with synthetic data integration in mind from the earliest planning stages, allowing the company to leverage the cost benefits of synthetic data without sacrificing performance.

Market reverberations

Why does all of this matter? Stock market aside, DeepSeek’s emergence has triggered substantive strategic shifts among industry leaders. Case in point: OpenAI. Sam Altman recently announced plans to release the company’s first “open-weight” language model since 2019, a notable pivot for a company that built its business on proprietary systems. It seems DeepSeek’s rise, on top of Llama’s success, has hit OpenAI’s leader hard.
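The "model collapse" failure mode described above can be illustrated with a toy numerical sketch rather than a real training run: fit a simple model to data, sample new "synthetic" data from the fit, refit on it, and repeat. Generation after generation, the fitted distribution loses variance. The Gaussian "model" here is an illustrative stand-in, not any actual LLM architecture.

```python
import random
import statistics

random.seed(0)

def fit(samples):
    # Our "model" is just a Gaussian: estimate its mean and std dev.
    return statistics.mean(samples), statistics.stdev(samples)

# Generation 0 trains on real data from a standard normal distribution.
data = [random.gauss(0.0, 1.0) for _ in range(20)]
stds = []
for generation in range(500):
    mu, sigma = fit(data)
    stds.append(sigma)
    # Each subsequent generation trains purely on synthetic samples
    # drawn from the previous generation's fit.
    data = [random.gauss(mu, sigma) for _ in range(20)]

print(stds[0], stds[-1])  # the spread collapses across generations
```

Each refit slightly underestimates the spread on average, and feeding those samples forward compounds the error, which is the same dynamic that makes careless synthetic-data training risky for dense architectures.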
Just a month after DeepSeek arrived on the scene, Altman admitted that OpenAI had been “on the wrong side of history” regarding open-source AI. With OpenAI reportedly spending $7 billion to $8 billion annually on operations, the economic pressure from efficient alternatives like DeepSeek has become impossible to ignore. As AI scholar Kai-Fu Lee bluntly put it: “You’re spending $7 billion or $8 billion a year, making a massive loss, and here you have a competitor coming in with an open-source model that’s for free.” This necessitates change. This economic reality prompted OpenAI to pursue a massive $40 billion funding round that valued the company at an unprecedented $300 billion. But even with a war chest at its disposal, the fundamental challenge remains: OpenAI’s approach is dramatically more resource-intensive than DeepSeek’s.

Beyond model training

Another significant trend accelerated by DeepSeek is the shift toward “test-time compute” (TTC). As major AI labs have now trained their models on much of the available public data on the internet, data scarcity is slowing further improvements in pre-training. To get around this, DeepSeek announced a collaboration with Tsinghua University to enable “self-principled critique tuning” (SPCT). This approach trains AI to develop its own rules for judging content and then uses those rules to provide detailed critiques. The system includes a built-in “judge” that evaluates the AI’s answers in real time, comparing responses against core rules and quality standards. The development is part of a movement toward autonomous self-evaluation and improvement in AI systems, in which models use inference time to improve results rather than simply growing larger during training. DeepSeek calls its system “DeepSeek-GRM” (generalist reward modeling). But, as with its model distillation approach, this could be considered a mix of promise and risk. For example, if the AI develops its own judging criteria, there’s a risk those principles diverge from human values, ethics or context.
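The test-time-compute pattern behind this judge-driven loop can be sketched in miniature: rather than training a bigger model, spend extra inference cycles generating several candidate answers and let a "judge" pick the best. The generator, the judge and its scoring rule below are all hypothetical stand-ins, not DeepSeek's actual SPCT system.

```python
import random

random.seed(42)

TRUE_ANSWER = 7.0

def generate_candidate(question):
    # Stand-in for sampling one answer from a model: a noisy guess.
    return TRUE_ANSWER + random.gauss(0.0, 2.0)

def judge(question, answer):
    # Stand-in for a learned critique/reward model that scores answers
    # against its own criteria (here, closeness to the true value).
    return -abs(answer - TRUE_ANSWER)

def best_of_n(question, n):
    # Spend n samples of inference compute, keep the judge's favorite.
    candidates = [generate_candidate(question) for _ in range(n)]
    return max(candidates, key=lambda a: judge(question, a))

one_shot = generate_candidate("toy question")
judged = best_of_n("toy question", n=32)
print(abs(one_shot - TRUE_ANSWER), abs(judged - TRUE_ANSWER))
```

Spending 32 samples' worth of inference typically lands far closer to what the judge rewards than a single draw does, which is the trade the article describes. It also makes the risk concrete: the selected answer is only ever as good as the judge's criteria.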
The rules could end up being overly rigid or biased, optimizing for style over substance, or reinforcing incorrect assumptions or hallucinations. Additionally, without a human in the loop, issues could arise if the “judge” is flawed or misaligned; it’s a kind of AI talking to itself, without robust external grounding. On top of this, users and developers may not understand why the AI reached a certain conclusion, which feeds into a bigger concern: should an AI be allowed to decide what is “good” or “correct” based solely on its own logic? These risks shouldn’t be discounted. At the same time, the approach is gaining traction, as DeepSeek again builds on the body of work of others (think OpenAI’s “critique and revise” methods, Anthropic’s constitutional AI or research on self-rewarding agents) to create what is likely the first full-stack application of SPCT in a commercial effort. This could mark a powerful shift in AI autonomy, but there is still a need for rigorous auditing, transparency and safeguards. It’s not just about models getting smarter; they must also remain aligned, interpretable and trustworthy as they begin critiquing themselves without human guardrails.

Moving into the future

Taking all of this into account, the rise of DeepSeek signals a broader shift in the AI industry toward parallel innovation tracks. While companies continue building more powerful compute clusters for next-generation capabilities, there will also be intense focus on finding efficiency gains through software engineering and model architecture improvements to offset the challenges of AI energy consumption, which far outpaces power-generation capacity. Companies are taking note. Microsoft, for example, has halted data center development in multiple regions globally, recalibrating toward a more distributed, efficient infrastructure approach. While still planning to invest approximately $80 billion in AI infrastructure this fiscal year, the company is reallocating resources in response to the efficiency gains DeepSeek introduced to the market.
Meta has also responded. With so much movement in such a short time, it becomes somewhat ironic that the U.S. sanctions designed to maintain American AI dominance may have instead accelerated the very innovation they sought to contain. By constraining access to materials, DeepSeek was forced to blaze a new trail. Moving forward, as the industry continues to evolve globally, adaptability for all players will be key. Policies, people and market reactions will continue to shift the ground rules, whether through the elimination of the AI diffusion rule, a new ban on technology purchases or something else entirely. What we learn from one another, and how we respond, will be worth watching.

Jae Lee is CEO and co-founder of TwelveLabs.
    VENTUREBEAT.COM
    Rethinking AI: DeepSeek’s playbook shakes up the high-spend, high-compute paradigm
  • The Weirdest Part of the MCU Spider-Man Is Back for Vision Quest

    Remember that time when good ol’ Peter Parker called a drone strike on his classmates because another guy was flirting with MJ? Well, the artificial intelligence that made it happen is back, this time in snarky Canadian form!
    Deadline is reporting that Schitt’s Creek alum Emily Hampshire has been cast as E.D.I.T.H. in Vision Quest, the upcoming Disney+ series starring Paul Bettany as the synthezoid Avenger. E.D.I.T.H., of course, made her debut as a pair of ugly, gaudy sunglasses the late Tony Stark bequeathed to Peter in Spider-Man: Far From Home. Through E.D.I.T.H., Peter had access to vast technological resources, resources that Mysterio wanted to use for himself.

    At the end of Far From Home, Peter reclaimed the E.D.I.T.H. glasses, and in Spider-Man: No Way Home, a screen readout assured us that they were inactive. Moreover, No Way Home ends with Peter having his secret identity wiped from everyone’s memory and a closing shot of him hand-stitching his own costume in a dingy New York apartment, suggesting that the MCU experiment of turning working-class Peter Parker into the scion of a tech bro was done.
    That may still be true, in which case Vision Quest is a much better place for E.D.I.T.H. to exist. Created by Terry Matalas, showrunner of the Twelve Monkeys TV series and the third season of Star Trek: Picard, Vision Quest will follow the next phase in the life of the synthezoid Vision, who was killed in Avengers: Infinity War and resurrected as an initially evil clone in WandaVision.

    The title Vision Quest comes from a 1989-1990 arc of West Coast Avengers, written and penciled by John Byrne, in which the U.S. government dismantles Vision and recreates him into a mindless and easily controllable form, signified by his new bleach white look. Fans of the MCU will recognize that storyline from the last episodes of WandaVision, in which S.A.B.E.R. did the same thing to Bettany’s character.
    However, the Vision Quest comics continued to tell the story of Vision attempting to recover the humanity and personality he’d gained over the years, which will presumably be the plot of the show. Meanwhile, E.D.I.T.H. is just the latest in a host of synthetic characters set to appear. James Spader will return as Vision’s creator Ultron, and T’Nia Miller has joined the show as Jocasta, a female synthezoid originally created as Ultron’s bride. A few humans will show up as well, including the return of Faran Tahir as Raza, the leader of the Ten Rings terrorist cell, last seen in Iron Man, and frequent Matalas collaborator Todd Stashwick as a mystery man hunting Vision.
    That’s a packed cast, but as anyone who recalls the Picard season 3 episode in which androids Data and Lore merged can attest, Matalas knows how to tell an interesting story about artificial intelligence. That episode also showed that Matalas knows how to add levity to heavy conversations about existence, making Hampshire’s casting as E.D.I.T.H. a wise choice. Just don’t let her anywhere near another school bus full of teenagers.
    Vision Quest is slated to appear on Disney+ in 2026.
    WWW.DENOFGEEK.COM
  • Pentagram crafts a smart, elegant identity for AI video pioneer TwelveLabs

    Pentagram's latest project introduces a striking new identity for TwelveLabs, an AI company redefining how machines comprehend video. At its heart, the rebrand marks a conceptual shift, positioning video not as a linear sequence of frames but as a volume.
    For TwelveLabs, this reframing is more than a technological advancement; it sets the company apart in the fast-moving AI landscape. For Jody Hudson-Powell and the Pentagram team, it became the foundation for the entire creative direction.
    "The 'video as volume' idea became our way of visualising their technology," Jody explains. "Traditionally, AI sees video as separate frames – individual moments, isolated. But TwelveLabs understands everything at once, an interconnected whole. This shift in perspective became the core idea guiding all our design choices."

    Turning such an abstract and dynamic concept into a tangible brand identity was no small feat. In a sector often awash with dense data visualisations and corporate clichés, Pentagram's work for TwelveLabs stands out for its clarity and restraint. Described by the team as "smart, elegant, alive", the identity not only enhances the product's appearance but also helps explain it.
    "Abstract concepts can easily get complicated, but by staying closely focused on 'video as volume,' every decision felt purposeful and naturally clear," says Jody.
    A key part of this clarity is the system's use of diagrams and modular layouts. These elements do the heavy lifting when it comes to representing TwelveLabs' complex technology in a way that's accessible to a wide range of audiences, from engineers and developers to enterprise clients and creatives.
    "Complexity becomes understandable when you strip things back to what's truly important," Jody says. "We spent time refining the diagrams to say just enough – clear for everyone but authentic to those who really know the tech. When engineers see their work clearly represented, and others intuitively understand it, you've found some interesting ground."

    What's striking is how the identity balances the technical with the emotive. It's a visual language that feels structured but not cold, intelligent but not aloof. According to Jody, this tension was carefully calibrated.
    "Technology can feel distant, cold, but it's built around human experiences: images, memories, moments," he explains. "Our system provided clarity and structure, creating natural space for these human elements to emerge, balancing intelligence and warmth."
    Motion, too, plays a critical role in bringing the brand to life. Given that TwelveLabs' platform revolves around continuous, connected reasoning, it made sense for motion to echo that logic, being subtle but purposeful.
    "Motion always means something, even subtly," says Jody. "Every bit of movement mirrored the continuous reasoning of the platform. Allowing room for gentle expressiveness and even playfulness didn't take away from clarity – it enhanced it."

    One of the most distinctive elements of the new identity is the horse symbol. On the surface, it's a nod to Eadweard Muybridge's pioneering photographic studies of motion and an iconic reference for any visual technology. But there's a deeper connection, as TwelveLabs already named its AI models after horses. The Pentagram team simply brought that story to the forefront.
    "Often, the best ideas are already there, just waiting to be noticed," says Jody. "We simply amplified that story, connecting it back to Muybridge's historical images, allowing it to clearly communicate motion and intelligence in a fresh yet familiar way."

    Throughout the project, the team had to consider the diversity of TwelveLabs' audience, from developers and researchers to large-scale enterprise clients and the broader creative community. The result is an identity system that feels accessible without being simplistic and capable of meeting users at every level of expertise.
    "We aimed for a universal tone – clear, direct, and welcoming for everyone, regardless of their expertise," Jody says. "Precision doesn't have to be complicated, and clarity invites everyone in. Finding a voice that felt calm, clear, and honest meant meeting each person exactly where they are."
    The outcome is an identity that doesn't just repackage complex AI technology. It embodies the very qualities that make TwelveLabs' approach revolutionary: interconnected, intelligent, and distinctly human.
    WWW.CREATIVEBOOM.COM
  • We Build the LEGO Harry Potter Monster Book of Monsters: An Iconic Book That Actually Chomps

    LEGO has released a ton of new Harry Potter sets for June, but perhaps the most quirky and delightful build in the bunch is the Chomping Monster Book of Monsters set. It's a recreation of the iconic book we first see in the third Harry Potter film (The Prisoner of Azkaban), and it absolutely looks the part. More importantly, though, it actually chomps.

    Out June 1: Chomping Monster Book of Monsters, $59.99 at LEGO

    The new Monster Book of Monsters set has a lot of cool details on the outside that made it fun to put together, but it's what's on the inside that makes it fun to play with afterwards. LEGO provided IGN with a copy of the set for a test build, and I got the chance to put it together myself. At only 518 pieces, I was able to build the whole thing in one evening before I went to bed and had my nephews playing with it the next morning.

    We Build the LEGO Harry Potter Monster Book of Monsters

    Set #76449 is actually the second iteration of the LEGO Monster Book of Monsters. The first rendition was a Gift with Purchase (set #30628), called The Monster Book of Monsters, released back in 2020 with far fewer pieces and a more simplistic style. The newer Chomping Monster Book of Monsters looks a lot more realistic and includes actual chomping action. It also comes with a Neville Longbottom minifigure holding a much smaller version of the book. It's a fairly easy build, but it was fun to put together, and the chomping action was a nice touch.

    The build is split into four sections, and you get one bag of LEGO bricks for each part. You start by putting together your little Neville Longbottom minifigure. He has two different face options to choose from, so you can make him either smiley or terrified. I decided to go with smiley and placed him near the pieces as I put together everything else.

    The first part of the build is basically putting together the framework for the book. This is the longest step in the whole process, and it admittedly takes quite a bit of time until it really starts looking like something. You're building what will later become the chassis that the little chomping motor and wheels sit in, so it's important that everything faces the right direction. It really helped that red bricks indicate the back and blue bricks the front, or I definitely would have made a mistake along the way.

    It doesn't resemble anything like a monster book until you start adding some of the exterior pieces. There are light brown panels with a ridge that will look like pages once you're finished putting them together. The dark brown pointy pieces you add on the front and sides are what really start making it look like what you see on the box. You'll also add smooth panels on the back of each rectangle that eventually fit together to form the entire base of the book.

    The one thing I didn't particularly enjoy about this build was how repetitive it felt to build both sides of the book. There were some small differences between the top and the bottom, but for the most part the build felt exactly the same, so it ended up being a bit tedious to do basically the same step twice. That said, it was extremely satisfying when I finally got to connect the two halves at the spine. You thread a few long pieces through the back hinge, and suddenly you've got what looks to be a hollowed-out book.

    The next portion of the build is where it really started to become fun. Once you're done with the overall structure, you move on to building the cover of the book. You start with a series of large flat brown pieces that form the base of your cover. These are held together by two long flat pieces that are also, thankfully, color-coordinated to indicate which side is up. Once you have the base assembled, you start adding all of the cool little details that bring the set to life. This includes the actual title of the book as well as the beady little eyes and spiky little feelers.

    Once you snap the cover onto the top of your book frame, it starts looking like a legit Monster Book of Monsters. And while the cover is really the turning point, it's all of the additional details you add after this that give it an air of life. As you build the frame and the cover, you will have added a bunch of what appear to be little LEGO arms near the front pages. These become the holders for the book's spiny little teeth. There are twelve of these in total, and once they're snapped in, you can articulate them in whatever direction feels right.

    The larger teeth get added after that, which is when it starts to look like it could actually chomp you. The instructions tell you to add one set of teeth at a time, but I decided to build them both first and add them all at once for dramatic effect. When all of the teeth are attached, you'll have what looks to be a Monster Book of Monsters that's actually capable of doing some chomping.

    The final step of this set is the most interesting part. At this point you've built a fairly realistic monster book, but it's still an empty shell waiting for some internal components to get it running. Now you essentially have to build a working pull-back car that you place inside so it can get to chomping on its own.

    The motorized aspect of this build is pretty straightforward, but it's a nice break from all of the detail work to suddenly be building a little wheeled car. It was also really fun to see how well the little car I'd just built snapped into place inside the book.

    After I fully put everything together, I immediately tried out the rolling chomping action. It's a neat trick that turns what looks like a display set into an actual toy you can play with. The pull-back mechanism only goes so far back, so it doesn't actually roll that far, but the chomping action makes up for the lack of distance. As it moves forward, you can actually hear the teeth clacking together. I had both of my younger nephews play with the set afterwards, and they enjoyed it almost as much as I did. The gimmick wears off fairly quickly after you've done it a few times, but you're still left with a really cool-looking set you can display somewhere.

    The price of the set is fairly reasonable at $60, placing it well below some of the most expensive sets on the market right now. Franchise-specific sets are always going to be more expensive than non-franchise sets with a similar number of pieces, and this has consistently held true for LEGO Harry Potter sets. All in all, it's a set I'd recommend to any fan of Harry Potter and LEGO. It's a fun and simple build you can knock out in an afternoon, and the finished product would make a great Harry Potter gift you can display on a shelf or your desk.

    LEGO Harry Potter Chomping Monster Book of Monsters, Set #76449, retails for $59.99 and is composed of 518 pieces. It is available at the LEGO Store beginning on June 1, 2025.
    WWW.IGN.COM