• Wow, can you believe how far marketing has come? I mean, I literally fell for that floating Louis Vuitton logo! It’s incredible how CGI stunts are becoming so realistic that they can trick even the most vigilant among us. But hey, isn’t that the beauty of innovation?

    Embrace the surprise and excitement of these creative endeavors! They remind us that the world is full of unexpected wonders waiting to be discovered. Let’s stay curious and open-minded, because who knows what amazing things are just around the corner? Keep shining and believing in the magic of creativity!

    #LouisVuitton #CGIMagic #Innovation #StayCurious #BelieveInMagic
  • Are you ready to embark on an exciting journey into the world of freelance 3D artistry? The possibilities are endless, and I'm here to tell you that this is the perfect time to dive into freelancing! Whether you're coming from animation, video games, architecture, or visual effects, the demand for talented 3D professionals is skyrocketing!

    Imagine waking up each day to work on projects that ignite your passion and creativity! Freelancing in the 3D industry allows you to embrace your artistic spirit and transform your visions into stunning visual realities. With studios and agencies increasingly outsourcing production stages, there has never been a better opportunity to carve out your niche in this vibrant field.

    Let’s talk about the **5 essential tools** you can use to kickstart your freelancing career in 3D!

    1. **Blender**: This powerful and free software is a game-changer! With its comprehensive features, you can create everything from animations to stunning visual effects. (See the short scripting sketch just after this list.)

    2. **Autodesk Maya**: Elevate your skills with this industry-standard tool! Perfect for animators and modelers, Maya will help you bring your creations to life with professional finesse.

    3. **Substance Painter**: Don’t underestimate the power of textures! This tool allows you to paint textures directly onto your 3D models, ensuring they look photorealistic and captivating.

    4. **Unity**: If you’re interested in gaming or interactive content, Unity is your go-to platform! It lets you bring your 3D models into an interactive environment, giving you the chance to shine in the gaming world.

    5. **Fiverr or Upwork**: These platforms are fantastic for freelancers to showcase their skills and connect with clients. Start building your portfolio and watch your network grow!
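
    To make the Blender point concrete: Blender is fully scriptable through its bundled Python module, `bpy`, which is handy for automating repetitive freelance work such as batch keyframing or scene setup. Here is a minimal sketch; it only runs inside Blender (for example via `blender --background --python demo.py`), and the object and timing values are purely illustrative.

    ```python
    # Minimal Blender (bpy) sketch: add a cube and keyframe a simple rise.
    # Runs only inside Blender, where the bundled `bpy` module is available.
    import bpy

    bpy.ops.mesh.primitive_cube_add(size=2)               # 2x2x2 cube at the origin
    cube = bpy.context.active_object

    cube.location = (0, 0, 0)
    cube.keyframe_insert(data_path="location", frame=1)   # start keyframe

    cube.location = (0, 0, 5)
    cube.keyframe_insert(data_path="location", frame=48)  # rise over ~2 s at 24 fps
    ```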

    Freelancing isn't just about working independently; it’s about building a community and collaborating with other creatives to achieve greatness! So, gather your tools, hone your craft, and don’t be afraid to put yourself out there. Every project is an opportunity to learn and grow!

    Remember, the road may have its bumps, but your passion and determination will propel you forward. Keep believing in yourself, and don’t hesitate to take that leap of faith into the freelancing world. Your dream career is within reach!

    #Freelance3D #3DArtistry #CreativeJourney #Freelancing #3DModeling
    5 tools for going freelance in the 3D industry
    Partnership: Freelancing is a natural path for many 3D artists and technicians, whether they come from animation, video games, architecture or visual effects. Alongside an explosion in demand for visual content…
  • Hey everyone!

    Today, I want to talk about something that’s making waves in the gaming community: the launch of the fast-paced online soccer game, Rematch! ⚽️ While many of us were super excited to jump into the action, we heard some news that might have dampened our spirits a bit — the game is launching without crossplay.

    But hold on! Before we let that news take the wind out of our sails, let’s take a moment to reflect on the bigger picture! The developers at Sloclap have made it clear that adding crossplay is a top priority for them. This means they’re listening to us, the players! They want to ensure that our experience is as enjoyable as possible, and they’re committed to making it happen. How awesome is that?

    Sure, it’s disappointing to not have crossplay right at launch, especially when we were all looking forward to uniting friends across different platforms for some thrilling matches. However, let’s remember that every great game has its journey, and sometimes, it takes a little time to get everything just right.

    We have the opportunity to show our support for the developers and the community by remaining optimistic! Imagine the epic matches we’ll have once crossplay is implemented! The idea of teaming up with friends on different consoles or PCs to score those last-minute goals is exhilarating!

    So, instead of focusing on the disappointment, let’s channel our energy into celebrating the launch of Rematch! Let’s dive into the gameplay, explore all the features, and share our experiences with one another! We can build an amazing community that encourages one another and fosters a love for the game.

    Remember, every setback is a setup for a comeback! Let’s keep our spirits high and look forward to all the updates and improvements that Sloclap has in store for us. The future of Rematch is bright, and I can’t wait to see where it takes us!

    Let’s keep playing, keep having fun, and keep believing in the magic of gaming! Who’s ready to hit the pitch? ⚽️

    #RematchGame #GamingCommunity #KeepPlaying #StayPositive #SoccerFun
    Rematch Launching Without Crossplay, Disappointing Many Players
    Fast-paced online soccer game Rematch is launching without crossplay. This was confirmed online just a few hours before the sports game launched on consoles and PC. Developers Sloclap say adding crossplay is a top priority, but many players are still…
  • Hey, wonderful people! Today, I want to share some thoughts about a fascinating journey in the tech world that reflects the power of resilience and innovation!

    Back in the late '90s, Apple was navigating through turbulent times, trying to regain its footing in a highly competitive market. Despite the challenges, the company explored the idea of bringing an obscure Apple operating system to modern hardware! Isn't that inspiring?

    This story isn’t just about technology; it's a powerful reminder of how taking bold steps can lead to transformative changes. Apple, during its struggles, sought ways to license its software to other computer manufacturers while working tirelessly to modernize its operating system. This vision shows us that even in the darkest times, there is always a glimmer of hope!

    Imagine the courage it took to reach out for collaboration in a time when the future seemed uncertain. Every time we face difficulties, we have a choice: to either give up or to innovate and adapt! Just like Apple, we can turn our setbacks into setups for a greater comeback!

    The process of modernizing the Apple operating system was not just about technology; it was about belief—belief in progress, belief in collaboration, and most importantly, belief in oneself! So, let’s take a page from Apple's book! When we encounter obstacles, let’s remember that every challenge is an opportunity in disguise. Every setback can lead to a leap forward!

    Let’s embrace change, think outside the box, and bring our unique ideas into the world. Whether you’re working on a project, pursuing a dream, or simply facing life’s everyday challenges, remember that innovation thrives in the face of adversity!

    So, go ahead and be the change you wish to see! Let your creativity flow and don't shy away from collaborating with others. The best ideas often come from the most unexpected places! Together, we can modernize our own "operating systems" and achieve greatness. Let's make it happen!

    Keep shining, keep believing, and let’s transform the world one idea at a time!

    #Innovation #AppleJourney #BelieveInYourself #TechInspiration #Resilience
    Bringing an Obscure Apple Operating System to Modern Hardware
    During Apple’s late-90s struggles with profitability, it made a few overtures toward licensing its software to other computer manufacturers, while at the same time trying to modernize its operating system…
  • Rethinking AI: DeepSeek’s playbook shakes up the high-spend, high-compute paradigm


    When DeepSeek released its R1 model this January, it wasn’t just another AI announcement. It was a watershed moment that sent shockwaves through the tech industry, forcing industry leaders to reconsider their fundamental approaches to AI development.
    What makes DeepSeek’s accomplishment remarkable isn’t that the company developed novel capabilities; rather, it was how it achieved comparable results to those delivered by tech heavyweights at a fraction of the cost. In reality, DeepSeek didn’t do anything that hadn’t been done before; its innovation stemmed from pursuing different priorities. As a result, we are now experiencing rapid-fire development along two parallel tracks: efficiency and compute. 
    As DeepSeek prepares to release its R2 model, and as it concurrently faces the potential of even greater chip restrictions from the U.S., it’s important to look at how it captured so much attention.
    Engineering around constraints
    DeepSeek’s arrival, as sudden and dramatic as it was, captivated us all because it showcased the capacity for innovation to thrive even under significant constraints. Faced with U.S. export controls limiting access to cutting-edge AI chips, DeepSeek was forced to find alternative pathways to AI advancement.
    While U.S. companies pursued performance gains through more powerful hardware, bigger models and better data, DeepSeek focused on optimizing what was available. It implemented known ideas with remarkable execution — and there is novelty in executing what’s known and doing it well.
    This efficiency-first mindset yielded incredibly impressive results. DeepSeek’s R1 model reportedly matches OpenAI’s capabilities at just 5 to 10% of the operating cost. According to reports, the final training run for DeepSeek’s V3 predecessor cost a mere $6 million — which was described by former Tesla AI scientist Andrej Karpathy as “a joke of a budget” compared to the tens or hundreds of millions spent by U.S. competitors. More strikingly, while OpenAI reportedly spent $500 million training its recent “Orion” model, DeepSeek achieved superior benchmark results for just $5.6 million — less than 1.2% of OpenAI’s investment.
    If you get starry-eyed believing these incredible results were achieved even as DeepSeek was at a severe disadvantage based on its inability to access advanced AI chips, I hate to tell you, but that narrative isn’t entirely accurate (even though it makes a good story). Initial U.S. export controls focused primarily on compute capabilities, not on memory and networking — two crucial components for AI development.
    That means that the chips DeepSeek had access to were not poor quality chips; their networking and memory capabilities allowed DeepSeek to parallelize operations across many units, a key strategy for running their large model efficiently.
    This, combined with China’s national push toward controlling the entire vertical stack of AI infrastructure, resulted in accelerated innovation that many Western observers didn’t anticipate. DeepSeek’s advancements were an inevitable part of AI development, but they brought known advancements forward a few years earlier than would have been possible otherwise, and that’s pretty amazing.
    Pragmatism over process
    Beyond hardware optimization, DeepSeek’s approach to training data represents another departure from conventional Western practices. Rather than relying solely on web-scraped content, DeepSeek reportedly leveraged significant amounts of synthetic data and outputs from other proprietary models. This is a classic example of model distillation, or the ability to learn from really powerful models. Such an approach, however, raises questions about data privacy and governance that might concern Western enterprise customers. Still, it underscores DeepSeek’s overall pragmatic focus on results over process.
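    For readers unfamiliar with the term, here is a minimal sketch of the classic soft-label distillation recipe (in the style of Hinton et al., offered as an illustration rather than DeepSeek’s actual training code):
    ```python
    import torch
    import torch.nn.functional as F

    def distill_loss(student_logits: torch.Tensor,
                     teacher_logits: torch.Tensor,
                     T: float = 2.0) -> torch.Tensor:
        """Soft-label distillation: push the student's predicted distribution
        toward the teacher's temperature-softened distribution via KL divergence."""
        p_teacher = F.softmax(teacher_logits / T, dim=-1)        # soft targets
        log_p_student = F.log_softmax(student_logits / T, dim=-1)
        # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
        return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

    # Toy usage: 4 examples over a 10-token vocabulary.
    loss = distill_loss(torch.randn(4, 10), torch.randn(4, 10))
    ```
    In practice the teacher’s logits or sampled outputs are generated offline, which is why distillation pairs naturally with synthetic-data pipelines.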
    The effective use of synthetic data is a key differentiator. Synthetic data can be very effective when it comes to training large models, but you have to be careful; some model architectures handle synthetic data better than others. For instance, transformer-based models with mixture of experts (MoE) architectures like DeepSeek’s tend to be more robust when incorporating synthetic data, while more traditional dense architectures like those used in early Llama models can experience performance degradation or even “model collapse” when trained on too much synthetic content.
    This architectural sensitivity matters because synthetic data introduces different patterns and distributions compared to real-world data. When a model architecture doesn’t handle synthetic data well, it may learn shortcuts or biases present in the synthetic data generation process rather than generalizable knowledge. This can lead to reduced performance on real-world tasks, increased hallucinations or brittleness when facing novel situations. 
    Still, DeepSeek’s engineering teams reportedly designed their model architecture specifically with synthetic data integration in mind from the earliest planning stages. This allowed the company to leverage the cost benefits of synthetic data without sacrificing performance.
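    To illustrate the MoE idea mentioned above (many small expert networks plus a learned gate that routes each token to only a few of them), here is a deliberately tiny sketch; it is a toy routing layer, not DeepSeek’s architecture:
    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyMoE(nn.Module):
        """Toy mixture-of-experts layer: a gate routes each token to its top-k
        experts, so only a fraction of parameters is active per token."""
        def __init__(self, d_model: int = 64, n_experts: int = 4, k: int = 2):
            super().__init__()
            self.experts = nn.ModuleList(
                nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                              nn.Linear(4 * d_model, d_model))
                for _ in range(n_experts))
            self.gate = nn.Linear(d_model, n_experts)
            self.k = k

        def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (tokens, d_model)
            weights = F.softmax(self.gate(x), dim=-1)          # routing probabilities
            topw, topi = weights.topk(self.k, dim=-1)          # keep the k best experts
            out = torch.zeros_like(x)
            for slot in range(self.k):
                for e, expert in enumerate(self.experts):
                    mask = topi[:, slot] == e                  # tokens routed here
                    if mask.any():
                        out[mask] += topw[mask, slot, None] * expert(x[mask])
            return out

    print(TinyMoE()(torch.randn(8, 64)).shape)   # torch.Size([8, 64])
    ```
    The sparsity is the point: compute per token scales with k rather than with the total number of experts, one reason MoE models can be comparatively cheap to train and run.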
    Market reverberations
    Why does all of this matter? Stock market aside, DeepSeek’s emergence has triggered substantive strategic shifts among industry leaders.
    Case in point: OpenAI. Sam Altman recently announced plans to release the company’s first “open-weight” language model since 2019. This is a pretty notable pivot for a company that built its business on proprietary systems. It seems DeepSeek’s rise, on top of Llama’s success, has hit OpenAI’s leader hard. Just a month after DeepSeek arrived on the scene, Altman admitted that OpenAI had been “on the wrong side of history” regarding open-source AI. 
    With OpenAI reportedly spending $7 to 8 billion annually on operations, the economic pressure from efficient alternatives like DeepSeek has become impossible to ignore. As AI scholar Kai-Fu Lee bluntly put it: “You’re spending $7 billion or $8 billion a year, making a massive loss, and here you have a competitor coming in with an open-source model that’s for free.” This necessitates change.
    This economic reality prompted OpenAI to pursue a massive $40 billion funding round that valued the company at an unprecedented $300 billion. But even with a war chest of funds at its disposal, the fundamental challenge remains: OpenAI’s approach is dramatically more resource-intensive than DeepSeek’s.
    Beyond model training
    Another significant trend accelerated by DeepSeek is the shift toward “test-time compute” (TTC). As major AI labs have now trained their models on much of the available public data on the internet, data scarcity is slowing further improvements in pre-training.
    To get around this, DeepSeek announced a collaboration with Tsinghua University to enable “self-principled critique tuning” (SPCT). This approach trains AI to develop its own rules for judging content and then uses those rules to provide detailed critiques. The system includes a built-in “judge” that evaluates the AI’s answers in real time, comparing responses against core rules and quality standards.
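    As a rough intuition for what such a loop looks like, here is a hypothetical sketch; the `generate` callable stands in for any LLM call, and the prompts and function names are invented for illustration, not DeepSeek’s actual pipeline:
    ```python
    # Hypothetical self-critique loop in the spirit of SPCT: the model first
    # writes its own judging principles, then a "judge" pass applies them.

    PRINCIPLES_PROMPT = "List three rules for judging an answer to: {q}"
    CRITIQUE_PROMPT = ("Rules:\n{rules}\n\nAnswer:\n{a}\n\n"
                       "Score the answer 1-10 against each rule and explain.")

    def self_principled_critique(generate, question: str):
        answer = generate(question)                              # candidate answer
        rules = generate(PRINCIPLES_PROMPT.format(q=question))   # model-written rubric
        critique = generate(CRITIQUE_PROMPT.format(rules=rules, a=answer))  # judge pass
        return answer, critique

    # Stub model so the sketch runs standalone.
    echo = lambda prompt: f"[model output for: {prompt[:40]}...]"
    print(self_principled_critique(echo, "Why is the sky blue?"))
    ```
    In a real system the critique would feed back into training or into choosing among candidate answers, which is where the extra inference-time compute pays off.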
    The development is part of a movement towards autonomous self-evaluation and improvement in AI systems, in which models use inference time to improve results rather than simply making models larger during training. DeepSeek calls its system “DeepSeek-GRM” (generalist reward modeling). But, as with its model distillation approach, this could be considered a mix of promise and risk.
    For example, if the AI develops its own judging criteria, there’s a risk those principles diverge from human values, ethics or context. The rules could end up being overly rigid or biased, optimizing for style over substance, and/or reinforce incorrect assumptions or hallucinations. Additionally, without a human in the loop, issues could arise if the “judge” is flawed or misaligned. It’s a kind of AI talking to itself, without robust external grounding. On top of this, users and developers may not understand why the AI reached a certain conclusion — which feeds into a bigger concern: Should an AI be allowed to decide what is “good” or “correct” based solely on its own logic? These risks shouldn’t be discounted.
    At the same time, this approach is gaining traction, as again DeepSeek builds on the body of work of others (think OpenAI’s “critique and revise” methods, Anthropic’s constitutional AI or research on self-rewarding agents) to create what is likely the first full-stack application of SPCT in a commercial effort.
    This could mark a powerful shift in AI autonomy, but there still is a need for rigorous auditing, transparency and safeguards. It’s not just about models getting smarter, but that they remain aligned, interpretable, and trustworthy as they begin critiquing themselves without human guardrails.
    Moving into the future
    So, taking all of this into account, the rise of DeepSeek signals a broader shift in the AI industry toward parallel innovation tracks. While companies continue building more powerful compute clusters for next-generation capabilities, there will also be intense focus on finding efficiency gains through software engineering and model architecture improvements to offset the challenges of AI energy consumption, which far outpaces power generation capacity. 
    Companies are taking note. Microsoft, for example, has halted data center development in multiple regions globally, recalibrating toward a more distributed, efficient infrastructure approach. While still planning to invest approximately $80 billion in AI infrastructure this fiscal year, the company is reallocating resources in response to the efficiency gains DeepSeek introduced to the market.
    Meta has also responded…
    With so much movement in such a short time, it becomes somewhat ironic that the U.S. sanctions designed to maintain American AI dominance may have instead accelerated the very innovation they sought to contain. By constraining access to materials, DeepSeek was forced to blaze a new trail.
    Moving forward, as the industry continues to evolve globally, adaptability for all players will be key. Policies, people and market reactions will continue to shift the ground rules — whether it’s eliminating the AI diffusion rule, a new ban on technology purchases or something else entirely. It’s what we learn from one another and how we respond that will be worth watching.
    Jae Lee is CEO and co-founder of TwelveLabs.

  • Can imagining a better future really make it come true?

    (Illustration: Brett Ryder / Alamy)
    First popularised by the bestselling New Age book The Secret, manifestation has remained a cultural phenomenon for decades, championed by people from Oprah Winfrey to Deepak Chopra. Advocates claim you can attract whatever you want — whether that’s a romantic partner, a new business opportunity or even a material object — by asking the universe for it and believing that it can deliver. Some practitioners propose physics-defying explanations that evoke mysterious vibrational forces to explain its effectiveness.

    This article is part of a special series exploring the radical potential of the human imagination. Read more here.

    This is clearly nonsense, but neuroscientist Sabina Brennan was nevertheless intrigued. What might be the real reason that the practices involved in manifesting can benefit people’s lives? She realised that there were several fascinating, evidence-based explanations for why such interventions can rewire the brain in ways that help you achieve what you desire. In her new book, The Neuroscience of Manifesting, Brennan unpacks some of the mechanisms behind this enduring practice.
    Helen Thomson: Can you start by telling me what manifestation is? 
    Sabina Brennan: Manifesting is the practice of transforming thought into reality by visualising your goal and then developing the discipline to stay focused on and take action to achieve that goal. You can’t magically make things happen — you can’t defy physics — but you can change your reality and your future through focused action.
    Manifestation is easy to disregard as unscientific nonsense – why did you think differently? 
    There are a few reasons why manifesting is dismissed by some academics. One is the misconception that manifesting is just wishful thinking rather than the…
  • MindsEye Requires “20ish” Hours to Complete, Says Director

    Build A Rocket Boy’s MindsEye has had quite the journey in the past week, from finally showcasing extensive gameplay to its co-CEO believing that those with negative reactions are “100 percent” financed by “someone.” Nevertheless, studio founder and director Leslie Benzies believes in the game’s vision and in offering value for money without the filler.
    Speaking to GamesIndustry, Benzies said, “I don’t think you can have filler content in games. I think people want the meat and they want the potatoes. We’ve tried to make as much meat as we can if that makes sense.”
    Though MindsEye’s story was previously touted as requiring about 15 hours to finish, Benzies has since noted that it will take “20ish” hours, which is a “good length for a game.”
    “What you also find through data is that [with] big games, people don’t play them all. The majority of people – 60% or 70% of people – don’t actually play games to the end.”
    “So when you’re making something, I would prefer – I’m sure the team would say the same – [that] you had the whole experience from start to finish and not create this 200-hour game. Create something that is finishable but has some side things that will fill out the universe. A lot of the side missions on the play side of MindsEye do fill out the characters’ back stories or fill out what was happening in the world.”
    As for the price, which is $60, Benzies said, “The world’s in a funny place. People are worried about the price of eggs. So value for money, I think people appreciate that when times are difficult.”
    The discussion is interesting since titles like Clair Obscur: Expedition 33 and The Elder Scrolls 4: Oblivion Remastered have received praise for offering so much at $49.99. Whether MindsEye can aspire to the same quality of content in its runtime remains to be seen.
    It launches on June 10th worldwide for Xbox Series X/S, PS5, and PC, but the journey only starts there, with a post-launch roadmap announced. Check it out here.
  • Taylor Swift finally owns it all: Every album, every song, every era

    Swifties have plenty to celebrate on Friday as Taylor Swift announced that she now owns the master recordings of her first six albums after years of trying and failing to buy them.

    Swift posted the news to her website, explaining that she was able to purchase the original versions of the albums from Shamrock Capital, the private equity firm that bought the recordings from music manager Scooter Braun in 2020 for at least $300 million.

    In an emotional letter, Swift called securing her masters a dream come true, describing herself as “endlessly thankful” to Shamrock Capital for handling the deal fairly and offering her the first chance she’s ever been given to buy back her own music. “This was a business deal to them, but I really felt like they saw it for what it was to me: My memories and my sweat and my handwriting and my decades of dreams,” Swift wrote.

    An uphill battle, even for a billionaire titan of the music industry

    After two decades “of having the carrot dangled and then yanked away,” Swift admitted that she almost stopped believing that she would ever own the original recordings.

    “But that’s all in the past now,” Swift wrote. “I’ve been bursting into tears of joy at random intervals ever since I found out that this is really happening. I really get to say these words: All of the music I’ve ever made . . . now belongs . . . to me.”

    In 2019, Braun acquired Nashville indie record label Big Machine, along with the rights to the albums Swift had recorded there. After Braun’s purchase, Swift stated that she was in no way consulted on the deal and had suffered from “incessant, manipulative bullying” by the industry executive. 

    “It’s a shame to know that I will now be unable to help grow the future of these past works and it pains me very deeply to be separated from the music I spent over a decade creating,” Swift said after the deal went public.

    An update on the status of Reputation

    In light of her struggle to regain control of her own music, Swift set out to rerecord the albums she didn’t own. She began issuing Taylor’s Version updates to those catalog albums in 2021, putting out rerecordings of Fearless, Red, Speak Now, and 1989 accompanied by previously unreleased songs.

    Fans eager for news that Swift had finished rerecording her sixth studio album, Reputation, have plenty to cheer but are still in for a wait. In her announcement, Swift divulged that, “full transparency,” she’s less than a quarter of the way through the process.

    “To be perfectly honest, it’s the one album in the first 6 that I thought couldn’t be improved upon by redoing it. Not the music, or photos, or videos. So I kept putting it off,” Swift wrote, adding that she’s happy with a now-finished rerecording of her self-titled debut album. 

    “Those 2 albums can still have their moments to re-emerge when the time is right . . . But if it happens, it won’t be from a place of sadness and longing for what I wish I could have,” Swift wrote. “It will just be a celebration now.”