• European Broadcasting Union and NVIDIA Partner on Sovereign AI to Support Public Broadcasters

    In a new effort to advance sovereign AI for European public service media, NVIDIA and the European Broadcasting Union (EBU) are working together to give the media industry access to high-quality and trusted cloud and AI technologies.
    Announced at NVIDIA GTC Paris at VivaTech, NVIDIA’s collaboration with the EBU — the world’s leading alliance of public service media with more than 110 member organizations in 50+ countries, reaching an audience of over 1 billion — focuses on helping build sovereign AI and cloud frameworks, driving workforce development and cultivating an AI ecosystem to create a more equitable, accessible and resilient European media landscape.
    The work will create better foundations for public service media to benefit from European cloud infrastructure and AI services that are exclusively governed by European policy, comply with European data protection and privacy rules, and embody European values.
    Sovereign AI ensures nations can develop and deploy artificial intelligence using local infrastructure, datasets and expertise. By investing in it, European countries can preserve their cultural identity, enhance public trust and support innovation specific to their needs.
    “We are proud to collaborate with NVIDIA to drive the development of sovereign AI and cloud services,” said Michael Eberhard, chief technology officer of public broadcaster ARD/SWR, and chair of the EBU Technical Committee. “By advancing these capabilities together, we’re helping ensure that powerful, compliant and accessible media services are made available to all EBU members — powering innovation, resilience and strategic autonomy across the board.”

    Empowering Media Innovation in Europe
    To support the development of sovereign AI technologies, NVIDIA and the EBU will establish frameworks that prioritize independence and public trust, helping ensure that AI serves the interests of Europeans while preserving the autonomy of media organizations.
    Through this collaboration, NVIDIA and the EBU will develop hybrid cloud architectures designed to meet the highest standards of European public service media. The EBU will contribute its Dynamic Media Facility (DMF) and Media eXchange Layer (MXL) architecture, aiming to enable interoperability and scalability for workflows, as well as cost- and energy-efficient AI training and inference. Following open-source principles, this work aims to create an accessible, dynamic technology ecosystem.
    The collaboration will also provide public service media companies with the tools to deliver personalized, contextually relevant services and content recommendation systems, with a focus on transparency, accountability and cultural identity. This will be realized through investment in sovereign cloud and AI infrastructure and software platforms such as NVIDIA AI Enterprise, custom foundation models, large language models trained with local data, and retrieval-augmented generation technologies.
    As part of the collaboration, NVIDIA is also making available resources from its Deep Learning Institute, offering European media organizations comprehensive training programs to create an AI-ready workforce. This will support the EBU’s efforts to help ensure news integrity in the age of AI.
    In addition, the EBU and its partners are investing in local data centers and cloud platforms that support sovereign technologies, such as NVIDIA GB200 Grace Blackwell Superchip, NVIDIA RTX PRO Servers, NVIDIA DGX Cloud and NVIDIA Holoscan for Media — helping members of the union achieve secure and cost- and energy-efficient AI training, while promoting AI research and development.
    Partnering With Public Service Media for Sovereign Cloud and AI
    Collaboration within the media sector is essential for the development and application of comprehensive standards and best practices that ensure the creation and deployment of sovereign European cloud and AI.
    By engaging with independent software vendors, data center providers, cloud service providers and original equipment manufacturers, NVIDIA and the EBU aim to create a unified approach to sovereign cloud and AI.
    This work will also facilitate discussions between the cloud and AI industry and European regulators, helping ensure the development of practical solutions that benefit both the general public and media organizations.
    “Building sovereign cloud and AI capabilities based on EBU’s Dynamic Media Facility and Media eXchange Layer architecture requires strong cross-industry collaboration,” said Antonio Arcidiacono, chief technology and innovation officer at the EBU. “By collaborating with NVIDIA, as well as a broad ecosystem of media technology partners, we are fostering a shared foundation for trust, innovation and resilience that supports the growth of European media.”
    Learn more about the EBU.
    Watch the NVIDIA GTC Paris keynote from NVIDIA founder and CEO Jensen Huang at VivaTech, and explore GTC Paris sessions. 
    #european #broadcasting #union #nvidia #partner
  • Spotify and Apple are killing the album cover, and it’s time we raised our voices against this travesty! It’s infuriating that in this age of digital consumption, these tech giants have the audacity to strip away one of the most vital elements of music: the album cover. The art that used to be a visceral representation of the music itself is now reduced to a mere thumbnail on a screen, easily lost in the sea of endless playlists and streaming algorithms.

    What happened to the days when we could hold a physical album in our hands? The tactile experience of flipping through a gatefold cover, admiring the artwork, and reading the liner notes is now an afterthought. Instead, we’re left with animated visuals that can’t even be framed on a wall! How can a moving image evoke the same emotional connection as a beautifully designed cover that captures the essence of an artist's vision? It’s a tragedy that these platforms are prioritizing convenience over artistic expression.

    The music industry needs to wake up! Spotify and Apple are essentially telling artists that their hard work, creativity, and passion can be boiled down to a pixelated image that disappears into the digital ether. This is an outright assault on the artistry of music! Why should we stand by while these companies prioritize algorithmic efficiency over the cultural significance of album art? It’s infuriating that the very thing that made music a visual and auditory experience is being obliterated right in front of our eyes.

    Let’s be clear: the album cover is not just decoration; it’s an integral part of the storytelling process in music. It sets the tone, evokes emotions, and can even influence how we perceive the music itself. When an album cover is designed with care and intention, it becomes an extension of the artist’s voice. Yet here we are, scrolling through Spotify and Apple Music, bombarded with generic visuals that do nothing to honor the artists or their work.

    Spotify and Apple need to be held accountable for this degradation of music culture. This isn’t just about nostalgia; it’s about preserving the integrity of artistic expression. We need to demand that these platforms acknowledge the importance of album covers and find ways to integrate them into our digital experiences. Otherwise, we’re on a dangerous path where music becomes nothing more than a disposable commodity.

    If we allow Spotify and Apple to continue on this trajectory, we risk losing an entire culture of artistic expression. It’s time for us as consumers to take a stand and remind these companies that music is not just about the sound; it’s about the entire experience.

    Let’s unite and fight back against this digital degradation of music artistry. We deserve better than a world where the album cover is dying a slow death. Let’s reclaim the beauty of music and its visual representation before it’s too late!

    #AlbumArt #MusicCulture #Spotify #AppleMusic #ProtectArtistry
  • Hey there, amazing gamers!

    Today, let’s talk about something super exciting and absolutely essential for all of you proud owners of the Nintendo Switch 2! Yes, I’m talking about the LOCK SCREEN feature that you’ve probably heard about! It might seem like a small thing, but trust me, it can make a HUGE difference in how you protect your gaming world!

    In a time where security is more important than ever, our beloved devices deserve the best protection we can give them! Think about it: your Nintendo Switch 2 isn’t just a gaming console; it’s a vault of your precious memories! From those epic save data moments to screenshots that capture your greatest achievements, you don’t want just anyone getting their hands on it!

    And let's be real here, whether it’s keeping your friends from accidentally deleting your progress or ensuring your little ones aren’t accessing games they shouldn’t, that lock screen is a GAME CHANGER! By activating it, you’re not just protecting your data; you’re also preserving the joy and excitement that comes with every gaming session!

    Using the lock screen is a simple step that offers peace of mind. Just imagine being able to dive into your gaming adventure without worrying about someone else interrupting your flow! Security is about more than just privacy; it’s about creating a safe space where you can enjoy your favorite games without any interruptions! And we all know how important that is, right?

    So, let’s celebrate this fantastic feature! Embrace it, activate that lock screen, and take your gaming experience to the next level! Whether you’re on a quest to save the world or simply enjoying a cozy evening of gaming, remember that a little extra protection goes a long way!

    Also, don’t forget to share this with your fellow gamers! Let’s spread the word about the importance of security and make sure everyone is using that lock screen! Together, we can create a safer gaming community where everyone can enjoy their adventures!

    Stay positive, keep gaming, and remember: YOU have the power to protect your gaming treasures! Let’s do this!

    #NintendoSwitch2 #GamingSecurity #LockScreen #ProtectYourGames #GameOn
  • GOG is talking about game preservation again. Apparently, they think it’s a big deal. They’ve got this whole GOG Preservation Program now. Seems like they really want to keep old games alive or something. Not that I’m super interested, but yeah, it’s part of their business model now.

    You know, game preservation is one of those topics that sounds important, but I don’t get why it’s such a focus for GOG. I mean, we have so many games already. Do we really need to save every single one? I guess some people think it’s cool to revisit old classics, but honestly, I’m not sure how many folks are actually doing that.

    The whole idea of preserving games seems a bit… I don’t know, unnecessary? I can’t shake the feeling that it’s just a way for GOG to make more money. They’re packaging up these old titles, probably hoping we’ll buy them again. It’s like, “Look, it’s a classic!” But, is it really that exciting?

    I can see why they’d want to make it part of their business strategy. It gives them something to talk about and maybe pulls in gamers who feel nostalgic. But for me, it’s a bit of a snooze-fest. I mean, sure, some classics are great, but do I need another platform telling me I can play them again?

    So, yeah, GOG is laying out their case for why game preservation matters. They think it’s central to their business model. I guess if you’re into that kind of thing, it could be interesting. But, overall, it feels kinda slow and boring to me. Just another day in the world of gaming, I suppose.

    #GamePreservation
    #GOG
    #GamingNews
    #OldGames
    #Nostalgia
  • Why is it so hard for people to grasp the absolute necessity of setting up 301 redirects in an .htaccess file? Honestly, it’s infuriating! We’re in a digital age where every click counts, and yet, so many website owners continue to neglect this vital aspect of web management. Why? Because they’re either too lazy to learn or they just don’t care about preserving their ranking authority!

    Let’s get one thing straight: if you think you can just change URLs and your content magically stays relevant, you’re living in a fantasy world! When you fail to implement 301 redirects properly, you’re not just risking your SEO; you’re throwing away all the hard work you’ve put into building your online presence. It’s like setting fire to a pile of money because you couldn’t be bothered to use a fire extinguisher. Ridiculous!

    The process of adding 301 redirects in .htaccess files is straightforward. It’s not rocket science, people! You have two methods at your disposal, and yet countless websites are still losing traffic and authority daily because their owners can’t figure it out. You would think that in a realm where every detail matters, folks would prioritize understanding how to maintain their site’s integrity. But no! Instead, they leave vulnerable sites, confused visitors, and plunging search rankings in their wake.

    If you’re still scratching your head over how to set up 301 redirects in an .htaccess file, wake up! The first method is simply to use the `RedirectPermanent` directive. It’s right there for you, and it’s as easy as pie. You just need to specify the old URL and the new URL, and boom! You’re done. Or, if you’re feeling fancy, the second method involves using the `RewriteRule` directive. Again, it’s not complicated! Just a few lines of code, and you’re on your way to preserving that precious ranking authority.
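    And just to show how little code this takes, here is a minimal sketch of both methods; the paths and domain are placeholders to swap for your own, and the second method assumes mod_rewrite is enabled on your server:

```apache
# Method 1: mod_alias -- one directive, old path then new URL
RedirectPermanent /old-page.html https://www.example.com/new-page/

# Method 2: mod_rewrite -- pattern matching, with an explicit 301 status
RewriteEngine On
RewriteRule ^old-page\.html$ https://www.example.com/new-page/ [R=301,L]
```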

    What’s more infuriating is when people rush into updating their websites without even considering the fallout of their actions. Do you think Google is going to give you a free pass for being reckless? No! It will punish you for not taking the necessary precautions. Imagine losing all that traffic you worked so hard to get, just because you couldn’t be bothered to set up a simple redirect. Pathetic!

    Let’s not even begin to talk about the customer experience. When users click on a link and end up on a 404 error page because you didn’t implement a 301 redirect, that’s a surefire way to lose their trust and business. Do you really want to be known as the website that provides a dead-end for visitors? Absolutely not! So, for the love of all that is holy in the digital world, get your act together and learn how to set up those redirects!

    In conclusion, if you’re still ignoring the importance of 301 redirects in your .htaccess file, you’re not just being negligent; you’re actively sabotaging your own success. Stop making excuses, roll up your sleeves, and do what needs to be done. Your website deserves better!

    #301Redirects #SEO #WebManagement #DigitalMarketing #htaccess
  • EasyDMARC Integrates With Pax8 Marketplace To Simplify Email Security For MSPs

    Originally published at EasyDMARC Integrates With Pax8 Marketplace To Simplify Email Security For MSPs by Anush Yolyan.

    The integration will deliver simple, accessible, and streamlined email security for vulnerable inboxes

    Global, 4 November 2024 – US-based email security firm EasyDMARC has today announced its integration with Pax8 Marketplace, the leading cloud commerce marketplace. As one of the first DMARC solution providers on the Pax8 Marketplace, EasyDMARC is expanding its mission to protect inboxes from the rising threat of phishing attacks with a rigorous, user-friendly DMARC solution.

    The integration comes as Google highlights the impressive results of recently implemented email authentication measures for bulk senders: a 65% reduction in unauthenticated messages to Gmail users, a 50% increase in bulk senders following best security practices, and 265 billion fewer unauthenticated messages sent in 2024. With email being such a crucial communication channel for businesses, email authentication measures are an essential part of any business’s cybersecurity offering. 

    Key features of the integration include:

    Centralized billing

    With centralized billing, customers can now streamline their cloud services under a single pane of glass, simplifying the management and billing of their EasyDMARC solution. This consolidated approach enables partners to reduce administrative complexity and manage all cloud expenses through one interface, providing a seamless billing and support experience.

    Automated provisioning 

    Through automated provisioning, Pax8’s automation capabilities make deploying DMARC across client accounts quick and hassle-free. By eliminating manual configurations, this integration ensures that customers can implement email security solutions rapidly, allowing them to safeguard client inboxes without delay.

    Bundled offerings

    The bundled offerings available through Pax8 allow partners to enhance their service portfolios by combining EasyDMARC with complementary security solutions. By creating all-in-one security packages, partners can offer their clients more robust protection, addressing a broader range of security needs from a single, trusted platform.

    Gerasim Hovhannisyan, Co-Founder and CEO of EasyDMARC, said:

    “We’re thrilled to be working with Pax8 to provide MSPs with a streamlined, effective way to deliver top-tier email security to their clients, all within a platform that equips them with everything needed to stay secure. As phishing attacks grow in frequency and sophistication, businesses can no longer afford to overlook the importance of email security. Email authentication is a vital defense against the evolving threat of phishing and is crucial in preserving the integrity of email communication. This integration is designed to allow businesses of all sizes to benefit from DMARC’s extensive capabilities.”

    Ryan Burton, Vice President of Marketplace Vendor Strategy at Pax8, said:

    “We’re delighted to welcome EasyDMARC to the Pax8 Marketplace as an enterprise-class DMARC solution provider. This integration gives MSPs the tools they need to meet the growing demand for email security, with simplified deployment, billing, and bundling benefits. With EasyDMARC’s technical capabilities and intelligence, MSPs can deliver robust protection against phishing threats without the technical hassle that often holds businesses back.”

    About EasyDMARC

    EasyDMARC is a cloud-native B2B SaaS solution that addresses email security and deliverability problems with just a few clicks. For Managed Service Providers seeking to increase their revenue, EasyDMARC presents an ideal solution. The email authentication platform streamlines domain management, providing capabilities such as organizational control, domain grouping, and access management.

    Additionally, EasyDMARC offers a comprehensive sales and marketing enablement program designed to boost DMARC sales. All of these features are available for MSPs on a scalable platform with a flexible pay-as-you-go pricing model.

    For more information on EasyDMARC, visit: https://easydmarc.com/

    About Pax8 

    Pax8 is the technology marketplace of the future, linking partners, vendors, and small to midsized businesses (SMBs) through AI-powered insights and comprehensive product support. With a global partner ecosystem of over 38,000 managed service providers, Pax8 empowers SMBs worldwide by providing software and services that unlock their growth potential and enhance their security. Committed to innovating cloud commerce at scale, Pax8 drives customer acquisition and solution consumption across its entire ecosystem.

    Find out more: https://www.pax8.com/en-us/

    #easydmarc #integrates #with #pax8 #marketplace
  • OThink-R1: A Dual-Mode Reasoning Framework to Cut Redundant Computation in LLMs

    The Inefficiency of Static Chain-of-Thought Reasoning in LRMs
    Recent large reasoning models (LRMs) achieve top performance by using detailed chain-of-thought (CoT) reasoning to solve complex tasks. However, many of the simple tasks they handle could be solved by smaller models with fewer tokens, making such elaborate reasoning unnecessary. This echoes human thinking, where we use fast, intuitive responses for easy problems and slower, analytical thinking for complex ones. While LRMs mimic slow, logical reasoning, they generate significantly longer outputs, thereby increasing computational cost. Current methods for reducing reasoning steps lack flexibility, limiting models to a single fixed reasoning style. There is a growing need for adaptive reasoning that adjusts effort according to task difficulty.
    Limitations of Existing Training-Based and Training-Free Approaches
    Recent research on improving reasoning efficiency in LRMs can be categorized into two main areas: training-based and training-free methods. Training strategies often use reinforcement learning or fine-tuning to limit token usage or adjust reasoning depth, but they tend to follow fixed patterns without flexibility. Training-free approaches utilize prompt engineering or pattern detection to shorten outputs during inference; however, they also lack adaptability. More recent work focuses on variable-length reasoning, where models adjust reasoning depth based on task complexity. Others study “overthinking,” where models over-reason unnecessarily. However, few methods enable dynamic switching between quick and thorough reasoning—something this paper addresses directly. 
    Introducing OThink-R1: Dynamic Fast/Slow Reasoning Framework
    Researchers from Zhejiang University and OPPO have developed OThink-R1, a new approach that enables LRMs to switch intelligently between fast and slow thinking, much like humans do. By analyzing reasoning patterns, they identified which steps are essential and which are redundant. With help from another model acting as a judge, they trained LRMs to adapt their reasoning style based on task complexity. Their method reduces unnecessary reasoning by over 23% without losing accuracy. Using a dual-reference loss function and curated fine-tuning datasets, OThink-R1 outperforms previous models in both efficiency and performance on various math and question-answering tasks.
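    The paper's full pipeline isn't reproduced in this summary, but the judge-and-prune step can be sketched roughly as follows. `generate_cot` and `is_redundant` are hypothetical helpers standing in for the reasoning model and the judge, not the authors' actual API:

```python
# Rough sketch of judge-based pruning for building the fine-tuning dataset.
# The helper methods are assumptions for illustration only.

def build_pruned_dataset(problems, lrm, judge):
    dataset = []
    for problem in problems:
        steps = lrm.generate_cot(problem)  # full slow-thinking trace
        kept = [s for s in steps if not judge.is_redundant(problem, s)]
        if len(kept) <= 1:
            # Judge found almost everything redundant: train toward a
            # fast, answer-only response for this problem.
            target = steps[-1]
        else:
            # Otherwise keep the pruned but still-detailed reasoning.
            target = "\n".join(kept)
        dataset.append({"input": problem, "target": target})
    return dataset
```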
    System Architecture: Reasoning Pruning and Dual-Reference Optimization
    The OThink-R1 framework helps LRMs dynamically switch between fast and slow thinking. First, it identifies when LRMs include unnecessary reasoning, like overexplaining or double-checking, versus when detailed steps are truly essential. Using this, it builds a curated training dataset by pruning redundant reasoning and retaining valuable logic. Then, during fine-tuning, a special loss function balances both reasoning styles. This dual-reference loss compares the model’s outputs with both fast and slow thinking variants, encouraging flexibility. As a result, OThink-R1 can adaptively choose the most efficient reasoning path for each problem while preserving accuracy and logical depth. 
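    The exact formulation isn't given here, but a dual-reference KL objective of this kind might look like the following PyTorch sketch; the weighting coefficients and reduction choices are assumptions, not the paper's published loss:

```python
import torch.nn.functional as F

def dual_reference_loss(student_logits, fast_ref_logits, slow_ref_logits,
                        labels, beta_fast=0.5, beta_slow=0.5):
    # Standard next-token cross-entropy on the curated target.
    vocab = student_logits.size(-1)
    ce = F.cross_entropy(student_logits.view(-1, vocab),
                         labels.view(-1), ignore_index=-100)
    # KL terms pulling the model toward both reference distributions:
    # a fast (answer-only) variant and a slow (full-reasoning) variant.
    log_p = F.log_softmax(student_logits, dim=-1)
    kl_fast = F.kl_div(log_p, F.softmax(fast_ref_logits, dim=-1),
                       reduction="batchmean")
    kl_slow = F.kl_div(log_p, F.softmax(slow_ref_logits, dim=-1),
                       reduction="batchmean")
    return ce + beta_fast * kl_fast + beta_slow * kl_slow
```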

    Empirical Evaluation and Comparative Performance
    The OThink-R1 model was tested on simpler QA and math tasks to evaluate its ability to switch between fast and slow reasoning. Using datasets like OpenBookQA, CommonsenseQA, ASDIV, and GSM8K, the model demonstrated strong performance, generating fewer tokens while maintaining or improving accuracy. Compared to baselines such as NoThinking and DualFormer, OThink-R1 struck a better balance between efficiency and effectiveness. Ablation studies confirmed the importance of pruning, KL constraints, and LLM-Judge in achieving optimal results. A case study illustrated that unnecessary reasoning can lead to overthinking and reduced accuracy, highlighting OThink-R1’s strength in adaptive reasoning. 

    Conclusion: Towards Scalable and Efficient Hybrid Reasoning Systems
    In conclusion, OThink-R1 is a large reasoning model that adaptively switches between fast and slow thinking modes to improve both efficiency and performance. It addresses the issue of unnecessarily complex reasoning in large models by analyzing and classifying reasoning steps as either essential or redundant. By pruning the redundant ones while maintaining logical accuracy, OThink-R1 reduces unnecessary computation. It also introduces a dual-reference KL-divergence loss to strengthen hybrid reasoning. Tested on math and QA tasks, it cuts down reasoning redundancy by 23% without sacrificing accuracy, showing promise for building more adaptive, scalable, and efficient AI reasoning systems in the future. 

    Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project.
    #othinkr1 #dualmode #reasoning #framework #cut
    OThink-R1: A Dual-Mode Reasoning Framework to Cut Redundant Computation in LLMs
    The Inefficiency of Static Chain-of-Thought Reasoning in LRMs Recent LRMs achieve top performance by using detailed CoT reasoning to solve complex tasks. However, many simple tasks they handle could be solved by smaller models with fewer tokens, making such elaborate reasoning unnecessary. This echoes human thinking, where we use fast, intuitive responses for easy problems and slower, analytical thinking for complex ones. While LRMs mimic slow, logical reasoning, they generate significantly longer outputs, thereby increasing computational cost. Current methods for reducing reasoning steps lack flexibility, limiting models to a single fixed reasoning style. There is a growing need for adaptive reasoning that adjusts effort according to task difficulty.  Limitations of Existing Training-Based and Training-Free Approaches Recent research on improving reasoning efficiency in LRMs can be categorized into two main areas: training-based and training-free methods. Training strategies often use reinforcement learning or fine-tuning to limit token usage or adjust reasoning depth, but they tend to follow fixed patterns without flexibility. Training-free approaches utilize prompt engineering or pattern detection to shorten outputs during inference; however, they also lack adaptability. More recent work focuses on variable-length reasoning, where models adjust reasoning depth based on task complexity. Others study “overthinking,” where models over-reason unnecessarily. However, few methods enable dynamic switching between quick and thorough reasoning—something this paper addresses directly.  Introducing OThink-R1: Dynamic Fast/Slow Reasoning Framework Researchers from Zhejiang University and OPPO have developed OThink-R1, a new approach that enables LRMs to switch between fast and slow thinking smartly, much like humans do. By analyzing reasoning patterns, they identified which steps are essential and which are redundant. With help from another model acting as a judge, they trained LRMs to adapt their reasoning style based on task complexity. Their method reduces unnecessary reasoning by over 23% without losing accuracy. Using a loss function and fine-tuned datasets, OThink-R1 outperforms previous models in both efficiency and performance on various math and question-answering tasks.  System Architecture: Reasoning Pruning and Dual-Reference Optimization The OThink-R1 framework helps LRMs dynamically switch between fast and slow thinking. First, it identifies when LRMs include unnecessary reasoning, like overexplaining or double-checking, versus when detailed steps are truly essential. Using this, it builds a curated training dataset by pruning redundant reasoning and retaining valuable logic. Then, during fine-tuning, a special loss function balances both reasoning styles. This dual-reference loss compares the model’s outputs with both fast and slow thinking variants, encouraging flexibility. As a result, OThink-R1 can adaptively choose the most efficient reasoning path for each problem while preserving accuracy and logical depth.  Empirical Evaluation and Comparative Performance The OThink-R1 model was tested on simpler QA and math tasks to evaluate its ability to switch between fast and slow reasoning. Using datasets like OpenBookQA, CommonsenseQA, ASDIV, and GSM8K, the model demonstrated strong performance, generating fewer tokens while maintaining or improving accuracy. 

    Empirical Evaluation and Comparative Performance
    OThink-R1 was tested on simpler QA and math tasks to evaluate its ability to switch between fast and slow reasoning. On datasets such as OpenBookQA, CommonsenseQA, ASDIV, and GSM8K, the model generated fewer tokens while maintaining or improving accuracy. Compared with baselines such as NoThinking and DualFormer, OThink-R1 struck a better balance between efficiency and effectiveness. Ablation studies confirmed that the pruning step, the KL constraints, and the LLM judge each matter for the final results, and a case study illustrated how unnecessary reasoning can cause overthinking and reduce accuracy, highlighting OThink-R1’s strength in adaptive reasoning.

    Conclusion: Towards Scalable and Efficient Hybrid Reasoning Systems
    OThink-R1 is a framework that lets large reasoning models adaptively switch between fast and slow thinking modes to improve both efficiency and performance. It addresses unnecessarily complex reasoning in large models by classifying reasoning steps as essential or redundant, pruning the redundant ones while maintaining logical accuracy, and introducing a dual-reference KL-divergence loss to strengthen hybrid reasoning. Tested on math and QA tasks, it cuts reasoning redundancy by 23% without sacrificing accuracy, showing promise for more adaptive, scalable, and efficient AI reasoning systems.
  • Graduate Student Develops an A.I.-Based Approach to Restore Time-Damaged Artwork to Its Former Glory

    The method could help bring countless old paintings, currently stored in the back rooms of galleries with limited conservation budgets, to light

    Scans of the painting retouched with a new technique during various stages in the process. On the right is the restored painting with the applied laminate mask.
    Courtesy of the researchers via MIT

    In a contest for jobs requiring the most patience, art restoration might take first place. Traditionally, conservators restore paintings by recreating the artwork’s exact colors to fill in the damage, one spot at a time. Even with the help of X-ray imaging and pigment analyses, several parts of the expensive process, such as the cleaning and retouching, are done by hand, as noted by Artnet’s Jo Lawson-Tancred.
    Now, a mechanical engineering graduate student at MIT has developed an artificial intelligence-based approach that can achieve a faithful restoration in just hours—instead of months of work.
    In a paper published Wednesday in the journal Nature, Alex Kachkine describes a new method that applies digital restorations to paintings by placing a thin film on top. If the approach becomes widespread, it could make art restoration more accessible and help bring countless damaged paintings, currently stored in the back rooms of galleries with limited conservation budgets, back to light.
    The new technique “is a restoration process that saves a lot of time and money, while also being reversible, which some people feel is really important to preserving the underlying character of a piece,” Kachkine tells Nature’s Amanda Heidt.

    Video: Meet the engineer who invented an AI-powered way to restore art

    While filling in damaged areas of a painting would seem like a logical solution to many people, direct retouching raises ethical concerns for modern conservators. That’s because an artwork’s damage is part of its history, and retouching might detract from the painter’s original vision. “For example, instead of removing flaking paint and retouching the painting, a conservator might try to fix the loose paint particles to their original places,” writes Hartmut Kutzke, a chemist at the University of Oslo’s Museum of Cultural History, for Nature News and Views. If retouching is absolutely necessary, he adds, it should be reversible.
    As such, some institutions have started restoring artwork virtually and presenting the restoration next to the untouched, physical version. Many art lovers might argue, however, that a digital restoration printed out or displayed on a screen doesn’t quite compare to seeing the original painting in its full glory.
    That’s where Kachkine, who is also an art collector and amateur conservator, comes in. The MIT student has developed a way to apply digital restorations onto a damaged painting. In short, the approach involves using pre-existing A.I. tools to create a digital version of what the freshly painted artwork would have looked like. Based on this reconstruction, Kachkine’s new software assembles a map of the retouches, and their exact colors, necessary to fill the gaps present in the painting today.
    The map is then printed onto two layers of thin, transparent polymer film—one with colored retouches and one with the same pattern in white—that attach to the painting with conventional varnish. This “mask” aligns the retouches with the gaps while leaving the rest of the artwork visible.
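
    As a rough illustration of the mapping step described above, the sketch below (not Kachkine’s released code) shows how, given an aligned scan of the damaged painting and an AI-generated digital restoration, one could compute a damage mask, a colored retouch layer, and the matching white backing layer to be printed on the two films. The function name and the difference threshold are hypothetical choices for the sketch, not parameters from the study.

    import numpy as np

    def build_retouch_layers(damaged, restored, threshold=30):
        """damaged, restored: aligned HxWx3 uint8 scans of the same painting.
        Returns a boolean damage mask plus RGBA retouch and white-backing
        layers that stay transparent wherever the original paint should
        remain visible. The threshold is an arbitrary placeholder."""
        # Mark pixels where the damaged scan departs from the reconstruction.
        diff = np.abs(damaged.astype(int) - restored.astype(int)).sum(axis=-1)
        mask = diff > threshold

        # Colored ink layer: restored colors over damaged regions only.
        retouch = np.zeros((*mask.shape, 4), dtype=np.uint8)
        retouch[mask, :3] = restored[mask]
        retouch[mask, 3] = 255

        # White layer with the same pattern, so the color ink is backed by
        # white rather than by the darkened original surface.
        white_backing = np.zeros_like(retouch)
        white_backing[mask] = (255, 255, 255, 255)
        return mask, retouch, white_backing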
    “In order to fully reproduce color, you need both white and color ink to get the full spectrum,” Kachkine explains in an MIT statement. “If those two layers are misaligned, that’s very easy to see. So, I also developed a few computational tools, based on what we know of human color perception, to determine how small of a region we can practically align and restore.”
    The method’s magic lies in the fact that the mask is removable, and the digital file provides a record of the modifications for future conservators to study.
    Kachkine demonstrated the approach on a 15th-century oil painting in dire need of restoration, by a Dutch artist whose name is now unknown. The retouches were generated by matching the surrounding color, replicating similar patterns visible elsewhere in the painting or copying the artist’s style in other paintings, per Nature News and Views. Overall, the painting’s 5,612 damaged regions were filled with 57,314 different colors in 3.5 hours, roughly 66 times faster than traditional methods would likely have taken, implying on the order of 230 hours of conventional hand retouching.

    Video: Overview of Physically-Applied Digital Restoration

    “It followed years of effort to try to get the method working,” Kachkine tells the Guardian’s Ian Sample. “There was a fair bit of relief that finally this method was able to reconstruct and stitch together the surviving parts of the painting.”
    The new process still raises ethical questions, such as whether the applied film disrupts the viewing experience and whether the A.I.-generated corrections to the painting are accurate. Additionally, Kutzke writes for Nature News and Views that the varnish’s effect on the painting should be studied more deeply.
    Still, Kachkine says this technique could help address the large number of damaged artworks that live in storage rooms. “This approach grants greatly increased foresight and flexibility to conservators,” per the study, “enabling the restoration of countless damaged paintings deemed unworthy of high conservation budgets.”
