• EasyDMARC Integrates With Pax8 Marketplace To Simplify Email Security For MSPs

    Originally published at EasyDMARC Integrates With Pax8 Marketplace To Simplify Email Security For MSPs by Anush Yolyan.

    The integration will deliver simple, accessible, and streamlined email security for vulnerable inboxes

    Global, 4 November 2024 – US-based email security firm EasyDMARC has today announced its integration with Pax8 Marketplace, the leading cloud commerce marketplace. As one of the first DMARC solution providers on the Pax8 Marketplace, EasyDMARC is expanding its mission to protect inboxes from the rising threat of phishing attacks with a rigorous, user-friendly DMARC solution.

    The integration comes as Google highlights the impressive results of recently implemented email authentication measures for bulk senders: a 65% reduction in unauthenticated messages to Gmail users, a 50% increase in bulk senders following best security practices, and 265 billion fewer unauthenticated messages sent in 2024. With email being such a crucial communication channel for businesses, email authentication measures are an essential part of any business’s cybersecurity offering. 
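For context on what a DMARC deployment actually involves: a domain's DMARC policy is published as a DNS TXT record at `_dmarc.<domain>`, as a list of `tag=value` pairs. As a rough sketch (the record contents and the reporting address below are illustrative placeholders, not EasyDMARC output), such a record can be parsed like this:

```python
# Minimal DMARC TXT record parser. The record string below is a
# placeholder illustrating the tag=value format defined for DMARC;
# real records are served from DNS at _dmarc.<domain>.

def parse_dmarc(record: str) -> dict:
    """Split a DMARC TXT record into its tag=value pairs."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

# Example: a policy that quarantines unauthenticated mail and sends
# aggregate reports to a hypothetical reporting address.
record = "v=DMARC1; p=quarantine; pct=100; rua=mailto:reports@example.com"
policy = parse_dmarc(record)
print(policy["p"])  # quarantine
```

The `p` tag is the enforcement policy (`none`, `quarantine`, or `reject`); moving a domain safely from `none` to `reject` while monitoring aggregate reports is the core of the service a DMARC provider manages.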

    Key features of the integration include:

    Centralized billing

    With centralized billing, customers can now streamline their cloud services under a single pane of glass, simplifying the management and billing of their EasyDMARC solution. This consolidated approach enables partners to reduce administrative complexity and manage all cloud expenses through one interface, providing a seamless billing and support experience.

    Automated provisioning 

    Through automated provisioning, Pax8’s automation capabilities make deploying DMARC across client accounts quick and hassle-free. By eliminating manual configurations, this integration ensures that customers can implement email security solutions rapidly, allowing them to safeguard client inboxes without delay.

    Bundled offerings

    The bundled offerings available through Pax8 allow partners to enhance their service portfolios by combining EasyDMARC with complementary security solutions. By creating all-in-one security packages, partners can offer their clients more robust protection, addressing a broader range of security needs from a single, trusted platform.

    Gerasim Hovhannisyan, Co-Founder and CEO of EasyDMARC, said:

    “We’re thrilled to be working with Pax8 to provide MSPs with a streamlined, effective way to deliver top-tier email security to their clients, all within a platform that equips them with everything needed to stay secure. As phishing attacks grow in frequency and sophistication, businesses can no longer afford to overlook the importance of email security. Email authentication is a vital defense against the evolving threat of phishing and is crucial in preserving the integrity of email communication. This integration is designed to allow businesses of all sizes to benefit from DMARC’s extensive capabilities.”

    Ryan Burton, Vice President of Marketplace Vendor Strategy at Pax8, said:

    “We’re delighted to welcome EasyDMARC to the Pax8 Marketplace as an enterprise-class DMARC solution provider. This integration gives MSPs the tools they need to meet the growing demand for email security, with simplified deployment, billing, and bundling benefits. With EasyDMARC’s technical capabilities and intelligence, MSPs can deliver robust protection against phishing threats without the technical hassle that often holds businesses back.”

    About EasyDMARC

    EasyDMARC is a cloud-native B2B SaaS solution that addresses email security and deliverability problems with just a few clicks. For Managed Service Providers seeking to increase their revenue, EasyDMARC presents an ideal solution. The email authentication platform streamlines domain management, providing capabilities such as organizational control, domain grouping, and access management.

    Additionally, EasyDMARC offers a comprehensive sales and marketing enablement program designed to boost DMARC sales. All of these features are available for MSPs on a scalable platform with a flexible pay-as-you-go pricing model.

    For more information on EasyDMARC, visit: https://easydmarc.com/

    About Pax8 

    Pax8 is the technology marketplace of the future, linking partners, vendors, and small to midsized businesses (SMBs) through AI-powered insights and comprehensive product support. With a global partner ecosystem of over 38,000 managed service providers, Pax8 empowers SMBs worldwide by providing software and services that unlock their growth potential and enhance their security. Committed to innovating cloud commerce at scale, Pax8 drives customer acquisition and solution consumption across its entire ecosystem.

    Find out more: https://www.pax8.com/en-us/

    The post EasyDMARC Integrates With Pax8 Marketplace To Simplify Email Security For MSPs appeared first on EasyDMARC.
  • Millennium Systems International: Revenue Enablement

    Millennium Systems International is an exciting and dynamic software company based in Parsippany, NJ. Founded in 1987, it provides the beauty and wellness industry with forward-thinking, powerful management software and vital tools. We’ve built a company based on revolutionary technology, outstanding support, and, more importantly, a strong passion for educating salon and spa owners on how to sustain success. Our software is used in thousands of salons and spas in over 36 countries, processes billions of dollars in transactions per year, and serves hundreds of thousands of users. Millennium Systems International is honored to have been named one of New Jersey's Top Workplaces!

    We are currently searching for a Revenue Enablement professional to train and coach our sales professionals and drive team success. We are seeking an enthusiastic, high-energy professional who is passionate about the beauty/salon space. This is a remote role.

    Key Responsibilities:

    • Develop and deliver training materials and resources, including presentations, e-learning modules, and manuals.
    • Tailor training programs to meet the specific needs of different sales teams, ensuring relevance to products, services, and market trends.
    • Continuously update training content to reflect changes in the product portfolio, sales strategies, and market conditions.
    • Teach core sales techniques such as prospecting, lead qualification, negotiation, closing strategies, and upselling, focusing on key areas like objection handling, overcoming customer resistance, and relationship-building.
    • Onboard new sales team members by providing them with a comprehensive understanding of company products, sales processes, and culture.
    • Facilitate product and sales training sessions to ensure new hires are equipped to perform in their roles from day one.
    • Monitor the performance of sales representatives before and after training to assess the effectiveness of the programs.
    • Conduct regular performance reviews and provide ongoing coaching to individual team members to ensure continued development and goal achievement.
    • Partner with sales managers and senior leaders to understand team performance gaps and emerging needs.
    • Offer solutions and recommendations for improving sales performance through customized training initiatives.
    • Train sales teams on the use of CRM systems, sales tools, and other relevant software to improve efficiency and tracking.
    • Ensure the team is proficient in using digital tools to track progress, manage pipelines, and close sales.
    • Track and measure the success of training programs using metrics such as sales performance improvements, knowledge retention rates, and employee engagement, and suggest adjustments for continuous improvement.
    • Stay up to date with the latest trends in sales strategies, tools, and technologies.

    Qualifications:

    • Bachelor’s degree in Business, Marketing, Communications, or a related field (or equivalent experience).
    • 3+ years of proven experience in sales, with a solid understanding of sales processes and techniques.
    • Previous experience in a sales training, enablement, or coaching role is preferred.

    Skills:

    • Strong presentation, communication, and interpersonal skills.
    • Ability to design engaging and impactful training programs.
    • Knowledge of CRM software and sales automation tools.
    • Analytical skills to assess performance and adjust training methods accordingly.
    • Organizational skills and attention to detail.
    • Motivational and coaching skills to help salespeople improve their performance.
    • Passion for helping others succeed and develop their careers.
    • Creative thinking with a solutions-oriented mindset.

    Millennium Systems International is committed to providing all Team Members with competitive wages and salaries that are motivational, fair, and equitable. Our compensation program reflects our core values of Teamwork, Excellence, and Integrity, ensuring transparency and fairness while attracting top talent and fostering an environment that encourages growth and retention. We believe that every Team Member is integral to our collective success, and we value the diverse perspectives, creativity, and innovation they bring. Our compensation packages are designed to reflect individual contributions, taking into account skill set, experience, certifications, and work location. In line with our Client-Centric philosophy, we recognize that the success of our Team Members contributes directly to the success of our clients. As such, we offer compensation packages that not only motivate but also reward performance and excellence.

    The base salary range for this position in the United States is $54,000-$70,000. In addition to base pay, the total compensation package may also include commission, performance bonuses, benefits, and/or other applicable incentive compensation plans. At Millennium Systems International, we approach every challenge with Passion, striving to exceed expectations, solve challenges with urgency and determination, and create an environment where Team Members thrive and celebrate each other’s successes.

    We Offer:

    Paid Time Off (PTO) and Holidays: Enjoy a generous 3 weeks of Paid Time Off (PTO) that begins accruing with every pay period from your very first day! Plus, you’ll enjoy ten (10) paid holidays throughout 2025, along with five (5) paid sick days and one (1) personal day, because we believe in taking care of you!

    Medical, Dental, and Vision Benefits: Your well-being is a priority! We offer subsidized Medical, Dental, and Vision plans, with coverage kicking in quickly. It's all about making sure you stay healthy, happy, and well-cared for.

    Life Insurance: Peace of mind for you and your loved ones! We provide Life Insurance and Accidental Death & Dismemberment (AD&D). What’s even better? Millennium Systems International fully covers the entire cost: 100% on us!

    Long-Term and Short-Term Disability Insurance: Stay secure no matter what life brings your way. We provide Short-Term and Long-Term Disability insurance, and we’ve got your back: Millennium Systems International covers the full cost of Long-Term Disability at 100%.

    401(k) Retirement Plan: Plan for your future with confidence! You’ll be eligible to enroll in our robust 401(k) plan. When you do, you’ll enjoy a 100% match on up to 4% of your contributions, thanks to our Safe Harbor plan. It’s our way of helping you build a brighter tomorrow.

    Learning & Development Opportunities: We foster a culture of growth and professional excellence. As part of our benefits, we offer unlimited access to Udemy’s online courses, helping you refine your skills, explore new areas, and advance your career. Whether you're deepening your expertise or learning new technologies, we’re here to support your development every step of the way.
    #millennium #systems #international #revenue #enablement
    Millennium Systems International: Revenue Enablement
    Millennium Systems International is an exciting and dynamic software company based inParsippany, NJ and was founded in 1987 to provide the beauty and wellness industry withforward-thinking, powerful management software and vital tools. We’ve built a companybased on revolutionary technology, outstanding support, and more importantly, a strongpassion to educate salon and spa owners on how to sustain success. Our software isutilized in thousands of salons and spas in over 36 countries, processes billions of dollarsin transactions per year and is used by hundreds of thousands of users. MillenniumSystems International is honored to have been named one of New Jersey's TopWorkplaces!We are currently searching for a Revenue Enablement professional to train and coach oursales professionals to drive team success. Seeking an enthusiastic and high energyprofessional who is passionate about the beauty/salon space!  This is a remote role.Key Responsibilities:• Develop and deliver training materials and resources, including presentations, e-learning modules, and manuals.• Tailor training programs to meet the specific needs of different sales teams,ensuring relevance to products, services, and market trends.• Continuously update training content to reflect changes in the product portfolio,sales strategies, and market conditions.• Teach core sales techniques such as prospecting, lead qualification, negotiation,closing strategies, and upselling.Focus on improving key areas like objection handling, overcoming customerresistance, and relationship-building.• Onboard new sales team members by providing them with a comprehensiveunderstanding of company products, sales processes, and culture.• Facilitate product and sales training sessions to ensure new hires are equipped toperform in their roles from day one.• Monitor the performance of sales representatives before and after training to assessthe effectiveness of the programs.• Conduct regular performance reviews and provide ongoing 
coaching to individualteam members to ensure continued development and goal achievement.• Partner with sales managers and senior leaders to understand team performancegaps and emerging needs.• Offer solutions and recommendations for improving sales performance throughcustomized training initiatives.• Train sales teams on the use of CRM systems, sales tools, and other relevantsoftware to improve efficiency and tracking.• Ensure the team is proficient in using digital tools to track progress, managepipelines, and close sales.• Track and measure the success of training programs, using metrics such as salesperformance improvements, knowledge retention rates, employee engagement andsuggest adjustments for continuous improvement.• Stay up to date with the latest trends in sales strategies, tools, and technologies.Qualifications:• Bachelor’s degree in Business, Marketing, Communications, or a related field.• 3+ years proven experience in sales, with a solid understanding of sales processesand techniques.• Previous experience in a sales training, enablement or coaching role is preferred.Skills:• Strong presentation, communication, and interpersonal skills.• Ability to design engaging and impactful training programs.• Knowledge of CRM software and sales automation tools.• Analytical skills to assess performance and adjust training methods accordingly.• Organizational skills and attention to detail.• Motivational and coaching skills to help salespeople improve their performance.• Passionate about helping others succeed and developing their careers.• Creative thinker with a solutions-oriented mindset.Millennium Systems International is committed to providing all Team Members with competitive wages and salaries that are motivational, fair, and equitable. 
Our compensation program reflects our core values of Teamwork, Excellence, and Integrity, ensuring transparency and fairness while attracting top talent and fostering an environment that encourages growth and retention.We believe that every Team Member is integral to our collective success, and we value the diverse perspectives, creativity, and innovation they bring. Our compensation packages are designed to reflect individual contributions, taking into account skill set, experience, certifications, and work location.In line with our Client-Centric philosophy, we recognize that the success of our Team Members contributes directly to the success of our clients. As such, we offer compensation packages that not only motivate but also reward performance and excellence.The base salary range for this position in the United States is -In addition to base pay, the total compensation package may also include commission, performance bonuses, benefits, and/or other applicable incentive compensation plans.At Millennium Systems International, we approach every challenge with Passion—striving to exceed expectations, solve challenges with urgency and determination, and create an environment where Team Members thrive and celebrate each other’s successes.We Offer:Paid Time Offand Holidays: Enjoy a generous 3 weeks of Paid Time Offthat begins accruing with every pay period from your very first day! Plus, you’ll enjoy tenpaid holidays throughout 2025, along with fivepaid sick days and onepersonal day—because we believe in taking care of you!Medical, Dental, and Vision Benefits: Your well-being is a priority! We offer subsidized Medical, Dental, and Vision plans, with coverage kicking in quickly. It's all about making sure you stay healthy, happy, and well-cared for.Life Insurance: Peace of mind for you and your loved ones! We provide Life Insurance and Accidental Death & Dismemberment. What’s even better? 
Millennium Systems International fully covers the entire cost—100% on us!Long-Term and Short-Term Disability Insurance: Stay secure no matter what life brings your way. Short-Term and Long-Term Disability insurance.  And we’ve got your back—Millennium Systems International covers the full cost of Long-Term Disability at 100%.401Retirement Plan: Plan for your future with confidence! You’ll be eligible to enroll in our robust 401plan. When you do, you’ll enjoy a 100% match on up to 4% of your contributions, thanks to our Safe Harbor plan. It’s our way of helping you build a brighter tomorrow.Learning & Development Opportunities: We foster a culture of growth and professional excellence. As part of our benefits, we offer unlimited access to Udemy’s online courses, helping you refine your skills, explore new areas, and advance your career. Whether you're deepening your expertise or learning new technologies, we’re here to support your development every step of the way.  Apply NowLet's start your dream job Apply now Meet JobCopilot: Your Personal AI Job HunterAutomatically Apply to Remote Sales and Marketing JobsJust set your preferences and Job Copilot will do the rest-finding, filtering, and applying while you focus on what matters. Activate JobCopilot #millennium #systems #international #revenue #enablement
    WEWORKREMOTELY.COM
    Millennium Systems International: Revenue Enablement
    Millennium Systems International is an exciting and dynamic software company based inParsippany, NJ and was founded in 1987 to provide the beauty and wellness industry withforward-thinking, powerful management software and vital tools. We’ve built a companybased on revolutionary technology, outstanding support, and more importantly, a strongpassion to educate salon and spa owners on how to sustain success. Our software isutilized in thousands of salons and spas in over 36 countries, processes billions of dollarsin transactions per year and is used by hundreds of thousands of users. MillenniumSystems International is honored to have been named one of New Jersey's TopWorkplaces!We are currently searching for a Revenue Enablement professional to train and coach oursales professionals to drive team success. Seeking an enthusiastic and high energyprofessional who is passionate about the beauty/salon space!  This is a remote role.Key Responsibilities:• Develop and deliver training materials and resources, including presentations, e-learning modules, and manuals.• Tailor training programs to meet the specific needs of different sales teams,ensuring relevance to products, services, and market trends.• Continuously update training content to reflect changes in the product portfolio,sales strategies, and market conditions.• Teach core sales techniques such as prospecting, lead qualification, negotiation,closing strategies, and upselling.Focus on improving key areas like objection handling, overcoming customerresistance, and relationship-building.• Onboard new sales team members by providing them with a comprehensiveunderstanding of company products, sales processes, and culture.• Facilitate product and sales training sessions to ensure new hires are equipped toperform in their roles from day one.• Monitor the performance of sales representatives before and after training to assessthe effectiveness of the programs.• Conduct regular performance reviews and provide ongoing 
coaching to individualteam members to ensure continued development and goal achievement.• Partner with sales managers and senior leaders to understand team performancegaps and emerging needs.• Offer solutions and recommendations for improving sales performance throughcustomized training initiatives.• Train sales teams on the use of CRM systems, sales tools, and other relevantsoftware to improve efficiency and tracking.• Ensure the team is proficient in using digital tools to track progress, managepipelines, and close sales.• Track and measure the success of training programs, using metrics such as salesperformance improvements, knowledge retention rates, employee engagement andsuggest adjustments for continuous improvement.• Stay up to date with the latest trends in sales strategies, tools, and technologies.Qualifications:• Bachelor’s degree in Business, Marketing, Communications, or a related field (orequivalent experience).• 3+ years proven experience in sales, with a solid understanding of sales processesand techniques.• Previous experience in a sales training, enablement or coaching role is preferred.Skills:• Strong presentation, communication, and interpersonal skills.• Ability to design engaging and impactful training programs.• Knowledge of CRM software and sales automation tools.• Analytical skills to assess performance and adjust training methods accordingly.• Organizational skills and attention to detail.• Motivational and coaching skills to help salespeople improve their performance.• Passionate about helping others succeed and developing their careers.• Creative thinker with a solutions-oriented mindset.Millennium Systems International is committed to providing all Team Members with competitive wages and salaries that are motivational, fair, and equitable. 
    Our compensation program reflects our core values of Teamwork, Excellence, and Integrity, ensuring transparency and fairness while attracting top talent and fostering an environment that encourages growth and retention. We believe that every Team Member is integral to our collective success, and we value the diverse perspectives, creativity, and innovation they bring. Our compensation packages are designed to reflect individual contributions, taking into account skill set, experience, certifications, and work location.

    In line with our Client-Centric philosophy, we recognize that the success of our Team Members contributes directly to the success of our clients. As such, we offer compensation packages that not only motivate but also reward performance and excellence. The base salary range for this position in the United States is $54,000–$70,000. In addition to base pay, the total compensation package may also include commission, performance bonuses, benefits, and/or other applicable incentive compensation plans.

    At Millennium Systems International, we approach every challenge with Passion—striving to exceed expectations, solve challenges with urgency and determination, and create an environment where Team Members thrive and celebrate each other’s successes.

    We Offer:
    • Paid Time Off (PTO) and Holidays: Enjoy a generous 3 weeks of Paid Time Off (PTO) that begins accruing with every pay period from your very first day! Plus, you’ll enjoy ten (10) paid holidays throughout 2025, along with five (5) paid sick days and one (1) personal day—because we believe in taking care of you!
    • Medical, Dental, and Vision Benefits: Your well-being is a priority! We offer subsidized Medical, Dental, and Vision plans, with coverage kicking in quickly. It's all about making sure you stay healthy, happy, and well-cared for.
    • Life Insurance: Peace of mind for you and your loved ones! We provide Life Insurance and Accidental Death & Dismemberment (AD&D) coverage. What’s even better? Millennium Systems International fully covers the entire cost—100% on us!
    • Long-Term and Short-Term Disability Insurance: Stay secure no matter what life brings your way. We provide Short-Term and Long-Term Disability insurance, and we’ve got your back—Millennium Systems International covers the full cost of Long-Term Disability at 100%.
    • 401(k) Retirement Plan: Plan for your future with confidence! You’ll be eligible to enroll in our robust 401(k) plan. When you do, you’ll enjoy a 100% match on up to 4% of your contributions, thanks to our Safe Harbor plan. It’s our way of helping you build a brighter tomorrow.
    • Learning & Development Opportunities: We foster a culture of growth and professional excellence. As part of our benefits, we offer unlimited access to Udemy’s online courses, helping you refine your skills, explore new areas, and advance your career. Whether you're deepening your expertise or learning new technologies, we’re here to support your development every step of the way.

    Apply now.
  • AI enables shift from enablement to strategic leadership

    CIOs and business leaders know they’re sitting on a goldmine of business data. And while traditional tools such as business intelligence platforms and statistical analysis software can effectively surface insights from the collated data resources, doing so quickly, in real time, and at scale remains an unsolved challenge.

    Enterprise AI, when deployed responsibly and at scale, can turn these bottlenecks into opportunities. Acting quickly on data, even ‘live’ (during a customer interaction, for example), is one of the technology’s abilities, as is scalability: AI can process large amounts of information from disparate sources almost as easily as it can summarize a one-page spreadsheet.

    But deploying an AI solution in the modern enterprise isn’t simple. It takes structure, trust, and the right talent. Along with the practical implementation challenges, using AI brings its own difficulties, such as data governance, the need to impose guardrails on AI responses and training data, and persistent staffing issues.

    We met with Rani Radhakrishnan, PwC Principal, Technology Managed Services – AI, Data Analytics and Insights, to talk candidly about what’s working — and what’s holding back CIOs in their AI journey. We spoke ahead of her speaking engagement at TechEx AI & Big Data Expo North America, June 4 and 5, at the Santa Clara Convention Center.

    Rani is especially attuned to the governance, data privacy, and sovereignty issues that face enterprises, having spent many years of her career working with numerous clients in the health sector — an area where privacy, data oversight, and above all data accuracy are make-or-break aspects of technology deployments.

    “It’s not enough to just have a prompt engineer or a Python developer. … You still need the human in the loop to curate the right training data sets, review and address any bias in the outputs.” — Rani Radhakrishnan, PwC

    From support to strategy: shifting expectations for AI

    Rani said there’s growing enthusiasm from PwC’s clients for AI-powered managed services that can provide business insights in every sector, and for the technology to be used more proactively in so-called agentic roles, where autonomous AI agents can take action based on interactions with humans, access to data resources, and automation.

    For example, PwC’s agent OS is a modular AI platform that connects systems and scales intelligent agents into workflows, many times faster than traditional computing methods. It’s an example of how PwC responds to the demand for AI from its clients, many of whom see the potential of this new technology but lack the in-house expertise and staff to act on their needs.

    Depending on the sector of the organization, interest in AI can come from many different places in the business: proactive monitoring of physical or digital systems, predictive maintenance in manufacturing or engineering, or cost efficiencies won by automation in complex, customer-facing environments, to name a few.

    But regardless of where AI can bring value, most companies don’t yet have in-house the range of skills and people necessary for effective AI deployment — or at least, deployments that achieve ROI and don’t carry significant risk.

    “It’s not enough to just have a prompt engineer or a Python developer,” Rani said. “You’ve got to put all of these together in a very structured manner, and you still need the human in the loop to curate the right training data sets, review and address any bias in the outputs.”

    Cleaning house: the data challenge behind AI

    Rani says that effective AI implementations need a mix of technical skills — data engineering, data science, prompt engineering — in combination with an organization’s domain expertise. Internal domain expertise can define the right outcomes, while technical staff cover responsible AI practices, like data collation and governance, and confirm that AI systems work responsibly and within company guidelines.

    “In order to get the most value out of AI, an organization has to get the underlying data right,” she said. “I don’t know of a single company that says its data is in great shape … you’ve got to get it into the right structure and normalize it properly so you can query, analyze, and annotate it and identify emerging trends.”

    Part of the work enterprises must put in for effective AI use is watching for and correcting bias — both in the output of AI systems and in the training and operational data itself.

    It’s important that, as part of the underlying architecture of AI systems, teams apply stringent data sanitization, normalization, and annotation processes. The latter requires “a lot of human effort,” Rani said, and the skilled personnel required are among a new breed of data professionals beginning to emerge.

    If the data and personnel challenges can be overcome, the feedback loop makes the possible outcomes from generative AI really valuable, Rani said. “Now you have an opportunity with AI prompts to go back and refine the answer that you get. And that’s what makes it so unique and so valuable because now you’re training the model to answer the questions the way you want them answered.”

    For CIOs, the shift isn’t just about tech enablement. It’s about integrating AI into enterprise architecture, aligning with business strategy, and managing the governance risks that come with scale. CIOs are becoming AI stewards — architecting not just systems, but trust and transformation.

    Conclusion

    It’s only been a few years since AI emerged from its roots in academic computer science research, so it’s understandable that today’s enterprise organizations are, to a certain extent, feeling their way toward realizing AI’s potential.

    But a new playbook is emerging — one that helps CIOs unlock the value held in their data reserves, in business strategy, operational improvement, customer-facing experiences, and a dozen more areas of the business.

    As a company steeped in experience with clients large and small from all over the world, PwC is one of the leading choices that decision-makers turn to when beginning, or rationalizing and directing, their AI journeys.

    Explore how PwC is helping CIOs embed AI into core operations, and see Rani’s latest insights at the June TechEx AI & Big Data Expo North America.
    Source: WWW.ARTIFICIALINTELLIGENCE-NEWS.COM
    (Image source: “Network Rack” by one individual is licensed under CC BY-SA 2.0.)
  • AI and economic pressures reshape tech jobs amid layoffs

    Tech layoffs have continued in 2025. Much of that is being blamed on a combination of a slower economy and the adoption of automation via artificial intelligence.

    Nearly four in 10 Americans, for instance, believe generative AI could diminish the number of available jobs as it advances, according to a study released in October by the New York Federal Reserve Bank.

    And the World Economic Forum’s Jobs Initiative study found that close to half of worker skills will be disrupted in the next five years — and 40% of tasks will be affected by the use of genAI tools and the large language models that underpin them.

    In April, tech occupations across the US economy declined by 214,000 positions as companies shifted toward AI roles and skills-based hiring amid economic uncertainty. Tech sector companies themselves reduced staffing by a net 7,000 positions that month, an analysis of data released by the US Bureau of Labor Statistics showed.

    This year, 137 tech companies have laid off 62,114 tech employees, according to Layoffs.fyi. Efforts to reduce headcount at government agencies by the unofficial US Department of Government Efficiency saw an additional 61,296 federal workers fired this year.

    Kye Mitchell, president of tech workforce staffing firm Experis US, believes the IT employment market is undergoing a fundamental transformation rather than experiencing traditional cyclical layoffs. Although Experis is seeing a 13% month-over-month decline in traditional software developer postings, it doesn’t represent “job destruction, it’s market evolution,” Mitchell said.

    “What we’re witnessing is the emergence of strategic technology orchestrators who harness AI to drive unprecedented business value,” she said.

    For example, organizations that once deployed two scrum teams of ten people to develop high-quality software are now achieving superior results with a single team of five AI-empowered developers.

    “This isn’t about cutting jobs; it’s about elevating roles,” Mitchell said.

    Specialized roles in particular are surging. Database architect positions are up 2,312%, statistician roles have increased 382%, and jobs for mathematicians have increased 1,272%. “These aren’t replacements; they’re vital for an AI-driven future,” she said.

    In fact, it’s an IT talent gap, not an employee surplus, that is now challenging organizations — and will continue to do so.

    With 76% of IT employers already struggling to find skilled tech talent, the market fundamentals favor skilled professionals, according to Mitchell. “The question isn’t whether there will be IT jobs — it’s whether we can develop the right skills fast enough to meet demand,” she said.

    For federal tech workers, outdated systems and slow procurement make it hard to attract and keep top tech talent. Agencies expect fast team deployment but operate with rigid, outdated processes, according to Justin Vianello, CEO of technology workforce development firm SkillStorm.

    Long security clearance delays add cost and time, often forcing companies to hire expensive, already-cleared talent. Meanwhile, modern technologists want to use current tools and make an impact — something hard to do with legacy systems and decade-long modernization efforts, he added.

    Many suggest that turning to AI will solve the tech talent shortage, but there is no evidence that AI will lead to a reduction in demand for tech talent, Vianello said. “On the contrary, companies see that the demand for tech talent has increased as they invest in preparing their workforce to properly use AI tools,” he said.

    A shortage of qualified talent is a bigger barrier to hiring than AI automation, he said, because organizations struggle to find candidates with the right certifications, skills, and clearances — especially in cloud, cybersecurity, and AI. Tech workers often lack skills in these areas because technology evolves faster than education and training can keep up, Vianello said. And while AI helps automate routine tasks, it can’t replace the strategic roles filled by skilled professionals.

    Seven out of 10 US organizations are struggling to find skilled workers to fill roles in an ever-evolving digital transformation landscape, and genAI has added to that headache, according to a ManpowerGroup survey released earlier this year.

    Job postings for AI skills surged 2,000% in 2024, but education and training in this area haven’t kept pace, according to Kelly Stratman, global ecosystem relationships enablement leader at Ernst & Young.

    “As formal education and training in AI skills still lag, it results in a shortage of AI talent that can effectively manage these technologies and demands,” she said in an earlier interview. “The AI talent shortage is most prominent among highly technical roles like data scientists/analysts, machine learning engineers, and software developers.”

    Economic uncertainty is creating a cautious hiring environment, but it’s more complex than tariffs alone. Experis data shows employers adopting a “wait and watch” stance as they monitor economic signals, with job openings down 11% year-over-year, according to Mitchell.

    “However, the bigger story is strategic workforce planning in an era of rapid technological change. Companies are being incredibly precise about where they allocate resources. Not because of economic pressure alone, but because the skills landscape is shifting so rapidly,” Mitchell said. “They’re prioritizing mission-critical roles while restructuring others around AI capabilities.”

    Top organizations see AI as a strategic shift, not just cost-cutting. Cutting talent now risks weakening core areas like cybersecurity, according to Mitchell.

    Skillstorm’s Vianello suggests that IT job hunters should begin to upgrade their skills with certifications that matter: AWS, Azure, CISSP, Security+, and AI/ML credentials open doors quickly, he said.

    “Veterans, in particular, have an edge; they bring leadership, discipline, and security clearances. Apprenticeships and fellowships offer a fast track into full-time roles by giving you experience that actually counts. And don’t overlook the intangibles: soft skills and project leadership are what elevate technologists into impact-makers,” Vianello said.

    Skills-based hiring has been on the rise for several years, as organizations seek to fill specific needs for big data analytics, programming, and AI prompt engineering. In fact, demand for genAI courses is surging, outpacing all other tech skills courses across fields from data science to cybersecurity, project management, and marketing.

    “AI isn’t replacing jobs — it’s fundamentally redefining how work gets done. The break point where technology truly displaces a position is when roughly 80% of tasks can be fully automated,” Mitchell said. “We’re nowhere near that threshold for most roles. Instead, we’re seeing AI augment skill sets and make professionals more capable, faster, and able to focus on higher-value work.”

    Leaders use AI as a strategic enabler — embedding it to enhance, not compete with, human developers, she said.

    Some industry forecasts predict a 30% productivity boost from AI tools, potentially adding trillions of dollars to global GDP.

    For example, AI tools are expected to perform the lion’s share of coding. Techniques where humans use AI-augmented coding tools, such as “vibe coding,” are set to revolutionize software development by creating source code, generating tests automatically, and freeing up developer time for innovation instead of debugging code. 

    With vibe coding, developers use natural language in a conversational way that prompts the AI model to offer contextual ideas and generate code based on the conversation.
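    The conversational loop described above can be sketched in a few lines of Python. This is an illustration only: `toy_model` is a hypothetical stand-in for a real code-generating LLM API, which the article does not name, and simply returns canned code so the refine-by-follow-up-prompt pattern is visible.

    ```python
    # Sketch of a vibe-coding session: the developer states intent in natural
    # language, the model returns code, and a follow-up prompt refines it.
    # `toy_model` is a hypothetical stand-in for a real code-generation API.

    def toy_model(conversation: list[str]) -> str:
        """Pretend LLM: returns code based on the accumulated conversation."""
        if any("with tests" in turn for turn in conversation):
            # A later turn asked for tests, so the "model" regenerates
            # the function together with a simple assertion.
            return ("def add(a, b):\n"
                    "    return a + b\n\n"
                    "assert add(2, 3) == 5\n")
        return "def add(a, b):\n    return a + b\n"

    conversation = ["Write a Python function that adds two numbers."]
    draft = toy_model(conversation)        # first draft from the initial prompt

    conversation.append("Good - now regenerate it with tests included.")
    refined = toy_model(conversation)      # refined draft after the follow-up

    print("assert" in draft, "assert" in refined)  # prints: False True
    ```

    The point of the sketch is the shape of the interaction, not the stub itself: context accumulates across turns, and each new prompt steers the next generation, which is what lets the developer converge on the code they want.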

    By 2028, 75% of professional developers will be using vibe coding and other genAI-powered coding tools, up from less than 10% in September 2023, according to Gartner Research. And within three years, 80% of enterprises will have integrated AI-augmented testing tools into their software engineering tool chain — a significant increase from approximately 15% early last year, Gartner said.

    A report from MIT Technology Review Insights found that 94% of business leaders now use genAI in software development, with 82% applying it in multiple stages — and 26% in four or more.

    Some industry experts place genAI’s use in creating code much higher. “What we are finding is that we’re three to six months from a world where AI is writing 90% of the code. And then in 12 months, we may be in a world where AI is writing essentially all of the code,” Anthropic CEO Dario Amodei said in a recent report and video interview.

    “The real transformation is in role evolution. Developers are becoming strategic technology orchestrators,” Mitchell from Experis said. “Data professionals are becoming business problem solvers. The demand isn’t disappearing; it’s becoming more sophisticated and more valuable.

    “In today’s economic climate, having the right tech talent with AI-enhanced capabilities isn’t a nice-to-have, it’s your competitive edge,” she said.
    WWW.COMPUTERWORLD.COM
    AI and economic pressures reshape tech jobs amid layoffs
  • Google I/O 2025: Android Takes A Back Seat To AI And XR

    Google CEO Sundar Pichai talking about Google Beam, formerly known as Project Starline, at Google I/O 2025. (Photo: Anshel Sag)
    Google used its annual I/O event this week to put the focus squarely on AI — with a strong dash of XR. While there’s no doubt that Google remains very committed to Android and the Android ecosystem, it was more than apparent that the company’s work on AI is only accelerating. Onstage, Google executives showed how its Gemini AI models have seen a more than 50x increase in monthly token usage over the past year, with the major inflection point clearly being the release of Gemini 2.5 in March 2025.

    I believe that Google’s efforts in AI have been supercharged by Gemini 2.5 and the agentic era of AI. The company also showed its continued commitment to getting Android XR off the ground with the second developer preview of Android XR, which it also announced at Google I/O.

    Google’s monthly tokens processed. (Image: Anshel Sag)

    Incorporating Gemini And AI Everywhere
    For Google, the best way to justify the long-term and continuous investment in Gemini is to make it accessible in as many ways as possible. That includes expanding into markets beyond the smartphone and browser. That’s why Gemini is already replacing Google Assistant in most areas. This is also a necessary move because Google Assistant’s functionality has regressed to the point of frustration as the company has shifted development resources to Gemini. This means that we’re getting Gemini via Google TV, Android Auto and WearOS. Let’s not forget that Android XR is the first operating system from Google that has been built from the ground up during the Gemini era. That translates to most XR experiences from Google being grounded in AI from the outset to make the most of agents and multimodal AI for improving the user experience.

    To accelerate the pace of adoption of on-device AI, Google has also announced improvements to LiteRT, its runtime for using AI models locally that has a heavy focus on maximizing on-device NPUs. Google also announced the AI Edge Portal to enable developers to test and benchmark their on-device models. These models will be crucial for enabling low-latency and secure experiences for users when connectivity might be challenged or when data simply cannot leave the device. While I believe that on-device AI performance is going to be important to developers going forward, it is also important to recognize that hybrid AI — mixing on-device and cloud AI processing — is likely here to stay for a very long time.
    Android XR, Smart Glasses And The Xreal Partnership
    Because Google introduced most of its Android updates in a separate “Android Show” a week before Google I/O, the Android updates during I/O mostly applied to Android XR. The new Material 3 Expressive design system will find its way across Google’s OSes and looks set to deliver snappier, more responsive experiences at equal or better performance. I wrote extensively about Google’s Android XR launch in December 2024, explaining how it would likely serve as Google’s tip of the spear for enabling new and unique AI experiences. At Google I/O, the company showed the sum of these efforts in terms of both creating partnerships and enabling a spectrum of XR devices from partners.

    Google’s Shahram Izadi, vice president and general manager of Android XR, talking about Project Moohan onstage at Google I/O 2025. (Photo: Anshel Sag)

    In this vein, Google reiterated its commitment to Samsung and Project Moohan, which Google now says will ship this year. The company also talked about other partnerships in the ecosystem that will enable new form factors for the AI-enabled wearable XR operating system. Specifically, it will be partnering with Warby Parker and Gentle Monster to develop smart glasses. In a press release, Google said it has allotted million for its partnership with Warby Parker, with million already committed to product development and commercialization and the remaining million dependent on reaching certain milestones.
    I believe that this partnership is akin to the one that Meta established with EssilorLuxottica, leaving the design, fit and retail presence to the eyeglasses experts. Warby Parker is such a good fit because the company is already very forward-thinking on technology, and I believe that this partnership can enable Google to make some beautiful smart glasses to compete with Meta Ray Bans. While I absolutely adore my Meta Ray Bans, I do think they would be considerably more useful if they were running Gemini 2.5, even the flash version of the model. Gentle Monster is also a great fit for Google because it helps capture the Asian market better, and because its designs are so large that they give Google plenty of room to work with.
    Many people have written about their impressions of Project Moohan and the smart glasses from Google I/O, but the reality is that these were not new — or final — products. So, I hope that these XR devices are as exciting to people as they were to me back in December.

    Google announces Project Aura onstage during the Google I/O developer keynote. (Photo: Anshel Sag)
    For me the more important XR news from the event was the announcement of the Project Aura headset in partnership with Xreal. Project Aura, while still limited in details, does seem to indicate that there’s a middle ground for Google between the more immersive Moohan headset and lightweight smart glasses. It’s evident that Google wants to capture this sweet spot with Xreal’s help. Also, if you know anything about Xreal’s history, it makes sense that it would be the company Google works with to bring 3-D AR to market. Project Aura feels like Google’s way to compete with Meta’s Orion in terms of field of view, 3-D AR capabilities and standalone compute. While many people think of Orion as a pair of standalone glasses, in fact they depend on an external compute puck; with Qualcomm’s help, Google will also use a puck via a wire, though I would love to see that disappear in subsequent versions.
    The Xreal One and One Pro products already feel like moves in the direction Google is leaning, but with Project Aura it seems that Google wants more diversity within Android XR — and it wants to build a product with the company that has already shipped more AR headsets than anyone else. The wider 70-degree field of view should do wonders for the user experience, and while the price of Project Aura is still unclear, I would expect it to be much more expensive than most of Xreal’s current offerings. Google and Xreal say they will disclose more details about Project Aura at the AWE 2025 show in June, which I will be attending — so look for more details from me when that happens.
    Project Starline Becomes Google Beam
    Google also updated its XR conferencing platform, formerly called Project Starline, which it has been building with HP. Google has now changed the project into a product name with the introduction of Google Beam. While not that much has changed since I last tried out Project Starline at HP’s headquarters last September, the technology is still quite impressive — and still quite expensive. One of the new capabilities for Google Beam, also being made available as part of Google Meet, is near-real-time translated conversations that capture a person’s tone, expressions and accents while translating their speech. I got to experience this at Google I/O, and it was extremely convincing, not to mention a great way to enhance the already quite impressive Beam experience. It really did sound like the translated voice was the person’s own voice speaking English; this was significant on its own, but achieving it with spatial video at fairly low latency was even better. I hope that Google will one day be able to do the translations in real time, synced with the user’s speech.
    Google says that it and HP are still coming to market with a Google Beam product later this year and will be showing it off at the InfoComm conference in June. Google has already listed some lead customers for Google Beam, including Deloitte, Salesforce, Citadel, NEC, Hackensack Meridian Health, Duolingo and Recruit. This is a longer list than I expected, but the technology is also more impressive than I had initially expected, so I am happy to see it finally come to market. I do believe that with time we’ll probably see Google Beam expand beyond the 65-inch screen, but for now that’s the best way to attain full immersion. I also expect that sooner or later we could see Beam working with Android XR devices as well.
    Analyst Takeaways From Google I/O
    I believe that Google is one of the few companies that genuinely understands the intersection of AI and XR — and that has the assets and capabilities to leverage that understanding. Other companies may have the knowledge but lack the assets, capabilities or execution. I also believe that Google finally understands the “why” behind XR and how much AI helps answer that question. Google’s previous efforts in XR were for the sake of pursuing XR and didn’t really align well with the rest of the company’s efforts. Especially given the growth of AI overall and the capabilities of Gemini in particular, AR glasses are now one of the best ways to experience AI. Nobody wants to hold their phone up to something for a multimodal AI to see it, and no one wants to type long AI prompts into their phone. They want to interact with AI in the context of more natural visual and auditory experiences. Although smartphones can deliver a fairly good experience for this, they pale in comparison to having the microphones and cameras closer to your eyes and mouth. The more you use AI this way, the less you find yourself needing to pull out your phone. I certainly don’t think smartphones are going to disappear, but I do think they are going to decline in terms of where most of an individual’s AI computing and connectivity happen.
    All of this is why I’m much more confident in Google’s approach to XR this time around, even though the company has burned so many bridges with its previous endeavors in the space. More than that, I believe that Google’s previous absence in the XR market has impeded the market’s growth. Now, however, the company is clearly investing in partnerships and ecosystem enablement. It will be important for the company to continue to execute on this and enable its partners to be successful. A big part of that is building a strong XR ecosystem that can compete with the likes of Apple and Meta. It won’t happen overnight, but the success of that ecosystem will be what makes or breaks Google’s approach to XR beyond its embrace of Gemini.
    Moor Insights & Strategy provides or has provided paid services to technology companies, like all tech industry research and analyst firms. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking and video and speaking sponsorships. Of the companies mentioned in this article, Moor Insights & Strategy currently has a paid business relationship with Google, HP, Meta, Qualcomm, Salesforce and Samsung.
I hope that Google will one day be able to do the translations in real time, synced with the user’s speech. Google says that it and HP are still coming to market with a Google Beam product later this year and will be showing it off at the InfoComm conference in June. Google has already listed some lead customers for Google Beam, including Deloitte, Salesforce, Citadel, NEC, Hackensack Meridian Health, Duolingo and Recruit. This is a longer list than I expected, but the technology is also more impressive than I had initially expected, so I am happy to see it finally come to market. I do believe that with time we’ll probably see Google Beam expand beyond the 65-inch screen, but for now that’s the best way to attain full immersion. I also expect that sooner or later we could see Beam working with Android XR devices as well. Analyst Takeaways From Google I/O I believe that Google is one of the few companies that genuinely understands the intersection of AI and XR — and that has the assets and capabilities to leverage that understanding. Other companies may have the knowledge but lack the assets, capabilities or execution. I also believe that Google finally understands the “why” behind XR and how much AI helps answer that question. Google’s previous efforts in XR were for the sake of pursuing XR and didn’t really align well with the rest of the company’s efforts. Especially given the growth of AI overall and the capabilities of Gemini in particular, AR glasses are now one of the best ways to experience AI. Nobody wants to hold their phone up to something for a multimodel AI to see it, and no one wants to type long AI prompts into their phone. They want to interact with AI in the context of more natural visual and auditory experiences. Although smartphones can deliver a fairly good experience for this, they pale in comparison to having the microphones and cameras closer to your eyes and mouth. 
The more you use AI this way, the less you find yourself needing to pull out your phone. I certainly don’t think smartphones are going to disappear, but I do think they are going to decline in terms of where most of an individual’s AI computing and connectivity happen. All of this is why I’m much more confident in Google’s approach to XR this time around, even though the company has burned so many bridges with its previous endeavors in the space. More than that, I believe that Google’s previous absence in the XR market has impeded the market’s growth. Now, however, the company is clearly investing in partnerships and ecosystem enablement. It will be important for the company to continue to execute on this and enable its partners to be successful. A big part of that is building a strong XR ecosystem that can compete with the likes of Apple and Meta. It won’t happen overnight, but the success of that ecosystem will be what makes or breaks Google’s approach to XR beyond its embrace of Gemini. Moor Insights & Strategy provides or has provided paid services to technology companies, like all tech industry research and analyst firms. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking and video and speaking sponsorships. Of the companies mentioned in this article, Moor Insights & Strategy currently hasa paid business relationship with Google, HP, Meta, Qualcomm, Salesforce and Samsung.Editorial StandardsReprints & Permissions #google #android #takes #back #seat
    WWW.FORBES.COM
    Google I/O 2025: Android Takes A Back Seat To AI And XR
[Photo: Google CEO Sundar Pichai talking about Google Beam, formerly known as Project Starline, at Google I/O 2025. Credit: Anshel Sag]

Google used its annual I/O event this week to put the focus squarely on AI — with a strong dash of XR. While there’s no doubt that Google remains very committed to Android and the Android ecosystem, it was more than apparent that the company’s work on AI is only accelerating. Onstage, Google executives showed how its Gemini AI models have seen a more than 50x increase in monthly token usage over the past year, with the major inflection point clearly being the release of Gemini 2.5 in March 2025. I believe that Google’s efforts in AI have been supercharged by Gemini 2.5 and the agentic era of AI. The company also showed its continued commitment to getting Android XR off the ground with the second developer preview of Android XR, which it also announced at Google I/O. (Note: Google is an advisory client of my firm, Moor Insights & Strategy.)

[Chart: Google’s monthly tokens processed. Credit: Anshel Sag]

Incorporating Gemini And AI Everywhere

For Google, the best way to justify the long-term and continuous investment in Gemini is to make it accessible in as many ways as possible. That includes expanding into markets beyond the smartphone and browser. That’s why Gemini is already replacing Google Assistant in most areas. This is also a necessary move because Google Assistant’s functionality has regressed to the point of frustration as the company has shifted development resources to Gemini. This means that we’re getting Gemini via Google TV, Android Auto and Wear OS. Let’s not forget that Android XR is the first operating system from Google that has been built from the ground up during the Gemini era. That translates to most XR experiences from Google being grounded in AI from the outset to make the most of agents and multimodal AI for improving the user experience.
To accelerate the pace of adoption of on-device AI, Google has also announced improvements to LiteRT, its runtime for using AI models locally that has a heavy focus on maximizing on-device NPUs. Google also announced the AI Edge Portal to enable developers to test and benchmark their on-device models. These models will be crucial for enabling low-latency and secure experiences for users when connectivity might be challenged or when data simply cannot leave the device. While I believe that on-device AI performance is going to be important to developers going forward, it is also important to recognize that hybrid AI — mixing on-device and cloud AI processing — is likely here to stay for a very long time.

Android XR, Smart Glasses And The Xreal Partnership

Because Google introduced most of its Android updates in a separate “Android Show” a week before Google I/O, the Android updates during I/O mostly applied to Android XR. The new Material 3 Expressive design system will find its way across Google’s OSes and looks set to deliver snappier, more responsive experiences at equal or better performance. I wrote extensively about Google’s Android XR launch in December 2024, explaining how it would likely serve as Google’s tip of the spear for enabling new and unique AI experiences. At Google I/O, the company showed the sum of these efforts in terms of both creating partnerships and enabling a spectrum of XR devices from partners.

[Photo: Google’s Shahram Izadi, vice president and general manager of Android XR, talking about Project Moohan onstage at Google I/O 2025. Credit: Anshel Sag]

In this vein, Google reiterated its commitment to Samsung and Project Moohan, which Google now says will ship this year. The company also talked about other partnerships in the ecosystem that will enable new form factors for the AI-enabled wearable XR operating system. Specifically, it will be partnering with Warby Parker and Gentle Monster to develop smart glasses.
In a press release, Google said it has allotted $150 million for its partnership with Warby Parker, with $75 million already committed to product development and commercialization and the remaining $75 million dependent on reaching certain milestones. I believe that this partnership is akin to the one that Meta established with EssilorLuxottica, leaving the design, fit and retail presence to the eyeglasses experts. Warby Parker is such a good fit because the company is already very forward-thinking on technology, and I believe that this partnership can enable Google to make some beautiful smart glasses to compete with Meta Ray-Bans. While I absolutely adore my Meta Ray-Bans, I do think they would be considerably more useful if they were running Gemini 2.5, even the Flash version of the model. Gentle Monster is also a great fit for Google because it helps capture the Asian market better, and because its designs are so large that they give Google plenty of room to work with. Many people have written about their impressions of Project Moohan and the smart glasses from Google I/O, but the reality is that these were not new — or final — products. So, I hope that these XR devices are as exciting to people as they were to me back in December.

[Photo: Google announces Project Aura onstage during the Google I/O developer keynote. Credit: Anshel Sag]

For me, the more important XR news from the event was the announcement of the Project Aura headset in partnership with Xreal. Project Aura, while still limited in details, does seem to indicate that there’s a middle ground for Google between the more immersive Moohan headset and lightweight smart glasses. It’s evident that Google wants to capture this sweet spot with Xreal’s help. Also, if you know anything about Xreal’s history, it makes sense that it would be the company Google works with to bring 3-D AR to market.
Project Aura feels like Google’s way to compete with Meta’s Orion in terms of field of view, 3-D AR capabilities and standalone compute. While many people think of Orion as a pair of standalone glasses, in fact they depend on an external compute puck; with Qualcomm’s help, Google will also use a puck via a wire, though I would love to see that disappear in subsequent versions. The Xreal One and One Pro products already feel like moves in the direction Google is leaning, but with Project Aura it seems that Google wants more diversity within Android XR — and it wants to build a product with the company that has already shipped more AR headsets than anyone else. The wider 70-degree field of view should do wonders for the user experience, and while the price of Project Aura is still unclear, I would expect it to be much more expensive than most of Xreal’s current offerings. Google and Xreal say they will disclose more details about Project Aura at the AWE 2025 show in June, which I will be attending — so look for more details from me when that happens.

Project Starline Becomes Google Beam

Google also updated its XR conferencing platform, formerly called Project Starline, which it has been building with HP. Google has now turned the project name into a product name with the introduction of Google Beam. While not that much has changed since I last tried out Project Starline at HP’s headquarters last September, the technology is still quite impressive — and still quite expensive. One of the new capabilities for Google Beam, also being made available as part of Google Meet, is near-real-time translated conversations that capture a person’s tone, expressions and accents while translating their speech. I got to experience this at Google I/O, and it was extremely convincing, not to mention a great way to enhance the already quite impressive Beam experience.
It really did sound like the translated voice was the person’s own voice speaking English; this was significant on its own, but achieving it with spatial video at fairly low latency was even better. I hope that Google will one day be able to do the translations in real time, synced with the user’s speech. Google says that it and HP are still coming to market with a Google Beam product later this year and will be showing it off at the InfoComm conference in June. Google has already listed some lead customers for Google Beam, including Deloitte, Salesforce, Citadel, NEC, Hackensack Meridian Health, Duolingo and Recruit. This is a longer list than I expected, but the technology is also more impressive than I had initially expected, so I am happy to see it finally come to market. I do believe that with time we’ll probably see Google Beam expand beyond the 65-inch screen, but for now that’s the best way to attain full immersion. I also expect that sooner or later we could see Beam working with Android XR devices as well.

Analyst Takeaways From Google I/O

I believe that Google is one of the few companies that genuinely understands the intersection of AI and XR — and that has the assets and capabilities to leverage that understanding. Other companies may have the knowledge but lack the assets, capabilities or execution. I also believe that Google finally understands the “why” behind XR and how much AI helps answer that question. Google’s previous efforts in XR were for the sake of pursuing XR and didn’t really align well with the rest of the company’s efforts. Especially given the growth of AI overall and the capabilities of Gemini in particular, AR glasses are now one of the best ways to experience AI. Nobody wants to hold their phone up to something for a multimodal AI to see it, and no one wants to type long AI prompts into their phone. They want to interact with AI in the context of more natural visual and auditory experiences.
Although smartphones can deliver a fairly good experience for this, they pale in comparison to having the microphones and cameras closer to your eyes and mouth. The more you use AI this way, the less you find yourself needing to pull out your phone. I certainly don’t think smartphones are going to disappear, but I do think they are going to decline in terms of where most of an individual’s AI computing and connectivity happen. All of this is why I’m much more confident in Google’s approach to XR this time around, even though the company has burned so many bridges with its previous endeavors in the space (specifically Daydream and Glass). More than that, I believe that Google’s previous absence in the XR market has impeded the market’s growth. Now, however, the company is clearly investing in partnerships and ecosystem enablement. It will be important for the company to continue to execute on this and enable its partners to be successful. A big part of that is building a strong XR ecosystem that can compete with the likes of Apple and Meta. It won’t happen overnight, but the success of that ecosystem will be what makes or breaks Google’s approach to XR beyond its embrace of Gemini.

Moor Insights & Strategy provides or has provided paid services to technology companies, like all tech industry research and analyst firms. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking and video and speaking sponsorships. Of the companies mentioned in this article, Moor Insights & Strategy currently has (or has had) a paid business relationship with Google, HP, Meta, Qualcomm, Salesforce and Samsung.
  • Senior Data Scientist – Data Science Analytics and Enablement (DSAE) at Sony Playstation

    Senior Data Scientist – Data Science Analytics and Enablement (DSAE)
    Sony PlayStation – London, United Kingdom

    Why PlayStation?

    PlayStation isn't just the Best Place to Play — it's also the Best Place to Work. Today, we're recognized as a global leader in entertainment producing the PlayStation family of products and services including PlayStation®5, PlayStation®4, PlayStation®VR, PlayStation®Plus, acclaimed PlayStation software titles from PlayStation Studios, and more.

    PlayStation also strives to create an inclusive environment that empowers employees and embraces diversity. We welcome and encourage everyone who has a passion and curiosity for innovation, technology, and play to explore our open positions and join our growing global team.

    The PlayStation brand falls under Sony Interactive Entertainment, a wholly owned subsidiary of Sony Group Corporation.

    Our Data Science Analytics and Enablement (DSAE) team inspires PlayStation to make impactful, customer-centric decisions through seamless integration of data. There are currently over 100 people in the global DSAE team, including data science, data governance, and analytics professionals. We work closely with engineering and product management teams to deliver data products, insight, predictive analytics, and data visualisation. DSAE is looking to recruit dedicated, highly driven individuals who have excelled in previous roles and are looking for a new challenge in a dynamic and exciting environment.

    What You'll Be Doing:

    As a key leader in our global experimentation efforts, you will raise the bar on how we test, measure, and learn across PlayStation's most impactful products and initiatives. This role is based in London with hybrid working flexibility.

    You will:
    - Define and standardise experimentation strategy, including best practices in test design, allocation, and statistical analysis
    - Collaborate with commercial, engineering, analytics, and product teams to ensure flawless execution and clean data capture
    - Apply causal inference techniques when randomisation isn't feasible
    - Own the interpretation of experimental results, delivering both topline summaries and deep performance insights
    - Provide mid-test updates that build stakeholder confidence and advise adjustments during live tests
    - Communicate insights and recommendations with clarity and influence across working groups and senior leadership forums
    - Guide and mentor other data scientists, ensuring consistency, quality, and alignment across experimentation work
    - Represent experimentation at the strategic level, advocating for rigorous methods that drive long-term learning and impact
    - Create reusable documentation, tooling, and training materials to elevate experimentation maturity across the organisation

    What We're Looking For:
    - Significant experience in data science and experimentation, ideally within consumer tech or digital commerce
    - Strong foundation in statistical testing, power analysis, and causal inference methodologies
    - Expertise in SQL and Python (or R) for data querying, preparation, and sophisticated analysis
    - Exceptional communication skills, with a proven track record of presenting findings to non-technical audiences, advocating for experimentation results, and influencing business and product leaders
    - Experience working on or advising experimentation platforms and measurement frameworks
    - Commercial awareness and confidence in shaping decisions through data-driven evidence
    - Demonstrated experience mentoring junior team members and upholding high analytical standards
    - Collaborative, proactive attitude with a strong ability to align and influence cross-functional partners
    - Familiarity with personalisation systems, recommender models, or A/B testing in an e-commerce or customer lifecycle context
    - Experience with large-scale experiments, particularly in high-traffic environments
    - Strong problem-solving, critical thinking, and adaptability skills
    - Commitment to continuous improvement and staying current with the latest trends and standard methodologies in experimentation and measurement

    Benefits:
    - Discretionary bonus opportunity
    - Hybrid working (within Flexmodes)
    - Private medical insurance
    - Dental scheme
    - 25 days holiday per year
    - On-site gym
    - Subsidised café
    - Free soft drinks
    - On-site bar
    - Access to cycle garage and showers

    Equal Opportunity Statement:

    Sony is an Equal Opportunity Employer. All persons will receive consideration for employment without regard to gender (including gender identity, gender expression, and gender reassignment), race (including colour, nationality, ethnic or national origin), religion or belief, marital or civil partnership status, disability, age, sexual orientation, pregnancy, maternity or parental status, trade union membership, or membership in any other legally protected category. We strive to create an inclusive environment, empower employees, and embrace diversity. We encourage everyone to respond. PlayStation is a Fair Chance employer and qualified applicants with arrest and conviction records will be considered for employment.
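The posting's emphasis on statistical testing and power analysis can be made concrete with a small sketch (illustrative only, not part of the role description): estimating the sample size needed per arm of a two-proportion A/B test, using only the Python standard library's normal approximation. The function name and the example conversion rates are hypothetical.

```python
from math import sqrt, ceil
from statistics import NormalDist


def sample_size_per_arm(p1: float, p2: float,
                        alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate users needed per arm to detect a shift from
    conversion rate p1 to p2 with a two-sided two-proportion z-test."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # critical value for the significance level
    z_beta = z(power)            # critical value for the desired power
    p_bar = (p1 + p2) / 2        # pooled proportion under the null
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)


# Detecting a lift from a 10% to an 11% conversion rate
# requires on the order of ~15,000 users in each arm:
n = sample_size_per_arm(0.10, 0.11)
```

Note how sensitive the requirement is to effect size: a 10% → 15% lift needs only a few hundred users per arm, which is why high-traffic environments make small-effect experiments feasible at all.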
  • Design in the age of vibes

    What the new wave of AI design and dev tools — Bolt, V0, Lovable, and Figma Make — mean for the future of software design.

    Prompt by the author, image generated by Sora.

    This article builds on reflections I shared last July in The expanded scope and blurring boundaries of AI-powered design, outlining what has changed in a short time and what it means for those designing software and leading design teams.

    Like many others, I've been exploring tools like Bolt, Lovable, V0, and most recently Figma Make, looking at how they are changing the way we build software today, and what that means for the future. For those who may not know, these tools are part of a new wave of AI-powered design and development platforms that aim to speed up how we go from prompt to prototype: automating front-end code, generating UI from prompts, and bridging the gap between design and engineering. Bolt is now the second fastest-growing product in history, just behind ChatGPT.

    While the AI hype hasn't slowed since ChatGPT's launch, it's quickly becoming apparent that these tools represent a step change, one that is rapidly reshaping how we work and how software gets built.

    An example of the Bolt.new interface

    This shift didn't start with AI

    Even before the recent explosion of AI tooling, design teams had been evolving their approach and expanding their scope of impact. Products like Figma enabled more fluid communication and cross-disciplinary collaboration, while design systems and front-end frameworks like Material, Tailwind, Radix, and other libraries helped codify and systematise best practices for visual design, interaction, and accessibility. This enabled designers to spend more time thinking about broader systems and increasing iteration cycles — and less time debating padding.
    While such tools and frameworks helped elevate the baseline user experience for many products, in enterprise SaaS in particular, they have drawn their share of criticism for the sea of sameness they generated. AI tools are now accelerating and amplifying some of these consequences, both positive and negative. These products represent not just a tooling upgrade, but a shift in what design is, who does it, and how teams are built.

    Design has evolved from the design of objects, both physical and immaterial, to the design of systems, to the design of complex adaptive systems. The evolution is shifting the role of designers; they are no longer the central planner but rather participants within the systems they exist in. This is a fundamental shift — one that requires a new set of values. — Joi Ito, MIT Media Lab (Jan 2016)

    What AI tools are making possible

    This new wave of AI tools can generate high-quality UIs from a prompt, screenshot, or Figma frame. Work that once required a multidisciplinary team and weeks of effort — from concept to coded prototype — can now happen in a matter of hours. Best practices are baked in. Layouts are responsive by default. Interaction logic is defined in a sentence. Even connecting to real data is no longer a blocker; it's part of the flow.

    Lovable, one of the many new AI design and full-stack development tools launched recently

    These tools differ from popular IDE-based assistants like Cursor, Copilot, and Windsurf in both purpose and level of abstraction. UI-based tools like Bolt automate many of the more complex and often intimidating parts of the developer workflow: spinning up environments, scaffolding projects, managing dependencies, and deploying apps. That makes sense, given that many of them were built by hosting platforms such as Vercel and Replit.

    With this new speed and ease of use, designers don't need to wait on engineers to see how something feels in practice.
    They can test ideas at higher fidelity faster, explore more variations, and evolve the experience in tight feedback loops.

    Figma Make: Start with a design and prompt your way to a functional prototype, fast — all in Figma.

    This shift has also given rise to what some are calling 'vibe coding', a term coined by Andrej Karpathy that captures this expressive, real-time way of building software. Instead of following a strict spec or writing code line by line, you start with a vibe or loose concept and use these tools to sculpt the idea into something functional. You prompt, tweak, adjust components, and refine until it feels right. It's intuitive, fast, and fluid.

    There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It's possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. — Andrej Karpathy

    In this new paradigm, the output isn't just faster; it's driven by rapid judgement and intuition, not necessarily depth of technical experience. In addition, the barrier to entry for non-designers to explore ideas has lowered too. Now that anyone can create compelling, usable apps with front-end and back-end logic, what does that mean for design?

    I would love to say that this means more time spent on outcomes and higher-impact work for designers, but it's more likely to disrupt the foundations of what it means to be a designer. The boundaries between the classic product triad of design, engineering, and product management were already blurring, and this looks set to accelerate even more.

    We are in the middle of a significant industry shift, heading into a period of rapid, unpredictable change. While testing some of these new AI tools, I have had several 'oh shit' moments where I get a sense of how things might evolve…
    this is what copywriters and others in similar writing roles must have felt when ChatGPT first came out.

    The author, while vibe coding

    What this might mean for design

    As UI generation becomes commoditized, the value of design shifts upstream, and with it the scope of what is expected from design. Future design teams are likely to be smaller and more embedded in product strategy. As companies grow, design functions won't necessarily need bigger design teams; they will need higher-leverage ones.

    Meanwhile, designers and engineers will work more closely together — not through handoff, but through shared tools and live collaboration. In enterprise environments in particular, much of the engineering work is not so much about zero-to-one implementation as about working within and around established technical constraints. As the front-end becomes commoditized, engineers will shift their focus further upstream, establishing strong technical foundations and systems for teams to build from.

    From years of experience to mindset

    Some worry this shift will reduce opportunities for junior designers. It's true there may be fewer entry-level roles focused on production work. But AI-native designers entering the field now may have an edge over seasoned professionals who are tied to traditional methods.

    In an AI-driven world, knowing the "right" design process won't matter as much. Technical skills, domain expertise, and strong craft will still help, but what really counts is getting results — regardless of how you get there.

    The greatest danger in times of turbulence is not the turbulence, it is to act with yesterday's logic. — Peter Drucker

    Mindset will matter more than experience. Those who adapt fast and use AI to learn new domains quickly will stand out. We are already starting to see this unfold. Tobi Lutke, CEO of Shopify, recently stated that AI usage is now a baseline expectation at Shopify.
    He went even further, stating that "Before asking for more headcount and resources, teams must demonstrate why they cannot get what they want done using AI."

    This demonstrates that adaptability and AI fluency are becoming core expectations. In this new landscape, titles and years of experience will matter less. Designers who can leverage AI as a force multiplier will outpace and outshine those relying on traditional workflows or rigid processes.

    Speed isn't everything

    Note that I didn't use the word taste, which many now describe as critical in the AI era. I cringe a little when I hear it — taste feels vague and subjective, often tied to the 'I'll know it when I see it' mindset the design industry has been trying to shake off for years. While I get the intent, I prefer to describe this as judgment: the ability to make calls informed by experience and grounded in clear intent, shared principles, and a solid grasp of user and technical context — not personal preference or aesthetic instinct. When you can create infinite variations almost instantly, judgment is what helps you identify what's truly distinct, useful, and worth refining.

    What does this mean for designing within enterprise environments?

    I lead the design team at DataRobot, a platform that helps AI builders create and manage agentic, generative, and predictive workflows within large enterprises. We've been exploring how AI tools can augment design and development across the org.

    Screens from the DataRobot AI platform

    While these tools are great for initial ideation, that is often only a small part of the work in enterprise environments. Here, the reality is more complex: teams work within deeply established workflows, technical frameworks, and products with large surface areas.

    This differs from consumer design, where teams often have more freedom to invent patterns and push visual boundaries. Enterprise design is about reliability, scalability, and trust.
    It means navigating legacy systems, aligning with highly technical stakeholders, and ensuring consistency across a broad suite of tools.

    For us, one of the clearest use cases for AI tooling has been accelerating early-stage concepting and customer validation. While most of our focus is on providing infrastructure to build and manage AI models, we've recently expanded into custom AI apps, tailored for specialized workflows across a broad range of industries and verticals. The number of UI variants we would need to support is simply too vast for traditional design processes to cover.

    Some examples of DataRobot applications — both production and concept.

    In the past, this would have meant manually designing multiple iterations and getting feedback based on static mocks. Now, AI tools let us spin up tailored interfaces, with dynamic UI elements adapted for different industries and customer contexts, while adhering to our design system and following best practices for accessibility. Customers get to try something close to the real output, and we get better signal earlier in the cycle, reducing wasted effort and resources.

    In this context, the strict frameworks used by tools like V0 are an advantage. They provide guardrails, meaning you need to go out of your way to create a bad experience. It's early days, but this is helping non-designers in particular to get early-stage validation with customers and prospects.

    This means the role of the design team is to provide the framework for others to execute: creating prompt guides that codify our design system and visual language so that outputs remain on brand, then stepping in deeper once direction is validated. In effect, we're shifting from execution to enablement. Design is being democratized. That's a good thing, as long as we set the frame.

    Beyond the baseline

    AI has raised the baseline. That helps with speed and early validation, but it won't help you break new ground.
    Generative tools are by their nature derivative. When everything trends toward average, we need new ways to raise the ceiling. Leadership in this context means knowing when to push beyond the baseline, shaping a distinct point of view grounded in reality and underpinned by strong principles. That point of view should be shaped through deep cross-functional collaboration, with a clear understanding of strategy, user needs, and the broader market.

    In a world where AI makes it easier than ever to build software, design is becoming more essential and more powerful. It's craft, quality, and point of view that make a product stand out and be loved. — Dylan Field

    What to focus on now

    For individual contributors or those just starting out, it can feel daunting and difficult to know where to start:

    Start experimenting: Don't wait for the perfect course, permission, or excuse. Just jump in and run small tests. See how you can replicate previous briefs in order to get a feel for where these tools excel and where they break.

    Look for leverage: Don't just use these tools to move faster — use them to think differently. How might you explore more directions, test ideas earlier, or involve others upstream?

    Contribute to the system: Consider how you might codify what works to improve patterns, prompts, or workflows. This is increasingly where high-impact work will live.

    If you're leading a design team:

    Design the system, not just the UI: Build the tools, patterns, and prompts that others can use to move fast.

    Codify best practices: Think about how you might translate tribal knowledge into actionable context and principles, for both internal teams and AI systems.

    Exercise judgement: Train your team to recognize good from average in the context of your product. Establish a shared language for what good means in your context, and how you might elevate your baseline.

    Final thoughts

    The UI layer is becoming automated. That doesn't make design less important — it makes it more critical.
    Now everyone can ship something decent, but only a great team can ship something exceptional. AI might handle the pixels, but it's just a tool. Design's purpose is clearer than ever: understanding users, shaping systems, and delivering better outcomes. AI tools should amplify our capabilities, not make us complacent. This means that while we integrate them into our workflows, we must continue to sharpen our core skills. What Paul Graham said about writing applies equally to design.

    When you lose the ability to write, you also lose some of your ability to think. — Paul Graham

    This article was written with the assistance of ChatGPT 4o.

    John Moriarty leads the design team at DataRobot, an enterprise AI platform that helps AI practitioners build, govern, and operate predictive and generative AI models. Before this, he worked at Accenture, HMH, and Design Partners.

    Design in the age of vibes was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
    #design #age #vibes
    Design in the age of vibes
    What the new wave of new AI design and dev tools, — Bolt, V0, Lovable, and Figma Make — mean for the future of software design.Prompt by the author, image generated by Sora.This article builds on reflections I shared last July in The expanded scope and blurring boundaries of AI-powered design, outlining what’s changed in a short time, and what it means for those designing software and leading design teams.Like many others, I’ve been exploring tools like Bolt, Lovable, V0, and most recently Figma Make, looking at how they are changing the way we build software today, and what that means for the future. For those who may not know, these tools are part of a new wave of AI-powered design and development platforms that aim to speed up how we go from prompt to prototype, automating front-end code, generating UI from prompts, and bridging the gap between design and engineering. Bolt is now the second fastest-growing product in history, just behind ChatGPT.While the AI hype hasn’t slowed since ChatGPT’s launch, it’s quickly becoming apparent that these tools represent a step change, one that is rapidly reshaping how we work, and how software gets built.A example of the Bolt.new UI interfaceThis shift didn’t start with AIEven before the recent explosion of AI tooling, design teams have been evolving their approach and expanding their scope of impact. Products like Figma enabled more fluid communication and cross-disciplinary collaboration, while design systems and front-end frameworks like Material, Tailwind, Radix and other libraries helped codify and systematise best practices for visual design, interaction an accessibility.This enabled designers to spend more time thinking about the broader systems, increasing iteration cycles — and less time debating padding. 
While such tools and frameworks helped to elevate the baseline user experience for many products, in enterprise SaaS in particular, they have had their share of criticism from the resulting sea of sameness that they generated. AI tools are now accelerating and amplifying some of the consequences, both positive and negative. These products represent not just a tooling upgrade, but a shift in what design is, who does it, and how teams are built.Design has evolved from the design of objects, both physical and immaterial, to the design of systems, to the design of complex adaptive systems. The evolution is shifting the role of designers; they are no longer the central planner but rather participants within the systems they exist in. This is a fundamental shift — one that requires a new set of values— Joi Ito, MIT Media LabWhat AI tools are making possibleThis new wave of AI tools can generate high-quality UIs from a prompt, screenshot, or Figma frame. Work that once required a multidisciplinary team and weeks of effort — from concept to coded prototype — can now happen in a matter of hours. Best practices are baked in. Layouts are responsive by default. Interaction logic is defined in a sentence. Even connecting to real data is no longer a blocker, it’s part of the flow.Lovable, one of the many new AI design and full-stack development tools launched recentlyThese tools differ from popular IDE-based assistants like Cursor, Copilot and Windsurf in both purpose and level of abstraction. UI-based tools like Bolt automate many of the more complex and often intimidating parts of the developer workflows; spinning up environments, scaffolding projects, managing dependencies, and deploying apps. That makes sense, given that many of them were built by hosting platforms such as Vercel and Replit.With this new speed and ease of use, designers don’t need to wait on engineers to see how something feels in practice. 
They can test ideas with higher fidelity faster, explore more variations, and evolve the experience in tight feedback loops.Figma Make: Start with a design and prompt your way to a functional prototype, fast — all in Figma.This shift has also given rise to what some are calling ‘Vibe coding’, a term coined by Andrej Karpathy, that captures this expressive, real-time way of building software. Instead of following a strict spec or writing code line by line, you start with a vibe or loose concept, and use these tools to sculpt the idea into something functional. You prompt, tweak, adjust components, and refine until it feels right. It’s intuitive, fast, and fluid.There’s a new kind of coding I call “vibe coding”, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It’s possible because the LLMsare getting too good.— Andrej KarpathyIn this new paradigm, the output isn’t just faster, it’s driven by rapid judgement and intuition, not necessarily depth of technical experience. In addition, the barrier to entry for non-designers to explore ideas has lowered too. Now that anyone can create compelling, usable apps with front-end and back-end logic, what does that mean for design?I would love to say that this means more time spent on outcomes and higher-impact work for designers, but it’s more likely to disrupt the foundations of what it means to be a designer. The boundaries between the classic product triad of design, engineering and product management were already blurring but this looks like it will accelerate even more.We are in the middle of a significant industry shift, we’re heading into a period of rapid, unpredictable change.. While testing some of these new AI tools, I have had several ‘oh shit’ moments where I get a sense of how things might evolve…. 
this is what copywriters and others in similar writing roles must have felt when ChatGPT first came out.The author, while vibe codingWhat this might mean for designAs UI generation becomes commoditized, the value of design shifts upstream. With that, the scope of what is expected from design will shift. Future designs team are likely to be smaller, and more embedded in product strategy. As companies grow, design functions won’t necessarily need bigger design teams, they will need higher-leverage ones.Meanwhile, designers and engineers will work more closely together — not through handoff, but through shared tools and live collaboration. In enterprise environments in particular, much of the engineering work is not so much about zero-to-one implementation but about working within and around established technical constraints. As front-end becomes commoditized, engineers will shift their focus further upstream to establishing strong technical foundations and systems for teams to build from.From years of experience to mindsetSome worry this shift will reduce opportunities for junior designers. It’s true there may be fewer entry-level roles focused on production work. But AI-native designers entering the field now may have an edge over seasoned professionals who are tied to traditional methods.In an AI-driven world, knowing the “right” design process won’t matter as much. Technical skills, domain expertise and a strong craft will still help, but what really counts is getting results — regardless of how you get there.The greatest danger in times of turbulence is not the turbulence, it is to act with yesterday’s logic— Peter DruckerMindset will matter more than experience. Those who adapt fast and use AI to learn new domains quickly will stand out. We are already starting to see this unfold. Tobi Lutke, CEO of Shopify recently stated that AI usage is now a baseline expectation at Shopfiy. 
He went even further, starting that “Before asking for more headcount and resources, teams must demonstrate why they cannot get what they want done using AI”.This demonstrates that adaptability and AI fluency are becoming core expectations. In this new landscape, titles and years of experience will matter less. Designers who can leverage AI as a force multiplier will outpace and outshine those relying on traditional workflows or rigid processes.Speed isn’t everythingNote that I didn’t use the word taste, which many now describe as critical in the AI era. I cringe a little when I hear it — taste feels vague and subjective, often tied to the ‘I’ll know it when I see it’ mindset the design industry has been trying to shake off for years. While I get the intent, I prefer to describe this as judgment: the ability to make calls informed by experience and grounded in clear intent, shared principles, and a solid grasp of user and technical context — not personal preference or aesthetic instinct. When you can create infinite variations almost instantly, judgment is what helps you identify what’s truly distinct, useful and worth refining.What does this mean for designing within enterprise environmentsI lead the design team at DataRobot, a platform that helps AI builders create and manage agentic, generative and predictive workflows within large enterprises. We’ve been exploring how AI tools can augment design and development across the org.Screens from the DataRobot AI platformWhile these tools are great for initial ideation, this is often only a small part of the work in enterprise environments. Here, the reality is more complex: teams work within deeply established workflows, technical frameworks, and products with large surface areas.This differs from consumer design, where teams often have more freedom to invent patterns and push visual boundaries. Enterprise design is about reliability, scalability, and trust. 
It means navigating legacy systems, aligning with highly technical stakeholders, and ensuring consistency across a broad suite of tools.For us, one of the clearest use cases for AI tooling has been accelerating early-stage concepting and customer validation. While most of our focus is on providing infrastructure to build and manage AI models, we’ve recently expanded into custom AI apps, tailored for specialized workflows across a broad range of industries and verticals. The number of UI variants we would need to support is simply too vast for traditional design processes to cover.Some examples of DataRobot applications — both production and concept.In the past, this would have meant manually designing multiple static iterations and getting feedback based on static mocks. Now, AI tools let us spin up tailored interfaces, with dynamic UI elements tailored for different industries and customer contexts, while adhering to our design system and following best practices for accessibility. Customers get to try something close to the real output and we get better signal earlier in the cycle, reducing wasted effort and resources.In this context, the strict frameworks used by tools like V0are an advantage. They provide guardrails, meaning you need to go out of your way to create a bad experience. It’s early days, but this is helping non-designers in particular to get early-stage validation with customers and prospects.This means the role of the design team is to provide the framework for others to execute, creating prompt guides that codify our design system and visual language, so that outputs remain on brand. Then we step in deeper after direction is validated. In effect, we’re shifting from execution to enablement. Design is being democratized. That’s a good thing, as long as we set the frame.Beyond the baselineAI has raised the baseline. That helps with speed and early validation, but it won’t help you break new ground. 
Generative tools are by their nature derivative.When everything trends toward average, we need new ways to raise the ceiling. Leadership in this context means knowing when to push beyond the baseline, shaping a distinct point of view grounded in reality and underpinned by strong principles. That point of view should be shaped through deep cross-functional collaboration, with a clear understanding of strategy, user needs, and the broader market.In a world where AI makes it easier than ever to build software, design is becoming more essential and more powerful. It’s craft, quality, and point of view that makes a product stand out and be loved.— Dylan FieldWhat to focus on nowFor individual contributors or those just starting out, it can feel daunting and difficult to know where to start:Start experimenting: Don’t wait for the perfect course, permission or excuse. Just jump in and run small tests. See how you can replicate previous briefsin order to get a feel for where they excel and where they break.Look for leverage: Don’t just use these tools to move faster — use them to think differently. How might you explore more directions, test ideas earlier, or involve others upstream?Contribute to the system: Consider how you might codify what works to improve patterns, prompts, or workflows. This is increasingly where high-impact work will live.If you’re leading a design team:Design the system, not just the UI: Build the tools, patterns, and prompts that others can use to move fast.Codify best practices: Think how you might translate tribal knowledge into actionable context and principles, for both internal teams and AI systems.Exercisejudgement: Train your team to recognize good from average in the context of your product. Establish a shared language for what good means in your context, and how you might elevate your baseline.Final thoughtsThe UI layer is becoming automated. That doesn’t make design less important — it makes it more critical. 
Now everyone can ship something decent, but only a great team can ship something exceptional.AI might handle the pixels, but it’s just a tool. Design’s purpose is clearer than ever: understanding users, shaping systems, and delivering better outcomes. AI tools should amplify our capabilities, not make us complacent. This means that while we integrate them into our workflows, we must continue to sharpen our core skills. What Paul Graham said about writing applies equally to design.When you lose the ability to write, you also lose some of your ability to think— Paul GrahamThis article was written with the assistance of ChatGPT 4o.John Moriarty leads the design team at DataRobot, an enterprise AI platform that helps AI practitioners to build, govern and operate predictive and generative AI models. Before this, he worked in Accenture, HMH and Design Partners.Design in the age of vibes was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story. #design #age #vibes
    UXDESIGN.CC
    Design in the age of vibes
    What the new wave of new AI design and dev tools, — Bolt, V0, Lovable, and Figma Make — mean for the future of software design.Prompt by the author, image generated by Sora.This article builds on reflections I shared last July in The expanded scope and blurring boundaries of AI-powered design, outlining what’s changed in a short time, and what it means for those designing software and leading design teams.Like many others, I’ve been exploring tools like Bolt, Lovable, V0, and most recently Figma Make, looking at how they are changing the way we build software today, and what that means for the future. For those who may not know, these tools are part of a new wave of AI-powered design and development platforms that aim to speed up how we go from prompt to prototype, automating front-end code, generating UI from prompts, and bridging the gap between design and engineering. Bolt is now the second fastest-growing product in history, just behind ChatGPT.While the AI hype hasn’t slowed since ChatGPT’s launch, it’s quickly becoming apparent that these tools represent a step change, one that is rapidly reshaping how we work, and how software gets built.A example of the Bolt.new UI interfaceThis shift didn’t start with AIEven before the recent explosion of AI tooling, design teams have been evolving their approach and expanding their scope of impact. Products like Figma enabled more fluid communication and cross-disciplinary collaboration, while design systems and front-end frameworks like Material, Tailwind, Radix and other libraries helped codify and systematise best practices for visual design, interaction an accessibility.This enabled designers to spend more time thinking about the broader systems, increasing iteration cycles — and less time debating padding. 
While such tools and frameworks helped to elevate the baseline user experience for many products, in enterprise SaaS in particular, they have had their share of criticism for the resulting sea of sameness they generated. AI tools are now accelerating and amplifying some of the consequences, both positive and negative. These products represent not just a tooling upgrade, but a shift in what design is, who does it, and how teams are built.

Design has evolved from the design of objects, both physical and immaterial, to the design of systems, to the design of complex adaptive systems. The evolution is shifting the role of designers; they are no longer the central planner but rather participants within the systems they exist in. This is a fundamental shift — one that requires a new set of values.
— Joi Ito, MIT Media Lab (Jan 2016)

What AI tools are making possible

This new wave of AI tools can generate high-quality UIs from a prompt, screenshot, or Figma frame. Work that once required a multidisciplinary team and weeks of effort — from concept to coded prototype — can now happen in a matter of hours. Best practices are baked in. Layouts are responsive by default. Interaction logic is defined in a sentence. Even connecting to real data is no longer a blocker; it’s part of the flow.

Lovable, one of the many new AI design and full-stack development tools launched recently

These tools differ from popular IDE-based assistants like Cursor, Copilot and Windsurf in both purpose and level of abstraction. UI-based tools like Bolt automate many of the more complex and often intimidating parts of the developer workflow: spinning up environments, scaffolding projects, managing dependencies, and deploying apps. That makes sense, given that many of them were built by hosting platforms such as Vercel and Replit.

With this new speed and ease of use, designers don’t need to wait on engineers to see how something feels in practice.
They can test ideas at higher fidelity faster, explore more variations, and evolve the experience in tight feedback loops.

Figma Make: start with a design and prompt your way to a functional prototype, fast — all in Figma.

This shift has also given rise to what some are calling ‘vibe coding’, a term coined by Andrej Karpathy that captures this expressive, real-time way of building software. Instead of following a strict spec or writing code line by line, you start with a vibe or loose concept, and use these tools to sculpt the idea into something functional. You prompt, tweak, adjust components, and refine until it feels right. It’s intuitive, fast, and fluid.

There’s a new kind of coding I call “vibe coding”, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It’s possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good.
— Andrej Karpathy

In this new paradigm, the output isn’t just faster; it’s driven by rapid judgement and intuition, not necessarily depth of technical experience. In addition, the barrier to entry for non-designers to explore ideas has lowered too. Now that anyone can create compelling, usable apps with front-end and back-end logic, what does that mean for design?

I would love to say that this means more time spent on outcomes and higher-impact work for designers, but it’s more likely to disrupt the foundations of what it means to be a designer. The boundaries between the classic product triad of design, engineering and product management were already blurring, and this looks set to accelerate even more.

We are in the middle of a significant industry shift, heading into a period of rapid, unpredictable change. While testing some of these new AI tools, I have had several ‘oh shit’ moments where I get a sense of how things might evolve…
this is what copywriters and others in similar writing roles must have felt when ChatGPT first came out.

The author, while vibe coding (image via Giphy)

What this might mean for design

As UI generation becomes commoditized, the value of design shifts upstream, and with it the scope of what is expected from design. Future design teams are likely to be smaller and more embedded in product strategy. As companies grow, design functions won’t necessarily need bigger design teams; they will need higher-leverage ones.

Meanwhile, designers and engineers will work more closely together — not through handoff, but through shared tools and live collaboration. In enterprise environments in particular, much of the engineering work is not so much about zero-to-one implementation as about working within and around established technical constraints. As the front end becomes commoditized, engineers will shift their focus further upstream, establishing strong technical foundations and systems for teams to build from.

From years of experience to mindset

Some worry this shift will reduce opportunities for junior designers. It’s true there may be fewer entry-level roles focused on production work. But AI-native designers entering the field now may have an edge over seasoned professionals who are tied to traditional methods.

In an AI-driven world, knowing the “right” design process won’t matter as much. Technical skills, domain expertise and strong craft will still help, but what really counts is getting results — regardless of how you get there.

The greatest danger in times of turbulence is not the turbulence, it is to act with yesterday’s logic.
— Peter Drucker

Mindset will matter more than experience. Those who adapt fast and use AI to learn new domains quickly will stand out. We are already starting to see this unfold. Tobi Lutke, CEO of Shopify, recently stated that AI usage is now a baseline expectation at Shopify.
He went even further, stating that “Before asking for more headcount and resources, teams must demonstrate why they cannot get what they want done using AI”.

This demonstrates that adaptability and AI fluency are becoming core expectations. In this new landscape, titles and years of experience will matter less. Designers who can leverage AI as a force multiplier will outpace and outshine those relying on traditional workflows or rigid processes.

Speed isn’t everything

Note that I didn’t use the word taste, which many now describe as critical in the AI era. I cringe a little when I hear it — taste feels vague and subjective, often tied to the ‘I’ll know it when I see it’ mindset the design industry has been trying to shake off for years. While I get the intent, I prefer to describe this as judgment: the ability to make calls informed by experience and grounded in clear intent, shared principles, and a solid grasp of user and technical context — not personal preference or aesthetic instinct. When you can create infinite variations almost instantly, judgment is what helps you identify what’s truly distinct, useful and worth refining.

What does this mean for designing within enterprise environments?

I lead the design team at DataRobot, a platform that helps AI builders create and manage agentic, generative and predictive workflows within large enterprises. We’ve been exploring how AI tools can augment design and development across the org.

Screens from the DataRobot AI platform

While these tools are great for initial ideation, that is often only a small part of the work in enterprise environments. Here, the reality is more complex: teams work within deeply established workflows, technical frameworks, and products with large surface areas.

This differs from consumer design, where teams often have more freedom to invent patterns and push visual boundaries. Enterprise design is about reliability, scalability, and trust.
It means navigating legacy systems, aligning with highly technical stakeholders, and ensuring consistency across a broad suite of tools.

For us, one of the clearest use cases for AI tooling has been accelerating early-stage concepting and customer validation. While most of our focus is on providing infrastructure to build and manage AI models, we’ve recently expanded into custom AI apps, tailored for specialized workflows across a broad range of industries and verticals. The number of UI variants we would need to support is simply too vast for traditional design processes to cover.

Some examples of DataRobot applications — both production and concept.

In the past, this would have meant manually designing multiple iterations and getting feedback based on static mocks. Now, AI tools let us spin up tailored interfaces, with dynamic UI elements adapted to different industries and customer contexts, while adhering to our design system and following best practices for accessibility. Customers get to try something close to the real output, and we get better signal earlier in the cycle, reducing wasted effort and resources.

In this context, the strict frameworks used by tools like V0 (such as Tailwind) are an advantage. They provide guardrails, meaning you need to go out of your way to create a bad experience. It’s early days, but this is helping non-designers in particular to get early-stage validation with customers and prospects.

This means the role of the design team is to provide the framework for others to execute: creating prompt guides that codify our design system and visual language, so that outputs remain on brand. Then we step in deeper after direction is validated. In effect, we’re shifting from execution to enablement. Design is being democratized. That’s a good thing, as long as we set the frame.

Beyond the baseline

AI has raised the baseline. That helps with speed and early validation, but it won’t help you break new ground.
Generative tools are by their nature derivative. When everything trends toward average, we need new ways to raise the ceiling. Leadership in this context means knowing when to push beyond the baseline, shaping a distinct point of view grounded in reality and underpinned by strong principles. That point of view should be shaped through deep cross-functional collaboration, with a clear understanding of strategy, user needs, and the broader market.

In a world where AI makes it easier than ever to build software, design is becoming more essential and more powerful. It’s craft, quality, and point of view that make a product stand out and be loved.
— Dylan Field

What to focus on now

For individual contributors or those just starting out, it can feel daunting and difficult to know where to start:

Start experimenting: Don’t wait for the perfect course, permission or excuse. Just jump in and run small tests. See how you can replicate previous briefs (or current briefs in parallel) to get a feel for where these tools excel and where they break.

Look for leverage: Don’t just use these tools to move faster — use them to think differently. How might you explore more directions, test ideas earlier, or involve others upstream?

Contribute to the system: Consider how you might codify what works to improve patterns, prompts, or workflows. This is increasingly where high-impact work will live.

If you’re leading a design team:

Design the system, not just the UI: Build the tools, patterns, and prompts that others can use to move fast.

Codify best practices: Think about how you might translate tribal knowledge into actionable context and principles, for both internal teams and AI systems.

Exercise (your) judgement: Train your team to recognize good from average in the context of your product. Establish a shared language for what good means in your context, and how you might elevate your baseline.

Final thoughts

The UI layer is becoming automated.
That doesn’t make design less important — it makes it more critical. Now everyone can ship something decent, but only a great team can ship something exceptional. AI might handle the pixels, but it’s just a tool. Design’s purpose is clearer than ever: understanding users, shaping systems, and delivering better outcomes. AI tools should amplify our capabilities, not make us complacent. This means that while we integrate them into our workflows, we must continue to sharpen our core skills. What Paul Graham said about writing applies equally to design.

When you lose the ability to write, you also lose some of your ability to think.
— Paul Graham

This article was written with the assistance of ChatGPT 4o.

John Moriarty leads the design team at DataRobot, an enterprise AI platform that helps AI practitioners to build, govern and operate predictive and generative AI models. Before this, he worked at Accenture, HMH and Design Partners.

Design in the age of vibes was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
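As a concrete illustration of the "prompt guides" idea the article describes (codifying a design system so that generated UI stays on brand), here is a minimal sketch. Every token, rule, and function name below is hypothetical, invented for illustration; it is not DataRobot's actual system.

```python
# Hypothetical "prompt guide": design-system constraints assembled into a
# reusable preamble that is prepended to every UI-generation prompt.
# All values are illustrative assumptions, not a real design system.

DESIGN_TOKENS = {
    "font_family": "Inter",
    "spacing_scale_px": [4, 8, 16, 24, 32],
    "primary_color": "#2D6AE3",
    "border_radius_px": 6,
}

GUIDELINES = [
    "Use only the spacing scale above; never hard-code other margins.",
    "All interactive elements need visible focus states (WCAG 2.1 AA).",
    "Prefer existing components over bespoke ones.",
]

def build_prompt_preamble(tokens: dict, guidelines: list) -> str:
    """Assemble the preamble that keeps generated UI on brand."""
    token_lines = "\n".join(f"- {k}: {v}" for k, v in tokens.items())
    rule_lines = "\n".join(f"{i}. {g}" for i, g in enumerate(guidelines, 1))
    return (
        "You are generating UI for our product. Follow these design tokens:\n"
        f"{token_lines}\n\nRules:\n{rule_lines}\n"
    )

preamble = build_prompt_preamble(DESIGN_TOKENS, GUIDELINES)
print(preamble)
```

The point of this shape is that the design team maintains the tokens and rules once, while non-designers reuse the preamble freely; the guardrails travel with every prompt.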
  • AI in business intelligence: Caveat emptor

    WWW.ARTIFICIALINTELLIGENCE-NEWS.COM
    AI in business intelligence: Caveat emptor
One of the ways in which organisations are using the latest AI algorithms to help them grow and thrive is the adoption of privately held AI models to shape their business strategies. The differentiation between private and public AI is important in this context – most organisations are rightly wary of allowing public AIs access to sensitive data sets such as HR information, financial data, and details of operational history.

It stands to reason that if an AI is given specific data on which to base its responses, its output will be more relevant, and therefore more effective in helping decision-makers judge how to strategise. Using private reasoning engines is the logical way for companies to get the best results from AI while keeping their intellectual property safe.

Enterprise-specific data and the ability to fine-tune a local AI model give organisations the ability to produce bespoke forecasting and operational tuning that are more grounded in the day-to-day reality of a company’s work. A Deloitte Strategy Insight paper calls private AI a “bespoke compass” and frames the use of internal data as a competitive advantage, while Accenture describes AIs as “poised to provide the most significant economic uplift and change to work since the agricultural and industrial revolutions.”

There is the possibility, however, that private AI, like traditional business intelligence drawing on historical data from several years of operations across the enterprise, can entrench decision-making in patterns from the past.
McKinsey says companies are in danger of “mirroring their institutional past in algorithmic amber.” The Harvard Business Review picks up on some of the technical complexity, stating that customising a model so that its activities are more relevant to the company is difficult, and perhaps, therefore, not a task to be taken on by any but the most AI-literate in data science and programming.

MIT Sloan strikes a balance between the fervent advocates and the conservative voices for private AI in business strategising. It advises that AI be regarded as a co-pilot, and urges continual questioning and verification of AI output, especially when the stakes are high.

Believe in the revolution

However, decision-makers considering pursuing this course of action (getting on the AI wave, but doing so in a private, safety-conscious way) may wish to consider the motivations of those sources of advice that advocate strongly for AI enablement in this way.

Deloitte, for example, builds and manages AI solutions for clients using custom infrastructure such as its factory-as-a-service offerings, while Accenture has practices dedicated to its clients’ AI strategy, such as Accenture Applied Intelligence. Accenture partners with AWS and Azure, building bespoke AI systems for Fortune 500 companies, among others, and Deloitte partners with Oracle and Nvidia.

With ‘skin in the game’, phrases such as “the most significant […] change to work since the agricultural and industrial revolutions” and a “bespoke compass” are inspiring, but the vendors’ motivations may not be entirely altruistic.

Advocates for AI in general rightly point to the ability of models to identify trends and statistical undercurrents much more efficiently than humans. Given the mass of data available to the modern enterprise, comprising both internal and externally available information, having software that can parse data at scale is an incredible advantage.
Instead of manually creating analyses of huge repositories of data – which is time-consuming and error-prone – AI can see through the chaff and surface real, actionable insights.

Asking the right questions

Additionally, AI models can interpret queries couched in normal language, and make predictions based on empirical information which, in the context of private AIs, is highly relevant to the organisation. Relatively unskilled personnel can query data without skills in statistical analysis or database query languages, and get answers that would otherwise have involved multiple teams and skill sets drawn from across the enterprise. That time-saving alone is considerable, letting organisations focus on strategy rather than assembling the necessary data points and manually querying the information they’ve managed to gather.

Both McKinsey and Gartner warn, however, of overconfidence and data obsolescence. On the latter, historical data may not be relevant to strategising, especially if records go back several years. Overconfidence, in the context of AI, is perhaps best described as operators trusting AI responses without question, not delving independently into the detail of responses, or in some cases taking as fact the responses to badly phrased queries.

For any software algorithm, human phrases such as “base your findings on our historical data” are open to interpretation, unlike, for example, “base your findings on the last twelve months’ sales data, ignoring outliers that differ from the mean by over 30%, although do state those instances for me to consider.”

Software of experience

Organisations might pursue private AI solutions alongside mature, existing business intelligence platforms. SAP BusinessObjects is nearly 30 years old, yet a youngster compared to SAS Business Intelligence, which has been around since before the internet became mainstream in the 1990s.
Even relative newcomers such as Microsoft Power BI represent at least a decade of development, iteration, customer feedback, and real-world use in business analysis. It seems sensible, therefore, that private AI’s deployment on business data should be regarded as an addition to the strategiser’s toolkit, rather than a silver bullet that replaces “traditional” tools.

For users of private AI who have the capacity to audit and tweak their model’s inputs and inner algorithms, retaining human control and oversight is important – just as it is with tools like Oracle’s Business Intelligence suite. There are some scenarios where the intelligent processing of, and acting on, real-time data (online retail pricing mechanisms, for example) gives AI analysis a competitive edge over the incumbent BI platforms. But AI has yet to develop into a magical Swiss Army Knife for business strategy.

Until AI purposed for business data analysis is as developed, iterated on, battle-hardened, and mature as some of the market’s go-to BI platforms, early adopters might temper the enthusiasm of AI and AI service vendors with practical experience and a critical eye. AI is a new tool, and one with a great deal of potential. However, it remains first-generation in its current guises, public and private.

(Image source: “It’s about rules and strategy” by pshutterbug is licensed under CC BY 2.0.)
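The gap between the vague and the precise request quoted above can be made concrete. Below is a minimal Python sketch of what the precise query actually pins down: a twelve-month window, an outlier rule of 30% from the mean, and a separate report of the flagged rows. The data here is synthetic and the "as of" date is an assumption, purely for illustration.

```python
# Sketch of the precise query: "the last twelve months' sales data,
# ignoring outliers that differ from the mean by over 30%, although
# do state those instances for me to consider". Synthetic data only.
from datetime import date, timedelta
from statistics import mean

today = date(2025, 6, 1)  # assumed "as of" date for the example
amounts = [100, 110, 95, 400, 105, 90, 98, 102, 5,
           101, 99, 97, 250, 100, 100, 100, 100, 100]
sales = [(today - timedelta(days=30 * i), amt)
         for i, amt in enumerate(amounts)]  # roughly monthly rows

# 1. Restrict to the last twelve months.
cutoff = today - timedelta(days=365)
recent = [(d, a) for d, a in sales if d >= cutoff]

# 2. Flag outliers differing from the window mean by more than 30%.
mu = mean(a for _, a in recent)
outliers = [(d, a) for d, a in recent if abs(a - mu) / mu > 0.30]
kept = [(d, a) for d, a in recent if abs(a - mu) / mu <= 0.30]

# 3. Report both, as the query requested.
print(f"mean of window: {mu:.1f}")
print(f"kept {len(kept)} rows; flagged outliers: {sorted(a for _, a in outliers)}")
```

Every clause of the precise phrasing maps to one explicit, checkable step, which is exactly what the vague phrasing ("our historical data") leaves to the model's interpretation.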
  • 5 BCDR Essentials for Effective Ransomware Defense

    May 15, 2025The Hacker NewsRansomware Defense / Business Continuity

    Ransomware has evolved into a deceptive, highly coordinated and dangerously sophisticated threat capable of crippling organizations of any size. Cybercriminals now exploit even legitimate IT tools to infiltrate networks and launch ransomware attacks. In a chilling example, Microsoft recently disclosed how threat actors misused its Quick Assist remote assistance tool to deploy the destructive Black Basta ransomware strain. And what’s worse? Innovations like Ransomware-as-a-Service (RaaS) are lowering the bar for entry, making ransomware attacks more frequent and far-reaching than ever before. According to Cybersecurity Ventures, by 2031, a new ransomware attack is expected every 2 seconds, with projected damages hitting an astronomical $275 billion annually.
    No organization is immune to ransomware, and building a strong recovery strategy is equally, if not even more, important than attempting to prevent all attacks in the first place. A solid business continuity and disaster recovery (BCDR) strategy can be your last and most critical line of defense when ransomware breaks through, allowing you to bounce back quickly from the attack, resume operations and avoid paying ransom. Notably, the cost of investing in BCDR is negligible compared to the devastation that prolonged downtime or data loss can cause.
    In this article, we’ll break down the five essential BCDR capabilities you should have in place to effectively recover from ransomware. These strategies can mean the difference between swift recovery and business failure after an attack. Let’s explore what every organization must do before it’s too late.
    Follow the 3-2-1 (and then some!) backup rule

    The 3-2-1 backup rule has long been the gold standard: keep three copies of your data, store them on two different media and keep one copy off-site. But in the age of ransomware, that’s no longer enough.
    Experts now recommend the 3-2-1-1-0 strategy. The extra 1 stands for one immutable copy — a backup that can’t be changed or deleted. The 0 represents zero doubt in your ability to recover, with verified, tested recovery points.
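The 3-2-1-1-0 rule above lends itself to a quick self-check. The following is a minimal, product-agnostic sketch (the `BackupCopy` fields are hypothetical, not drawn from any particular backup tool) showing how each digit of the rule maps to a concrete condition:

```python
# Hypothetical sketch: check a backup inventory against the 3-2-1-1-0 rule.
from dataclasses import dataclass

@dataclass
class BackupCopy:
    media: str        # e.g. "disk", "tape", "cloud" (illustrative labels)
    offsite: bool     # stored away from the primary site?
    immutable: bool   # cannot be changed or deleted?
    verified: bool    # last test restore from this copy succeeded?

def meets_3_2_1_1_0(copies: list[BackupCopy]) -> bool:
    return (
        len(copies) >= 3                          # 3: at least three copies
        and len({c.media for c in copies}) >= 2   # 2: on two media types
        and any(c.offsite for c in copies)        # 1: one copy off-site
        and any(c.immutable for c in copies)      # 1: one immutable copy
        and all(c.verified for c in copies)       # 0: zero doubt -- all tested
    )
```

An inventory with three verified copies on disk, cloud and tape, where the cloud copy is off-site and immutable, would pass; drop the immutable copy and the check fails.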
    Why the upgrade? Ransomware doesn’t just target production systems anymore. It actively seeks and encrypts backups as well. That’s why isolation, immutability and verification are key. Cloud-based and air-gapped backup storage provide essential layers of protection, keeping backups out of reach from threats that even use stolen admin credentials.
    Having such immutable backups ensures recovery points remain untampered, no matter what. They’re your safety net when everything else is compromised. Plus, this level of data protection helps meet rising cyber insurance standards and compliance obligations.
    Bonus tip: Look for solutions offering a hardened Linux architecture to camouflage and isolate backups outside of the common Windows attack surface.
    Automate and monitor backups continuously
    Automation is powerful, but without active monitoring, it can become your biggest blind spot. While scheduling backups and automating verification saves time, it’s just as important to ensure that those backups are actually happening and that they’re usable.
    Use built-in tools or custom scripting to monitor backup jobs, trigger alerts on failures and verify the integrity of your recovery points. It’s simple: either monitor continuously or risk finding out too late that your backups never had your back. Regularly testing and validating the recovery points is the only way to trust your recovery plan.
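As a concrete illustration of the custom-scripting route, here is a minimal sketch that flags failed, corrupt or stale backup jobs from a job-status feed. The dictionary fields (`status`, `finished`, `checksum_ok`) are assumptions for illustration, not the schema of any real backup product:

```python
# Hypothetical sketch: flag backup jobs that need attention.
from datetime import datetime, timedelta

def jobs_needing_attention(jobs, max_age_hours=24, now=None):
    """jobs: iterable of dicts like {"name": str, "status": "ok"/"failed",
    "finished": datetime, "checksum_ok": bool}. Returns (name, reason) pairs."""
    now = now or datetime.utcnow()
    alerts = []
    for job in jobs:
        if job["status"] != "ok":
            alerts.append((job["name"], "job failed"))
        elif not job["checksum_ok"]:
            alerts.append((job["name"], "integrity check failed"))
        elif now - job["finished"] > timedelta(hours=max_age_hours):
            alerts.append((job["name"], "backup is stale"))
    return alerts
```

In practice the returned pairs would be pushed to an alerting channel or ticketing system rather than merely collected.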
    Bonus tip: Choose solutions that integrate with professional services automation (PSA) ticketing systems to automatically raise alerts and tickets for any backup hiccups.
    Protect your backup infrastructure from ransomware and internal threats
    Your backup infrastructure must be isolated, hardened and tightly controlled to prevent unauthorized access or tampering. You must:

    Lock down your backup network environment.
    Host your backup server in a secure local area network (LAN) segment with no inbound internet access.
    Allow outbound communication from the backup server only to approved vendor networks. Block all unapproved outbound traffic using strict firewall rules.
    Permit communication only between protected systems and the backup server.
    Use firewalls and port-based access control lists (ACLs) on network switches to enforce granular access control.
    Apply agent-level encryption so data is protected at rest, using keys generated from a secure passphrase only you control.
    Enforce strict access controls and authentication.
    Implement role-based access control (RBAC) with least-privilege roles for Tier 1 techs.
    Ensure multifactor authentication (MFA) for all access to the backup management console.
    Monitor audit logs continuously for privilege escalations or unauthorized role changes.
    Ensure audit logs are immutable.

    Review regularly for:

    Security-related events like failed logins, privilege escalations, deletion of backups and device removal.
    Administrative actions such as changes to backup schedules, changes to retention settings, new user creation and changes to user roles.
    Backup and backup copy (replication) success/failure rates and backup verification success/failure rates.
    Stay alert to serious risks.
    Configure automatic alerts for policy violations and high-severity security events, such as an unauthorized change to backup retention policies.
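The audit-log review above can be partially automated. This is a minimal sketch, assuming a hypothetical event schema (`type` field with illustrative category names), of filtering a log stream down to the high-severity events worth an immediate alert:

```python
# Hypothetical sketch: surface high-severity events from an audit log.
# Category names are illustrative, not from any specific product's schema.
HIGH_SEVERITY = {
    "failed_login",
    "privilege_escalation",
    "backup_deleted",
    "retention_policy_changed",
}

def review_audit_log(events):
    """events: iterable of dicts like {"type": str, "user": str, "when": str}.
    Returns only the events that should trigger an automatic alert."""
    return [e for e in events if e["type"] in HIGH_SEVERITY]
```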

    Test restores regularly and include them in your DR plan
    Backups mean nothing if you can’t restore from them quickly and completely, and that’s why regular testing is essential. Recovery drills must be scheduled and integrated into your disaster recovery (DR) plan. The goal is to build muscle memory, reveal weaknesses and confirm that your recovery plan actually works under pressure.
    Start by defining the recovery time objective (RTO) and the recovery point objective (RPO) for every system. These determine how fast and how recent your recoverable data needs to be. Testing against those targets helps ensure your strategy aligns with business expectations.
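Scoring a recovery drill against those two targets is a simple comparison. A minimal sketch (the parameter names are illustrative):

```python
# Hypothetical sketch: did a restore drill meet its RTO/RPO targets?
def drill_meets_targets(restore_minutes, data_age_minutes,
                        rto_minutes, rpo_minutes):
    """restore_minutes: how long the restore took (checked against the RTO).
    data_age_minutes: how old the recovered data was (checked against the RPO)."""
    return restore_minutes <= rto_minutes and data_age_minutes <= rpo_minutes
```

For example, a drill that restores a system in 45 minutes from a backup 30 minutes old meets a 60-minute RTO and 60-minute RPO; a 90-minute restore against the same targets does not.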
    Importantly, don’t limit testing to one type of restore. Simulate file-level recoveries, full bare-metal restores and full-scale cloud failovers. Each scenario uncovers different vulnerabilities, such as time delays, compatibility issues or infrastructure gaps.
    Also, recovery is more than a technical task. Involve stakeholders across departments to test communication protocols, role responsibilities and customer-facing impacts. Who talks to clients? Who triggers the internal chain of command? Everyone should know their role when every second counts.
    Detect threats early with backup-level visibility
    When it comes to ransomware, speed of detection is everything. While endpoint and network tools often get the spotlight, your backup layer is also a powerful, often overlooked line of defense. Monitoring backup data for anomalies can reveal early signs of ransomware activity, giving you a critical head start before widespread damage occurs.
    Backup-level visibility allows you to detect telltale signs like sudden encryption, mass deletions or abnormal file modifications. For example, if a process begins overwriting file contents with random data while leaving all modified timestamps intact, that’s a major red flag. No legitimate program behaves that way. With smart detection at the backup layer, you can catch these behaviors and get alerted immediately.
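One common signal behind such detection is byte entropy: encrypted or random data is close to 8 bits of entropy per byte, while ordinary documents sit well below that. The following is a simplified sketch of that idea, not how any particular backup product implements it, and the 7.5 threshold is an illustrative assumption:

```python
# Hypothetical sketch: flag file contents that look encrypted (high entropy),
# one signal a backup layer can use to spot ransomware activity.
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    # Encrypted/random data approaches 8 bits per byte; text and most
    # documents score far lower. Threshold is illustrative.
    return shannon_entropy(data) > threshold
```

A real detector would combine this with other signals (mass deletions, rename storms, unchanged timestamps on modified files) to keep false positives low, since compressed media is also high-entropy.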
    This capability doesn’t replace your endpoint detection and response (EDR) or antivirus (AV) solutions; it supercharges them. It speeds up triage, helps isolate compromised systems faster and reduces an attack’s overall blast radius.
    For maximum impact, choose backup solutions that offer real-time anomaly detection and support integration with your security information and event management (SIEM) or centralized logging systems. The faster you see the threat, the faster you can act — and that can be the difference between a minor disruption and a major disaster.
    Bonus tip: Train end users to recognize and report suspicious activity early
    If BCDR is your last line of defense, your end users are the first. Cybercriminals are increasingly targeting end users today. According to the Microsoft Digital Defense Report 2024, threat actors are trying to access user credentials through various methods, such as phishing, malware and brute-force/password spray attacks. Over the last year, around 7,000 password attacks were blocked per second in Entra ID alone.
    In fact, ransomware attacks often begin with a single click, usually via phishing emails or compromised credentials. Regular security training — especially simulated phishing exercises — helps build awareness of red flags and risky behaviors. Equip your team with the knowledge to spot ransomware warning signs, recognize unsafe data practices and respond appropriately.
    Encourage immediate reporting of anything that seems off. Foster a culture of enablement, not blame. When people feel safe to speak up, they’re more likely to take action. You can even take it further by launching internal programs that reward vigilance, such as a Cybersecurity Hero initiative to recognize and celebrate early reporters of potential threats.
    Final thoughts
    Ransomware doesn’t have to be feared; it has to be planned for. The five BCDR capabilities we discussed above will equip you to withstand even the most advanced ransomware threats and ensure your organization can recover quickly, completely and confidently.
    To seamlessly implement these strategies, consider Datto BCDR, a unified platform that integrates all these capabilities. It’s built to help you stay resilient, no matter what happens. Don’t wait for a ransom note to discover that your backups weren’t enough. Explore how Datto can strengthen your ransomware resilience. Get custom Datto BCDR pricing today.

    This article is a contributed piece from one of our valued partners.
